Anthropic's AI chatbot, Claude, can now search the web, a capability that had long been missing.
Web search is available in preview for Claude's paid users in the US, Anthropic said in a blog post, with support for free users and additional countries coming soon. Users can toggle on web search in their profile settings in the Claude web app, and Claude will automatically search the web to inform certain answers.
For now, web search only works with the latest Anthropic model powering Claude, Claude 3.7 Sonnet, the company said.
“When Claude incorporates information from the web in its responses, it provides direct citations so you can easily fact-check sources,” the company wrote in its blog post. “Instead of finding search results yourself, Claude processes and delivers relevant sources in a conversational format. This enhancement expands Claude’s extensive knowledge base with real-time insights, providing answers based on more current information.”
In my brief testing of the feature, web search didn’t consistently trigger for questions about current events. But when it did, Claude indeed gave an answer with inline citations, drawing from sources including social media (e.g., X) and news outlets such as NPR and Reuters.
Claude’s ability to search the web brings it to feature parity with most rival chatbots, including OpenAI’s ChatGPT, Google’s Gemini, and Mistral’s Le Chat. Anthropic’s earlier argument against web search was that Claude was “designed to be self-contained.” No doubt competitive pressure had something to do with the change of course.
Of course, the risk is that Claude hallucinates or incorrectly cites sources from the web. Other chatbots suffer from this. According to a recent study from the Tow Center for Digital Journalism, popular chatbots, including ChatGPT and Gemini, provide incorrect answers to more than 60% of queries. A separate report from The Guardian found that ChatGPT’s search-focused experience, ChatGPT Search, could be tricked into generating entirely misleading summaries.