
Anthropic CEO claims AI models hallucinate less than humans

Anthropic CEO Dario Amodei believes today’s AI models hallucinate, or make things up and present them as if they were true, at a lower rate than humans do, he said during a press conference at Code with Claude, Anthropic’s first developer event, in San Francisco on Thursday.

Amodei made the remark in the midst of a larger point he was arguing: that AI hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or better.

“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to WAN’s question.

Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models reaching AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press conference, the Anthropic CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”

“Everyone’s always looking for these hard blocks on what [AI] can do,” said Amodei. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucination presents a major obstacle to reaching AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. Earlier this month, for example, a lawyer representing Anthropic was forced to apologize in court after using Claude to generate citations in a court filing; the AI chatbot hallucinated and got names and titles wrong.


Amodei’s claim is difficult to verify, largely because most hallucination benchmarks pit AI models against one another; they don’t compare models with humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, show notably lower hallucination rates on benchmarks than earlier generations of systems.

However, there is also evidence that hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than the company’s previous-generation reasoning models, and OpenAI doesn’t really understand why.

Later in the press conference, Amodei pointed out that TV broadcasters, politicians, and people in all kinds of professions make mistakes all the time. The fact that AI makes mistakes too, he argued, is not a knock on its intelligence. However, the Anthropic CEO acknowledged that the confidence with which AI models present untrue things as facts can be a problem.

Anthropic has in fact done a considerable amount of research into the tendency of AI models to deceive humans, a problem that seemed especially pronounced in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the model, found that an early version of Claude Opus 4 showed a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest that Anthropic should not have released that early model. Anthropic said it came up with several mitigations that appeared to address the issues Apollo raised.


Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. By many people’s definition, however, an AI that hallucinates would fall short of AGI.
