DeepMind Chief Says AI Can Solve Olympiad Problems but Fails at Basic Math

Google DeepMind CEO Demis Hassabis has warned that "inconsistency" is the biggest weakness in AI today. He says the most advanced AI systems can win difficult math competitions yet still make minor school-level errors, and that without addressing this weakness, reaching true AGI will be difficult.

Sun, 19 Oct 2025 10:20 PM (IST)

Google DeepMind CEO Demis Hassabis has cautioned about a major flaw in artificial intelligence: "inconsistency." Speaking on a "Google for Developers" podcast, he described how the most sophisticated AI systems available today can succeed in demanding contests such as the International Mathematical Olympiad, yet still get simple school-level problems wrong. This is a flaw, he said, that must be overcome before AGI (Artificial General Intelligence) can emerge.

Hassabis said, "The system should not make such simple errors that any layperson can immediately spot them." He noted that Google's Gemini models, which incorporate DeepThink technology, are capable of winning gold medals but "still make minor mistakes in high school math."

Hassabis described current AI as "uneven" intelligence: exceptional performance in some tasks alongside weaknesses in others. This echoes the idea Google CEO Sundar Pichai previously called "AJI" (artificial jagged intelligence), meaning intelligence whose abilities are not uniform across tasks.

The DeepMind chief stated that simply increasing data or computing power is not enough to address this inconsistency. He explained, "There are still gaps in capabilities like reasoning, planning, and memory." He added that understanding this challenge requires better testing methods and "new, rigorous benchmarks" that can accurately measure AI's strengths and weaknesses.


Hassabis predicted in April that AGI could arrive within the "next five to ten years," but now admits that significant challenges remain. His concerns echo a recent statement by OpenAI CEO Sam Altman, who said after the launch of GPT-5 that the model lacks the continuous learning capability necessary for true AGI.

These warnings make clear that AI leaders believe true human-level reasoning cannot be achieved unless the weaknesses of current systems, such as hallucinations, misinformation, and basic errors, are addressed. It is the kind of warning sign that social media platforms missed in their early years, and for which they subsequently paid a heavy price.

Muskan Kumawat Journalist & Writer