AGI is a pipe dream until we solve one big problem, AI experts say, even as Google celebrates Gemini's success
Top AI minds say bigger models aren't smarter ones
By Eric Hal Schwartz, TechRadar, December 9, 2025
- AI researchers at NeurIPS 2025 say today’s scaling approach has hit its limit
- Despite Gemini 3’s strong performance, experts argue that LLMs still can’t reason or understand cause and effect
- AGI remains far off without a fundamental overhaul in how AI is built and trained
Recent successes by AI models like Gemini 3 don't disguise the more sobering message that emerged this week at the NeurIPS 2025 AI conference: that we might be building AI skyscrapers on intellectual sand.
While Google celebrated its latest model’s performance leap, researchers at the world’s biggest AI conference issued a warning: no matter how impressive the current crop of large language models may look, the dream of artificial general intelligence is slipping further away unless the field rethinks its entire foundation.
The researchers broadly agreed that simply scaling today's transformer models, giving them more data, more GPUs, and more training time, is no longer delivering meaningful returns. The big leap from GPT-3 to GPT-4 is increasingly seen as a one-off; everything since has felt less like breaking glass ceilings than merely polishing the glass.