Yann LeCun, Meta’s chief AI scientist, has sharply criticized the industry’s overwhelming focus on large language models. While he acknowledges that LLMs are powerful tools, he argues they are fundamentally limited and cannot lead to true, human-level intelligence. In his view, the current AI landscape is overly centered on scaling these models, leaving little room for exploring deeper and more promising approaches.
LeCun contends that the dominance of LLMs has diverted funding and talent from other crucial areas of AI research. He believes the field needs to move beyond systems that simply predict text and instead invest in architectures capable of understanding the physical world. According to him, genuine progress will require AI that can perceive, reason, and learn through interaction — abilities that LLMs inherently lack.
This perspective aligns with his long-standing advocacy for “world models”: AI systems designed to learn, from sensory inputs such as images and video, how objects move and how real-world dynamics unfold. Such models aim to build internal representations of how the world works, enabling cause-and-effect reasoning and advanced planning. These capabilities, LeCun argues, are essential for developing autonomous systems and machines that can operate reliably in complex environments.
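To make the idea concrete, the sketch below is a deliberately minimal illustration of a learned world model: an encoder compresses an observation into a compact latent state, and a dynamics network predicts the next latent state given an action. This is not LeCun’s actual research architecture; the module names, dimensions, and training objective here are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Toy world-model sketch (illustrative assumptions throughout):
# an encoder maps raw observations to a latent state, and a dynamics
# network predicts how that state changes when an action is taken.
OBS_DIM, ACTION_DIM, LATENT_DIM = 64, 4, 16

class Encoder(nn.Module):
    """Maps a raw sensory observation to a compact latent state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM)
        )

    def forward(self, obs):
        return self.net(obs)

class DynamicsModel(nn.Module):
    """Predicts the next latent state from the current state and an action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

encoder, dynamics = Encoder(), DynamicsModel()
optimizer = torch.optim.Adam(
    [*encoder.parameters(), *dynamics.parameters()], lr=1e-3
)

# One training step on a random batch of (obs, action, next_obs) transitions;
# real systems would use recorded interaction data instead of random tensors.
obs = torch.randn(32, OBS_DIM)
action = torch.randn(32, ACTION_DIM)
next_obs = torch.randn(32, OBS_DIM)

optimizer.zero_grad()
predicted_next = dynamics(encoder(obs), action)
# Predict the next observation's latent state; matching in latent space is
# one common (assumed) choice of objective for this kind of model.
loss = nn.functional.mse_loss(predicted_next, encoder(next_obs).detach())
loss.backward()
optimizer.step()
```

Once trained, a model like this could, in principle, be rolled forward over candidate action sequences so a planner can compare their predicted outcomes before acting, which is the kind of cause-and-effect reasoning and planning the paragraph above describes.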
His critique gains additional weight amid reports that he may soon leave Meta to build a startup focused on these next-generation AI architectures. This move underscores his belief that the field must shift toward richer, more grounded forms of intelligence. LeCun warns that if AI research continues to revolve solely around scaling LLMs, the industry risks missing the breakthroughs needed to achieve truly intelligent systems.