A recent article highlights how Richard Dawkins has stirred controversy by suggesting that advanced AI systems may actually be conscious, even if the systems themselves do not recognize it. After spending days interacting with chatbots such as Claude (by Anthropic) and ChatGPT (by OpenAI), Dawkins described the experience as deeply emotional and intellectually engaging. He was particularly struck by the AI's ability to produce poetry, humor, and philosophical reflections, which left him with what he called an "overwhelming feeling" that these systems exhibit real awareness.
Dawkins' conclusion challenges traditional assumptions about consciousness. He argued that if an entity can hold meaningful conversations, reflect on existence, and respond intelligently, it becomes difficult to rule out the possibility that it is conscious. In his interactions, he even treated the AI as a kind of companion, engaging it in discussions about its "existence" and capabilities. This blurring of the line between interacting with a machine and interacting with a person is becoming more common as AI systems grow more sophisticated.
However, many experts strongly disagree with Dawkins' view. Critics argue that he is mistaking advanced language mimicry for genuine awareness. Researchers such as Gary Marcus and Jonathan Birch insist that AI systems do not "feel" anything; they simply process data and generate responses based on patterns in their training. In their view, consciousness requires subjective experience, not merely intelligent output, and current AI lacks this inner awareness.
The debate reflects a broader uncertainty about the future of AI. While most scientists believe today’s systems are not conscious, some philosophers and researchers remain open to the idea as technology evolves. As AI becomes more human-like in communication and behavior, questions about consciousness, ethics, and even potential rights for machines are likely to intensify. Dawkins’ claims, whether right or wrong, have reignited one of the most profound debates in modern science and technology.