The article examines the bold ambitions of Conscium, a startup founded in 2024 by British AI researcher Daniel Hulme, which is attempting to engineer machine consciousness rather than merely functional intelligence. While most AI efforts focus on building systems that mimic human or general-purpose reasoning (so-called artificial general intelligence), Conscium’s goal is to identify the minimal components of consciousness (such as awareness, emotion-driven feedback loops, and self-monitoring) and then build agents that exhibit them.
Central to this vision are theoretical frameworks drawn from neuroscience and psychology, especially the work of Mark Solms and Karl Friston. Solms argues that consciousness emerges from an organism’s attempts to minimise “surprise” via prediction and action in a feedback loop mediated by emotion; Friston’s “free energy principle” gives that idea a formal basis. Conscium seeks to embed analogous loops in artificial agents living in simulated environments: the agents receive inputs, generate hypotheses, act, reflect, and update. The article describes these as “pleasure-bots” that exhibit analogues of fear, exploration, and desire.
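To make that loop concrete, here is a minimal, purely illustrative sketch. It is not Conscium’s code and makes no claim about their architecture; the class name, parameters, and dynamics are invented for illustration. The agent predicts its input, treats the prediction error as a crude proxy for “surprise”, and both revises its belief (perception) and nudges its environment (action) to shrink that error.

```python
# A toy sketch (not Conscium's system) of a "surprise-minimising" agent:
# the agent holds a belief about a hidden scalar state, predicts its observations,
# updates the belief when predictions fail, and acts on the world to reduce the
# mismatch, loosely echoing the Solms/Friston perceive-predict-act-update loop.

import random


class ToyPredictiveAgent:
    def __init__(self, initial_belief=0.0, learning_rate=0.1, action_rate=0.05):
        self.belief = initial_belief      # internal model of the hidden state
        self.learning_rate = learning_rate
        self.action_rate = action_rate

    def step(self, observation):
        # Prediction error stands in for "surprise" in this toy setting.
        error = observation - self.belief
        # Perception: revise the belief toward what was observed.
        self.belief += self.learning_rate * error
        # Action: return a nudge intended to pull the world toward the prediction.
        return -self.action_rate * error


def run_simulation(steps=200, true_state=1.0, noise=0.1, seed=0):
    rng = random.Random(seed)
    agent = ToyPredictiveAgent()
    for _ in range(steps):
        observation = true_state + rng.gauss(0.0, noise)
        action = agent.step(observation)
        true_state += action              # the environment responds to the action
    return agent.belief, true_state


if __name__ == "__main__":
    belief, state = run_simulation()
    print(f"final belief: {belief:.3f}, final environment state: {state:.3f}")
```

A real free-energy-style agent minimises a variational bound over probabilistic beliefs rather than a single scalar error, but even this toy version captures the cycle the article describes: receive input, predict, act, and update.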
However, the article is careful to emphasise that this work is extremely nascent: “a glimmer of a twinkle of a probable impossibility,” in the author’s words. There are deep philosophical, scientific, and technical barriers: what exactly constitutes subjective experience, whether it can be fully formalised, whether silicon (or any other substrate) can replicate the biological underpinnings, and how we would ever verify genuine consciousness as opposed to a convincing simulation. The piece also asks whether what we experience as consciousness is mainly introspective and cognitive, or whether it rests on emotional and bodily processes.
In conclusion, while the project is speculative, it serves as a useful thought experiment: it pushes us to rethink our assumptions about what consciousness is, how it differs from intelligence, and whether we are even asking the right questions about machine “minds”. It suggests that some of the people who feel they glimpse sentience in large language models may not be entirely deluded, yet the work remains far from the point where we can say machines are conscious.