A Forbes exploration by AI expert Dr. Lance Eliot dives into a peculiar yet revealing experiment: prompting large language models (LLMs) to simulate the effects of psychedelic drugs like LSD or psilocybin by instructing them to “act high,” and analysing what that tells us about AI and human cognition. The analysis draws on a new study that used a dual-metric evaluation framework to compare thousands of LLM-generated narratives under neutral and psychedelic-induction prompts. The study found that, given the right instructions, models can produce text that closely resembles human descriptions of psychedelic experiences.
In the study, researchers compared AI output from models such as ChatGPT-5, Gemini 2.5, Claude Sonnet 3.5, LLaMA-2 70B, and Falcon 40B against over 1,000 human trip reports on psychedelics. Under psychedelic-styled prompts — for example, asking the model to describe a first-person experience of taking 100 µg of LSD — the AI’s outputs shifted markedly in language patterns and narrative style, moving closer in semantic similarity to the human reports and scoring higher on metrics associated with descriptions of mystical or altered states. However, the work emphasises that this does not mean the AI “feels” anything. The models are simply better at simulating the human narrative patterns associated with altered states because their training data includes many such texts.
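The article does not detail how the study computed semantic similarity; published work of this kind typically embeds texts and compares them with cosine similarity. As a rough, self-contained illustration of that idea — using simple bag-of-words vectors as a stand-in for the study's actual (presumably embedding-based) metric, with invented snippets in place of real trip reports — the comparison might look like this:

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words token counts (a crude stand-in for embeddings)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


# Hypothetical snippets standing in for a human trip report and two model outputs.
human_report = "The walls breathed and colors melted into fractal patterns of light."
neutral_output = "I walked through the room and noticed the plain white walls."
induced_output = (
    "The walls seemed to breathe, melting into shifting fractal patterns "
    "of color and light."
)

human_vec = tokenize(human_report)
sim_neutral = cosine_similarity(human_vec, tokenize(neutral_output))
sim_induced = cosine_similarity(human_vec, tokenize(induced_output))
print(f"neutral prompt similarity: {sim_neutral:.3f}")
print(f"induced prompt similarity: {sim_induced:.3f}")
```

Under this toy metric, the psychedelic-styled output scores closer to the human report than the neutral one — the same directional shift the study reports, though the study's evaluation is far richer than word overlap.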
The article stresses that while these experiments can be fun or intellectually intriguing, they also illuminate important truths about what LLMs actually are: pattern-matching machines without genuine sensations, consciousness, or phenomenological experience. Dr. Eliot explains that the AI’s ability to mimic “high-like” language reflects its exposure to human descriptions of such states — not an internal experiential or biochemical reaction akin to a human taking drugs. In other words, the AI’s response under psychedelic prompts is a simulation of the form of such reports rather than authentic altered cognition.
Beyond the oddity of the experiment, there are broader mental health considerations. Because people commonly use generative AI for mental health guidance or introspection — often unconsciously anthropomorphising these systems — experiments like this raise questions about how AI communicates emotional and psychological concepts. Experts caution that while AI can be a useful tool for exploring human narrative and language, its outputs should not be mistaken for real insight into consciousness or used as a substitute for professional mental health care.