A recent study examined a common assumption about artificial intelligence: that it “knows,” “thinks,” or “understands” things the way humans do. The researchers found that discussions of AI often rely on human-like language (anthropomorphism), which can unintentionally lead people to believe that AI has thoughts, intentions, or awareness. In reality, AI does not think; it processes patterns in data to generate outputs.
The study explains that phrases like “AI knows” or “AI decided” can create a false impression of intelligence and independence. Such phrasing suggests that AI systems have beliefs or reasoning abilities, when in fact they are statistical models trained on large datasets. This misunderstanding can lead to unrealistic expectations about what AI can do and how reliable it is.
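To make the “statistical patterns” point concrete, here is a minimal sketch of a toy word-level model, an illustration chosen for this summary rather than a method from the study. It simply counts which words followed which in its training text and samples the next word from those counts; nothing in the process involves belief or intent.

    import random
    from collections import Counter, defaultdict

    # Toy "language model": learn which word tends to follow which
    # in a tiny training text, then generate by sampling from those counts.
    training_text = "the cat sat on the mat and the cat ate the fish"
    words = training_text.split()

    follow_counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follow_counts[current][nxt] += 1

    def generate(start, length=6):
        """Extend `start` by repeatedly sampling the next word in proportion
        to how often it followed the current word in training. There are no
        beliefs or intentions here, only frequency counts."""
        out = [start]
        word = start
        for _ in range(length):
            counts = follow_counts[word]
            if not counts:  # dead end: this word was never followed by anything
                break
            word = random.choices(list(counts.keys()),
                                  weights=list(counts.values()))[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat and the cat"

Real systems are vastly larger and use neural networks rather than simple counts, but the underlying principle is the same: the output is driven by learned statistics, not by understanding.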
Interestingly, the researchers found that professional news writers are relatively cautious about this issue. In a large analysis of global news articles, human-like language appeared less often than expected, and when it did, it ranged from harmless descriptions (such as “AI needs data”) to more misleading ones that implied deeper understanding. This shows that anthropomorphism exists on a spectrum rather than as a simple yes-or-no phenomenon.
Overall, the key takeaway is that language shapes perception. Describing AI as if it were human can blur the line between machines and people, obscuring the fact that humans design, control, and remain responsible for these systems. The study calls for more careful communication about AI, emphasizing that it is a powerful tool, not a thinking, conscious entity.