Italy’s antitrust authority has closed investigations into several AI companies after they agreed to strengthen warnings about the risks of AI “hallucinations” — false or misleading information generated by artificial intelligence systems. The probe targeted China’s DeepSeek, France’s Mistral AI, and Turkey-based Scaleup Yazilim Hizmetleri over concerns that users were not adequately informed about the limitations and inaccuracies of generative AI tools.
Italy’s competition and consumer protection regulator, the Autorità Garante della Concorrenza e del Mercato (AGCM), said the companies agreed to adopt binding commitments designed to improve transparency. These measures include adding permanent disclaimers to chatbot interfaces and websites warning users that AI-generated responses may contain fabricated or inaccurate information. DeepSeek also committed to investing in technologies aimed at reducing hallucination risks, while acknowledging that such errors cannot currently be fully eliminated.
The case reflects growing regulatory pressure on AI developers across Europe as governments move from broad AI ethics discussions toward direct consumer protection enforcement. Regulators increasingly view hallucinations not merely as technical flaws but as potential unfair commercial practices when companies fail to clearly communicate the limitations of their systems. Italy has been especially active in scrutinizing AI companies, having previously investigated firms over data privacy, transparency, and platform dominance concerns.
More broadly, the decision highlights how AI governance is evolving around transparency and accountability rather than outright bans. Instead of prohibiting generative AI tools, regulators are pushing companies to make risks more visible and understandable to users. The Italian actions may also influence future enforcement under broader European AI rules, including the EU AI Act, where disclosure obligations and risk management are expected to become central requirements for advanced AI systems.