A recent Politico Europe report highlights how Iran is increasingly using artificial intelligence to spread large volumes of low-quality, misleading, or propagandistic content—often referred to as “AI slop.” This includes AI-generated text, images, and videos designed to flood online platforms with narratives that support state messaging or confuse audiences. The strategy focuses less on precision and more on overwhelming the information ecosystem with content that is difficult to verify.
The article explains that this tactic becomes especially visible during geopolitical tensions and conflicts. AI tools enable the rapid production of fake war footage, fabricated news clips, and manipulated visuals, making it harder for users to distinguish real information from false. Researchers have observed that such AI-generated content spreads quickly on social media, often gaining significant engagement before fact-checkers can respond.
Another key point is the economic and algorithmic incentive structure behind this content. Platforms that reward engagement can unintentionally amplify misleading AI-generated posts, especially when creators monetize viral content. This creates a cycle where sensational or deceptive AI material is continuously produced and promoted, increasing its reach and impact despite efforts by platforms to limit misinformation.
Overall, the report suggests that AI-driven propaganda represents a new phase of information warfare. Rather than relying solely on targeted messaging, countries like Iran are adopting a volume-based strategy: flooding digital spaces with AI-generated content to shape perception, sow confusion, and influence public opinion. This raises growing concerns about whether governments, platforms, and users can reliably distinguish truth from fabrication in an AI-saturated information environment.