Artificial intelligence is rapidly increasing the scale and sophistication of online scams, according to a recent report by the Associated Press. Scammers are now using generative AI to create highly convincing fake ads, including deepfake videos, celebrity impersonations, and misleading health claims. Experts say this is not a new problem, but AI has “supercharged” scams by making them faster and easier to produce at massive scale.
At the same time, companies such as Google are using AI to fight back. Google's AI system, Gemini, can now detect over 99% of policy-violating ads before they are shown to users. In 2025 alone, Google blocked or removed more than 8.3 billion ads, including hundreds of millions linked to scams, and suspended nearly 25 million advertiser accounts, millions of them tied to fraudulent activity.
AI is also improving the accuracy and speed of harmful-content detection. Google’s systems analyze hundreds of billions of signals, such as account behavior and campaign patterns, to determine whether an advertiser is legitimate or malicious. This has reduced wrongful suspensions of genuine advertisers by about 80%, while harmful ads can be stopped almost instantly, sometimes within milliseconds.
Despite these advances, experts warn that the battle is far from over. Reports show that AI-related scams caused over $893 million in losses in a single year, and the future may become a constant “AI vs. AI” battle between attackers and defenders. As both sides continue to improve their technology, the challenge of keeping online spaces safe is expected to grow even more complex.