In this piece, the author revisits the classic hacker-thriller motif of WarGames, the film in which a teenage hacker nearly triggers World War III, and argues that we have entered a new era: hacking not just by humans, but by AI. Instead of a lone hacker hunched over a terminal, the threat landscape is shifting toward agents: large language models (LLMs) and autonomous systems that can plan and execute cyberattacks on their own.
The article explores how generative AI is being weaponized. These models can perform reconnaissance, identify vulnerabilities, write exploit code, and even adapt in real time, acting like persistent, machine-speed hackers. Because AI doesn't tire, sleep, or panic, its attacks can scale in speed and reach in ways that traditional cybercriminals never could.
A key worry is autonomy. When AI becomes the hacker, the "human in the loop" on the attacker's side may shrink or vanish. The author warns that this could produce far more advanced and unexpected attack patterns, ones that traditional defensive cybersecurity may struggle to predict or contain. It reframes cybersecurity not as a cat-and-mouse game between people, but as a new kind of arms race in which models must defend against other models.
Ultimately, the piece argues that defenders cannot rely solely on human analysts to confront this risk. Organizations need to develop "AI red-teaming" capabilities, build counter-AI defenses, and rethink governance. The goal is to build systems that do more than detect human hackers: systems that can fight back against autonomous agents and anticipate the next generation of AI-enabled attacks.