The article describes how artificial intelligence (AI) is fundamentally transforming cybercrime by making advanced attacks far cheaper and easier to launch. Attackers now use AI tools to automate and scale up threats, from AI-generated ransomware and deepfake scams to identity fraud and even critical-infrastructure hacks. This lowers the barrier to entry for criminals: what once required significant skill and resources can now be done by relatively small groups.
One major concern is how AI amplifies traditional scams. For instance, deepfake technology is being used to impersonate individuals — including corporate executives or relatives — to defraud unsuspecting victims via convincing audio or video. Scams have become more realistic and much harder for people to detect, increasing financial and identity-theft risks.
Ransomware is also evolving: some groups now embed AI throughout their attack workflows, from reconnaissance planning and attack-code generation to automated ransom negotiation. This automation enables faster, more adaptive, and larger-scale cyberattacks than ever before, even from actors with limited technical skills.
The increasing use of AI for crime is happening even as lawmakers and regulators scramble to keep up. In some jurisdictions, such as Washington State in the U.S., laws have recently been updated to criminalize malicious deepfakes, covering digital likenesses used to defraud, intimidate, or harass.