A recent report reveals that 72% of people believe scam attempts are becoming more convincing, largely because cybercriminals are increasingly using artificial intelligence to strengthen their tactics. AI-powered fraud now includes realistic phishing emails, voice cloning, deepfake videos, and personalized scam messages that closely mimic genuine communication. This growing sophistication is making it much harder for individuals to distinguish between real and fake digital interactions.
The study further shows that 56% of respondents say AI is making fraud harder to identify. Unlike traditional scams, AI-enabled attacks can be tailored to a person's online behavior, communication style, and financial habits. Fraudsters can now generate highly believable messages in seconds, imitate trusted contacts, and automate scams at scale, which significantly increases both the frequency and the success rate of fraudulent activity.
A major concern is the rise of deepfake-based impersonation and identity fraud. Criminals are using AI tools to replicate voices, faces, and official documents, making scams appear authentic. This has become especially dangerous in banking, fintech, and e-commerce, where identity verification is crucial. Experts warn that such AI-driven deception is eroding public trust in digital platforms and financial systems, forcing businesses to invest in stronger fraud detection mechanisms.
Overall, the report highlights that while AI offers many benefits, it is also intensifying cyber risks by making scams more persuasive and harder to detect. As fraud techniques continue to evolve, users are urged to verify sources carefully, pause before sharing sensitive information, and rely on multi-factor authentication and other secure verification methods. The findings underscore the urgent need for stronger cybersecurity awareness and AI-powered defense systems in today's digital environment.