The article describes how the influencer community, represented by creators like Jeremy Carrasco, is sounding the alarm about a growing flood of AI-generated content on social media. Many TikToks and short-form videos are now made with generative AI tools, producing deepfakes, synthetic personalities, and misleading content, often without clear labeling. This proliferation is eroding user trust and undermining the authenticity of digital creators' work.
Carrasco and others warn that this trend threatens the "creator economy": AI-generated content can be produced cheaply and at scale, flooding users with slick but hollow videos. That crowds out genuine creators, dilutes content quality, and makes it harder for real people to gain visibility. Because platforms frequently fail to enforce labeling rules, many viewers cannot tell whether a video was made by a human or an AI, which undermines transparency and fairness.
In response, some advocates are calling for stronger "AI literacy": helping users learn to recognize synthetic content, understand how AI-generated content is made, and view what they watch with skepticism. The idea is that better-informed audiences will value authenticity, spot deepfakes or misleading messages, and support genuine human creators rather than passively consuming AI-generated material.
The situation underscores a broader challenge for social media today: as generative AI becomes easy to use and widely available, the balance between creativity, ethics, and authenticity grows fragile. Without robust media-literacy efforts, labeling policies, and user awareness, the flood of AI-driven content could transform online culture in ways that disadvantage real creators, mislead audiences, and erode trust.