A recent study revealed that Meta and X approved advertisements containing violent hate speech against Muslims and Jews ahead of Germany's federal elections. The research, conducted by corporate responsibility group Eko, tested the platforms' ad review processes by submitting deliberately harmful ads.
X approved all 10 of the submitted hate speech ads, while Meta approved five of the 10, despite both companies' stated policies prohibiting such content. Some of the ads also featured AI-generated imagery depicting hateful narratives without disclosing its artificial origin, a further violation of Meta's policies.
The approved ads promoted extremist hate speech, including calls for violence against immigrants and Jews, and even referenced Nazi war crimes. These findings raise significant concerns about the platforms' content moderation practices and their potential impact on the electoral process.
The study's results have been submitted to the European Commission, which oversees the enforcement of the Digital Services Act (DSA). The DSA requires platforms to remove illegal content, including hate speech, and can impose fines of up to 6% of a company's global annual revenue for non-compliance.
These findings underscore the need for stricter regulatory enforcement and more effective content moderation to curb the spread of hate speech and misinformation on social media platforms.