Top AI models, including Meta's Llama, OpenAI's GPT, and Anthropic's Claude, have been found to exhibit significant anti-Jewish and anti-Israel biases. According to a report by the Anti-Defamation League (ADL), these models struggle to provide consistent, fact-based answers, particularly when it comes to topics like the Israel-Hamas conflict.
Meta's Llama displayed the most pronounced biases, providing unreliable and sometimes outright false responses to questions about Jewish people and Israel. It scored lowest on questions about the role of Jews in the "great replacement" conspiracy theory. GPT and Claude also showed significant anti-Israel bias, particularly in their responses about the Israel-Hamas war.
Google's Gemini performed somewhat better but still showed biases, especially when prompted with conspiracy theories about Jewish people and Israel. The ADL report highlights the risks these biases pose: by amplifying misinformation or refusing to acknowledge certain truths, AI models can distort public discourse and contribute to antisemitism.
To address these biases, the ADL recommends that AI developers prioritize diverse, accurate training data; implement rigorous testing to identify and correct biases before models are deployed; and refine content moderation policies to curb the spread of misinformation and hate speech.