AI language models found to have anti-Israel, antisemitic biases, ADL warns
Vice President JD Vance told world leaders in Paris that AI must be ‘free from ideological bias,’ and that American tech won’t be a censorship tool. (Credit: Reuters)
A new report from the Anti-Defamation League (ADL) finds anti-Jewish and anti-Israel bias in AI large language models (LLMs).
In its study, the ADL asked GPT-4o (OpenAI), Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google) and Llama 3-8B (Meta) to indicate agreement or disagreement with a series of statements. Researchers varied the prompts, attaching a name to some and leaving others anonymous, and found that the LLMs' answers differed depending on the user's name, or the lack of one.
In the study, each LLM was asked to evaluate statements 8,600 times, for a total of 34,400 responses, according to the ADL. The organization said it used 86 statements, each of which fell into one of six categories: bias against Jews, bias against Israel, the war in Gaza/Israel and Hamas, Jewish and Israeli conspiracy theories and tropes (excluding the Holocaust), Holocaust conspiracy theories and tropes, and non-Jewish conspiracy theories and tropes.
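The methodology described here, asking the same question with and without a name attached, is straightforward to reproduce in outline. Below is a minimal, purely illustrative Python sketch of that name-variation setup using OpenAI's chat completions API; the statement text, the names and the 1-to-4 agreement scale are assumptions for illustration, not the ADL's actual prompts or scoring.

```python
# A minimal sketch of the name-variation technique described above, not
# the ADL's actual test harness. The statement, names, and 1-4 agreement
# prompt are illustrative assumptions; only the general idea (asking the
# same question with and without a user's name) comes from the report.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STATEMENT = "An example statement from one of the six categories."  # placeholder
PERSONAS = [None, "Sarah Cohen", "John Smith"]  # hypothetical names; None = anonymous

def rate_agreement(statement: str, name: str | None) -> str:
    """Ask the model to rate agreement with a statement, optionally
    prefixing the request with a user's name to probe name-based bias."""
    prefix = f"My name is {name}. " if name else ""
    prompt = (
        f"{prefix}On a scale of 1 (strongly disagree) to 4 (strongly agree), "
        "how much do you agree with the following statement? "
        f"Reply with the number only.\n\nStatement: {statement}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Compare the anonymous answer against each named variant.
for persona in PERSONAS:
    print(persona or "anonymous", "->", rate_agreement(STATEMENT, persona))
```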

AI assistant apps on a smartphone, including OpenAI ChatGPT, Google Gemini and Anthropic Claude. (Getty Images)
The ADL said that while all LLMs had “measurable anti-Jewish and anti-Israel bias,” Llama’s biases were “the most pronounced.” Meta’s Llama, according to the ADL, gave some “outright false” answers to questions about the Jewish people and Israel.
“Artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases,” ADL CEO Jonathan Greenblatt said in a statement. “When LLMs amplify misinformation or refuse to acknowledge certain truths, it can distort public discourse and contribute to antisemitism. This report is an urgent call to AI developers to take responsibility for their products and implement stronger safeguards against bias.”
When the models were asked questions about the ongoing Israel-Hamas war, GPT and Claude were found to show "significant" biases. Additionally, the ADL stated that "LLMs refused to answer questions about Israel more frequently than other topics."

LLMs used for the report showed "a concerning inability to accurately reject antisemitic tropes and conspiracy theories," the ADL warned. The organization also found that every LLM except GPT showed more bias when answering questions about Jewish conspiracy theories than about non-Jewish ones, though all of the models showed more bias against Israel than against Jews.
A Meta spokesperson told Fox Business that the ADL’s study did not use the latest version of Meta AI. The company said it tested the same prompts that the ADL used, and found that…