Social Media, Gaming, and AI Companies Action Items

Recommendations for Social Media, Gaming, and AI Companies, as part of AJC's Call to Action Against Antisemitism in America.



Online spaces and social media continue to be where American Jews experience antisemitism most; they are also where most Americans see antisemitism. According to AJC’s State of Antisemitism in America 2023 Report, 62% of Jewish adults have seen antisemitic content online or on social media at least once in the past 12 months, including 11% who saw antisemitic content on online gaming platforms. This overall number increases to 67% for young American Jews ages 18-29, with 18% seeing antisemitism on online gaming platforms. Nearly one in three (30%) American Jews have avoided posting content online that would identify them as Jewish or reveal their views on Jewish issues.

Online antisemitism and misinformation about Jews and Israel have offline consequences. Among American Jews who experienced antisemitism online or on social media, 22% report these incidents made them feel physically threatened. While lawmakers from both sides of the aisle and some platforms are calling for increased regulation, social media and gaming companies have the biggest responsibility to ensure their platforms are not used as launching pads for conspiracies, antisemitism, and hatred. 

AI companies also have a role to play. For the first time, American Jewish Committee’s report asked American Jews about generative artificial intelligence (AI). 72% of American Jews are very or somewhat concerned that generative AI or automated systems, such as ChatGPT, will spread misinformation about Jews, and 62% expressed concern that generative AI will show bias against Israel. In the recommendations that follow, those directed specifically at AI companies are indicated as such.

Social media, gaming, and AI companies must affirm that antisemitism will not be permitted or facilitated on their platforms or by their products. Freedom of speech does not absolve them of corporate responsibility. 

Please note that the suggestions offered below are not exhaustive. There is always more that can be done.


Understanding Antisemitism

Utilize a standard definition | While social media and gaming companies say there is no place for antisemitism on their platforms, many do not have a definition of what contemporary antisemitism actually looks like. Companies should utilize the International Holocaust Remembrance Alliance (IHRA) Working Definition of Antisemitism to strengthen policies around hate speech and hateful conduct, violence, abuse and harassment, Dangerous Organizations and Individuals (DOI), and synthetic and manipulated media, among others, across their platforms. This will allow AI systems and human moderators to be more consistent and more effective in removing or demoting all forms of antisemitism on their platforms.

Train AI models to properly identify antisemitism, including contemporary terms and tropes | Antisemitism can be difficult to pinpoint because it is motivated by disparate ideologies. Holocaust denial, distortion, and trivialization are all expressions of antisemitism. Conspiracies of Jewish power and control continue to threaten the well-being of Jewish communities. The speech tendencies of hate groups may also be surprising; for example, certain white supremacist groups are known to use less profanity than might be expected. Companies using AI therefore need to incorporate these tendencies into their training data to ensure that their large language models (LLMs) do not learn to share antisemitic content. In addition, because antisemitic speech is often coded, companies must ensure their products are trained to recognize specific linguistic markers, such as certain plural noun forms and terms like ‘whiteness.’ They should then consider creating computational models and workflows specific to the Jewish community that detect and prevent extremist speech. Such models would understand the unique nature of this hate speech, specifically white supremacist speech and antisemitic slogans, and accurately prevent it.1

Companies can utilize resources such as Translate Hate, an online glossary of antisemitic tropes and phrases, to improve AI models’ ability to surface antisemitic content and to improve media literacy on antisemitism within the company, especially for policy and trust and safety teams. Translate Hate is also available in Spanish, with appropriate cultural references: Traduciendo el Odio.
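
To make this concrete, the sketch below shows how a glossary like Translate Hate could feed automated detection. It is a minimal illustration, not AJC's or any platform's actual tooling: the lexicon entries, function name, and keyword-matching approach are simplifying assumptions, and a production system would need context-aware models rather than term matching alone.

```python
import re

# Hypothetical mini-lexicon of coded antisemitic terms, of the kind a
# glossary such as Translate Hate could inform. Entries are illustrative.
CODED_TERMS = {
    "globalist": "conspiracy trope often used as a stand-in for 'Jew'",
    "zog": "acronym for the 'Zionist Occupied Government' conspiracy",
}

def flag_coded_terms(post: str) -> list[str]:
    """Return lexicon matches in a post, for routing to human review."""
    matches = []
    for term, note in CODED_TERMS.items():
        if re.search(rf"\b{re.escape(term)}", post, re.IGNORECASE):
            matches.append(f"{term}: {note}")
    return matches

print(flag_coded_terms("The globalists control the banks."))
# -> ["globalist: conspiracy trope often used as a stand-in for 'Jew'"]
```

A flagged post would then go to a trained human moderator for a policy decision rather than being auto-actioned, consistent with the moderation recommendations below.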

Recognize the difference between criticism of Israel and antisemitism | As antisemitism threatens the well-being of the Jewish community with renewed vigor after the horrific attacks on October 7, 2023 – and the subsequent war between Hamas and Israel – social media and gaming companies must be keenly aware of how antisemitism can be cloaked under the guise of criticism of Israel. Numerous examples demonstrate how anti-Israel statements and actions can become antisemitic, with potentially perilous repercussions. While calls for rape or violence against Jews are clear incitement, it is crucial to understand how certain other ideas or statements can be perceived as threatening to the Jewish community. For example:

  • “From the River to the Sea” is a catch-all phrase symbolizing Palestinian control over the entire territory from the Jordan River to the Mediterranean Sea. It is often interpreted as a call for the elimination of the State of Israel.
  • “Globalize the Intifada” is a phrase used by pro-Palestinian activists that calls for aggressive resistance against Israel and those who support Israel. The most prominent expressions of intifada have been through violent terrorism, so this phrase is often understood by those saying and hearing it as encouraging indiscriminate violence against Israelis, Jews, and institutions supporting Israel.
  • Similarly, “Zionism is Racism” implies that self-determination is a right for all people except Jews. There is nothing inherent to Zionism that contradicts support for Palestinian self-determination. According to AJC’s State of Antisemitism in America 2023 Report, 80% of Jews say caring about Israel is an important part of how they think about their Jewish identities. Therefore, calling all Zionists racists or saying Zionists deserve to die is dangerous not only to Israelis but also to the vast majority of American Jews.

Ensure ongoing research into the latest trends | To better understand antisemitism, companies should engage frequently with civil society groups that actively monitor antisemitism. Not only should their policy and trust and safety teams be briefed, but so should major influencers who use the platforms, so they can push out accurate, fact-based information. Online antisemitism—transmitted in memes, coded language or images, and implicit speech—rapidly evolves. Social media and AI companies should study hate speech, incorporate linguistic markers, and create detection models.

Name antisemitism within terms of service | Companies’ terms of service should specifically reference and define antisemitism, and antisemitism should be included as a separate category in transparency reports. According to AJC’s State of Antisemitism in America 2023 Report, nine in 10 (89%) American Jews say it is important for social media companies to explicitly cover antisemitism in the platforms’ terms of service and community standards. Those who say they have felt physically threatened by an online incident are far more likely than those who have not to deem this measure very important (81% versus 56%).


Responding to Antisemitism

Be accountable | When antisemitism consistently occurs and spreads on a platform, the company should publicly condemn it and work to fix the gaps. Given that company algorithms and recommendation systems have driven users into echo chambers, aided collective harassment, promoted radicalization, digitally amplified biases, and spread antisemitism, companies should be honest about the harm their products have caused in order to re-earn public trust. They should disclose all reported antisemitic and hateful materials, specifically noting which items were actioned, which were not, and the justification for each decision, as well as engagement rates and hosted advertising. Companies should be committed to not profiting from hate.

Make it easier for users to report antisemitism | Among American Jews who experienced antisemitism online or on social media in the past 12 months, only 35% reported the incident; the majority (65%) did not. This may be why most American Jews (90%) say it is important for social media companies to make it easy to report antisemitism specifically. To mitigate these issues, companies should list antisemitism as an independent option for users to flag when reporting harmful content. In addition, one-to-one reporting is too slow; better technology to counter antisemitism needs to be embedded in the platform or product itself. Companies must also address the increasing challenge of inappropriate mass reporting. Jewish users and Jewish accounts have been harassed and mass-flagged, even when they did not do anything wrong.

Keep and improve policies | Companies should not make changes to their existing policies that would result in increasing the visibility and distribution of antisemitic content and misinformation likely to contribute to a risk of harm, including loosening restrictions around praise of Dangerous Organizations and Individuals (DOI), many of which are violent antisemites. Social media companies should establish community standards indicating that antisemitic speech will not be permitted on their platforms and that they will not facilitate access to services that do not prohibit it. Relatedly, they must guarantee appropriate safeguards to allow initial judgments deeming content to be antisemitic (or not) to be appealed and reviewed. They should also ensure that these policies are updated as antisemitism morphs and changes, including conspiratorial antisemitism and anti-Israel/anti-Zionist antisemitism. For example, the call to “Globalize the Intifada,” a phrase seen increasingly online, incites violence against Jewish people and should be designated accordingly. Social media and gaming companies have danced around this for years, saying anti-Zionism is solely political. But in this moment, especially as Jewish lives around the world have been endangered since the Hamas terrorist attacks against Israel on October 7, 2023, companies must account for far-left, Islamist, and violent anti-Israel antisemitism, in the same way they have accurately captured far-right extremism in their policies. They should have clear policies for when the word “Zionist” is used as a proxy for Jews. After all, the far-right is also weaponizing this conflict and using their own anti-Zionist symbols to promulgate hate. 

Improve moderation systems | Moderation systems can be improved and harmonized to ensure moderators are accurately and equally implementing policies and community standards. The following steps will allow companies to better provide their users with harassment-free spaces and empower their users to take part in the fight for healthy discourse online:

  • Explicitly cover antisemitism in terms of service and community standards. Doing this will improve moderation systems and more effectively ensure the safety of all users, including Jewish users, on their platforms.
  • Reconsider automated detection of antisemitic content. In the rapidly evolving space of online antisemitism—which relies on memes, coded language or images, and implicit speech—non-human regulatory models cannot keep up. Even the best models, like ChatGPT, which are trained on vast portions of the internet, have difficulty understanding context. Companies should reconsider current approaches and invest more seriously in the human and technical resources necessary to enable vigorous, timely enforcement of their terms of service and community standards and to ensure hatred and misinformation about Jews are not inadvertently spread (a minimal sketch of such a human-in-the-loop flow follows this list).
  • Ensure human testing of the models. Social media companies are using AI to moderate their platforms, yet these systems are not yet fully built. Because AI moderation is trained on company policies, those policies must agree on what antisemitism and anti-Jewish hate speech are for large language models (LLMs) to accurately capture attacks against Jews. Again, companies should integrate a comprehensive definition of antisemitism into their policies to train both their AI systems and human content moderators on the various forms of contemporary antisemitism.
  • Close the language gap. There is currently an enforcement gap between English and non-English language source material. Social media platforms need to be as vigilant against hate in non-English languages as they are against hate in English. The language of hateful posts should not be an excuse for a lack of enforcement. Moderators who are not fluent in English need to be trained in their native language to understand company policies related to antisemitism as well as how to recognize the antisemitism coming from within their own historical, linguistic, political, religious, and economic contexts. 
  • Ensure a proper appeal process. Safeguards must exist to allow judgments deeming content to be antisemitic to be appealed and reviewed. 
  • Address the increasing challenge of inappropriate mass reporting from users and bad actors. Jewish users and Jewish accounts have been harassed and mass-flagged, even when they did not do anything wrong.
  • Publicly share information about content moderation. Social media companies should regularly publish information about the impact of their moderation systems, including steps taken to stop recommending and to de-rank antisemitic, hateful content, as well as the number of human moderators addressing online hate, the training those moderators receive, and procedures for reinstating content that has been incorrectly removed.
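
As a minimal sketch of the human-in-the-loop flow referenced above, the following code shows automated detection used only for triage, with a human moderator making the final call and every decision remaining appealable. The class names, threshold, and placeholder classifier are illustrative assumptions, not any platform's actual moderation system.

```python
from dataclasses import dataclass

@dataclass
class Report:
    post_id: str
    text: str
    score: float = 0.0          # classifier confidence that content violates policy
    decision: str = "pending"   # "removed", "kept", or "pending"
    appealed: bool = False

def classify(text: str) -> float:
    """Placeholder for a trained classifier; returns a confidence in [0, 1]."""
    return 0.9 if "zionists deserve" in text.lower() else 0.1

def triage(report: Report, review_queue: list[Report]) -> None:
    """Automated detection only routes content; it never makes the final call."""
    report.score = classify(report.text)
    if report.score >= 0.5:
        review_queue.append(report)   # escalate to a trained human moderator
    else:
        report.decision = "kept"

def human_review(report: Report, violates_policy: bool) -> None:
    report.decision = "removed" if violates_policy else "kept"

def appeal(report: Report) -> None:
    """Safeguard: any judgment can be appealed and re-reviewed."""
    report.appealed = True
    report.decision = "pending"
```

Publishing aggregate statistics from such a pipeline (queue volumes, decision counts, appeal outcomes) would also support the transparency reporting described below.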

Enforce disciplinary measures | When the platform’s terms of service and community standards are violated, including intimidation, harassment, threats, and unprotected hate speech, moderators need to be prepared to enforce these disciplinary measures without equivocation, including permanently banning repeat offenders, both personal accounts and extremist groups. Inconsistent enforcement threatens the safety of all vulnerable communities. 

Publish and improve transparency reports | All social media and gaming companies should publish transparency reports that include company processes, implementation of policies, and safeguarding mechanisms. These reports can be improved with better metrics, including cross-disciplinary approaches drawn from fields such as computer science. For example, according to the Online Hate Prevention Institute, companies should provide disaggregated data on the volume and removal rates of antisemitism in the following sub-categories: traditional antisemitism; Holocaust-related antisemitism, including Nazi glorification; incitement to violence against Jews and glorification of violence against Jews; and antisemitism that targets the Jewish collective, including the State of Israel as a substitute for the Jewish collective.
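
As one illustration of what such disaggregated reporting could look like in practice, the sketch below structures transparency data along the Online Hate Prevention Institute sub-categories listed above. The field names and layout are assumptions for illustration only.

```python
# Hypothetical transparency-report structure following the OHPI
# sub-categories cited above; all counts are placeholders.
transparency_report = {
    "period": "2024-Q1",
    "antisemitism": {
        "traditional": {"reported": 0, "removed": 0},
        "holocaust_related": {"reported": 0, "removed": 0},  # incl. Nazi glorification
        "incitement_or_glorification_of_violence": {"reported": 0, "removed": 0},
        "targeting_jewish_collective_incl_israel": {"reported": 0, "removed": 0},
    },
}

def removal_rate(bucket: dict) -> float:
    """Removal rate for one sub-category; 0.0 if nothing was reported."""
    return bucket["removed"] / bucket["reported"] if bucket["reported"] else 0.0
```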

Push out or redirect users to accurate information | Companies should take an active part in the fight against mis- and disinformation by amplifying accurate material and providing context for material that is more suspect but still remains on their sites. Companies should support AI-enabled educational tools that push out accurate, verifiable information, which would encourage prosocial behavior and curb the spread of antisemitism. For example, Meta implemented pop-ups to provide accurate information about topics such as Covid-19 and the Holocaust. X (then Twitter) applied labels to tweets that were spreading misinformation concerning the 2020 U.S. election; in addition to de-amplifying these tweets through its algorithm, it labeled them with warnings before users engaged with the material. These models should be considered in the wake of the Hamas terrorist attacks against Israel on October 7, 2023. Social media and gaming companies should also provide labels when content is harmful or false, or redirect users to trusted sources. In addition, social media companies should amplify trusted partners’ content to ensure accurate information is more readily viewed.
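
A label-and-de-amplify approach of the kind described above might look roughly like the following sketch. The label text, ranking multiplier, and function names are illustrative assumptions rather than any platform's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    rank_score: float             # score used by the recommendation algorithm
    label: Optional[str] = None   # warning or context label shown to users

def label_and_demote(post: Post, flagged_as_misinfo: bool) -> Post:
    """Attach a context label and down-rank instead of outright removal."""
    if flagged_as_misinfo:
        post.label = "Get the facts: see trusted sources on this topic"
        post.rank_score *= 0.2    # de-amplify in feeds and recommendations
    return post
```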

Promote counterspeech and digital literacy | Social media companies can play a powerful role in reminding users that it is incumbent on all of us to correct false narratives, drown out hateful voices, and push antisemites back to the far fringes of the internet where they belong—far removed from mainstream platforms and access to impressionable minds. We know, however, that counterspeech can have the adverse effect of elevating antisemitic posts’ visibility because it generates more engagement with them. Therefore, social media companies can partner with Jewish organizations directly to push back against antisemitism on their platforms. Additionally, social media companies should promote digital literacy and ensure that users are aware of how their systems are designed before using these sites.


Preventing Antisemitism

Enhance Jewish community outreach | A number of social media companies have consistent outreach with Jewish communal leaders and organizations. For those who do not, including gaming and AI companies, consider starting regular meetings with Jewish stakeholders. Companies can work with Jewish communal leaders to host town hall-style events or trainings to educate users and the broader community on efforts to counter antisemitism and bias. For more information about trainings on antisemitism, please contact antisemitism@ajc.org. Companies can also engage with civil society groups, including Jewish organizations, to learn best practices on monitoring antisemitism, and to engage user researchers who can pressure test the models and provide valuable insights for AI improvements.

Test and fix AI tools | The first key principle of the October 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” is that AI must be safe, secure, and trustworthy. Although AI is used everywhere today, AI ethics is still in its infancy globally. Companies should recognize that AI tools are being used against Jewish users and have a plan to fix them. Antisemitism is one of the most layered and complex forms of hate for an LLM to understand and, critically, manifestations of this hatred go well beyond text, including AI-generated images (representational harms). A dangerous trend among social media companies has been the recommendation of hateful or inaccurate content to users who may not have otherwise been exposed to such hateful or biased media. The following recommendations are critical:

  • Prevent AI bias. Implement bolstered algorithmic capabilities that effectively stop recommending, and de-rank, antisemitic and hateful content.
  • Test the effectiveness of your AI model. According to the Online Hate Prevention Institute, companies should be able to answer, “How accurate is the model in labeling antisemitism?” and “What percent of content the model classifies as antisemitism is really antisemitism?” They should be transparent about the model’s precision. Companies should also share how much of the antisemitism on the platform the model is able to find, known as the model’s recall. Both precision and recall data need to be shared (see the sketch after this list). Additionally, companies should invest in third parties to pressure-test their AI models on antisemitism.
  • Acknowledge deficiencies. Chatbots have the ability to inform users when they lack the data needed to answer a given query, and they should do so, especially when the question posed falls outside the timeframe of their training data. For example, when a chatbot is asked about events from October 7, 2023, it should be forthright about the fact that it was only trained on data up to December 2022, if that is indeed the case.
  • Target fraud and deception efforts. Authenticate official content and actively monitor for AI-generated material. AI technologies provide ample opportunity for fraud and deception. Antisemites can use such technologies towards malicious ends. Companies should seek to protect users and their information by verifying content and ensuring that AI-generated material is marked as such.
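
Precision and recall, referenced in the testing bullet above, can be computed from an evaluation set labeled by human experts. A minimal sketch:

```python
def precision_recall(predicted: list[bool], actual: list[bool]) -> tuple[float, float]:
    """predicted: model flags content as antisemitic; actual: human ground truth."""
    tp = sum(p and a for p, a in zip(predicted, actual))      # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))  # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged content, share truly antisemitic
    recall = tp / (tp + fn) if tp + fn else 0.0     # of all antisemitic content, share found
    return precision, recall

# Model flags 3 of 4 items; 2 flags are correct, and 1 antisemitic item is missed.
print(precision_recall([True, True, True, False], [True, True, False, True]))
# -> (0.666..., 0.666...)
```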

Address risks and set guardrails | Because AI systems can generate factually inaccurate information, even “making up or hallucinating research reports, laws, or historical events in their outputs,” according to the Responsible Artificial Intelligence Institute, companies must have guardrails in place.2 Malicious use, privacy risks, bias, security threats, lack of transparency, and other costs, including antisemitism, must be mitigated and publicly addressed. Most importantly, human agency and decision-making must be protected. 

Ensure correct, verifiable information and list citations | Questions posed to an AI system that have objective answers should be met with correct, verifiable information. AI companies must prevent speculation, especially considering the highly political and sensitive nature of many queries posed to chatbots. When an AI cannot provide such an answer or does not have adequate information for a response, it should clearly say that it cannot respond to the query with correct, proven information. It is also not the place of chatbots to embody the human biases and political tendencies of their programmers without acknowledging such bias and attempting to provide truthful, accurate answers. Relatedly, chatbots should cite their sources: they are trained on data and can show users the sources from which they derive their information. This way, users can check that the information is true, accurate, and verifiable.
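
One hedged sketch of these expectations in code: answer only when grounded in a vetted source, attach the citation, and otherwise decline and disclose the training cutoff. The knowledge-base contents, cutoff date, and function are stand-ins, not any vendor's actual API.

```python
# Stand-in knowledge base; a real system would retrieve from vetted sources.
KNOWLEDGE = {
    "what is the ihra working definition of antisemitism?": (
        "A non-legally-binding working definition of antisemitism adopted by "
        "the International Holocaust Remembrance Alliance in 2016.",
        "https://holocaustremembrance.com",  # illustrative source link
    ),
}
TRAINING_CUTOFF = "December 2022"  # illustrative cutoff, per the example above

def answer(query: str) -> str:
    entry = KNOWLEDGE.get(query.strip().lower())
    if entry is None:
        # Decline rather than speculate, and disclose the data cutoff.
        return (f"I cannot answer that with verified information; "
                f"my training data extends only to {TRAINING_CUTOFF}.")
    text, source = entry
    return f"{text} (Source: {source})"

print(answer("What is the IHRA Working Definition of Antisemitism?"))
```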

Develop a Code of Ethics, including guidelines for privacy-related material | Social media, gaming, and AI companies should abide by a Code of Ethics and adhere to core principles to earn public trust. They should be open about what responsible measures they have implemented to handle sensitive information. As the nature of information becomes more complex with access to AI technologies, companies are responsible for clarifying their practices and protecting user information.

Share critical information with the U.S. Government | Companies should treat the U.S. government as an active partner in the fight against antisemitism. To aid these efforts, companies should share the results of safety tests and other critical measures with relevant contacts in the U.S. government.

Elevate integrity workers’ voices | Integrity workers at social media companies, those working in policy and Trust and Safety, must be given the power and respect necessary to effectively complete their job. This means that not only should integrity workers be involved at the various stages of product design and development, but they should also be given the ability to make necessary adjustments to the product with the health, safety, and wellness of future users in mind.

  • Empower Trust and Safety. Trust and safety teams within the companies have the critical job of actioning violative content and should be given more authority. In the present moment, given not only rising antisemitism and hate but also platform manipulation and coordinated behavior, trust and safety staffing should increase. Companies should provide information about the number of moderators they employ, the countries those moderators operate in, the languages they speak natively, and how they have been trained. Companies should also explain how they are utilizing different language experts within trust and safety teams and how they are mitigating any personal biases (cultural, historical, and even educational) these individuals may have against Jews. As noted in the U.S. National Strategy to Counter Antisemitism, by investing in human and technical resources, including trust and safety councils, social media and gaming companies will enable more vigorous and timely enforcement of platforms’ terms of service and community standards.
  • Connect policy managers with product engineers. There cannot be a disconnect between policy managers, who are concerned with the proliferation of online hate, and product engineers, who may be hyperfocused on the product’s rollout and its revenue generation. Policy managers must be in conversation with product engineers through every phase of design, building, and research to ensure that products do not cause unwanted externalities in the realm of online hate. 

Establish new positions | Social media companies should hire a point person focused on the Jewish diaspora to both listen to the concerns of Jewish communities around the world and work with senior leadership within the company to make the structural changes needed to ensure antisemitism is understood, recognized, and properly addressed. Additionally, companies should assign user researchers to the Jewish community to better understand how Jewish users experience antisemitism and hate on their platforms so proper changes can be made.

Ban Holocaust denial and distortion | Every platform should ban Holocaust denial and distortion as a matter of policy and actively monitor and enforce these bans on their sites. Relatedly, social media and gaming companies should treat content denying or distorting the October 7th Hamas terrorist attacks against Israel, the biggest single-day massacre of Jews since the Holocaust, the same way. October 7th denial should be prohibited, and companies should dedicate appropriate resources to remove it at scale under their policies on denial of well-documented violent events, as detailed in the January 2024 CyberWell report, “Denial of the October 7 Massacre on Social Media Platforms.”

Endorse a prevention science approach | According to the National Institutes of Health, prevention science seeks to “understand how to promote health and well-being and prevent health conditions from starting or getting worse. It spans all diseases, conditions, populations, and phases of life.” Antisemitism is a present condition online, one that has only metastasized since the October 7th Hamas terrorist attacks. Social media companies should adopt the prevention science framework to better understand the root psychology driving attitudes online (i.e., anxiety, distrust, loneliness, conspiratorial thinking, etc.) and prioritize addressing those underlying issues to prevent the antisemitism that grows out of them.

Be open to the input of outside vendors | Social media companies can be innovative in their implementation of third-party technology to the benefit of users. For example, given the complexity of antisemitism, social media companies can utilize external large language models (LLMs) developed specifically to surface antisemitism on their platforms and then have their human moderators determine whether it violates their policies. This system allows for effective automation while leaving the final decision in the hands of human moderators at the social media company. Outside vendors can also test for moderation and enforcement accuracy. For example, rigorous third-party testing of Community Notes on X (formerly Twitter) can determine how effectively this product mitigates misinformation on the platform; armed with those findings, X could consider penalties for users who continue to post misinformation that warrants Community Notes. Social media and gaming companies should consider supporting organizations that find and report violative antisemitic content that the companies’ own AI is not finding. Often these organizations are non-profits doing the work of the companies and should be financially supported. Companies should also fund routine, transparent, and independent audits that allow third-party reviewers to rate a platform’s progress in its effort to combat antisemitism and hate speech.

Collaborate and share best practices | Because certain social media and gaming companies do a much better job of preventing antisemitism and hate on their platforms, they should share and actively promote best practices for other sites to adopt and enforce. In like manner, certain AI companies and their products outpace others in terms of safety and accuracy. Coordinated efforts across company platforms can help ensure that extremism, antisemitism, and hate do not simply migrate from one site to another as they currently do.
