Holly Huffnagle, American Jewish Committee (AJC) U.S. Director for Combating Antisemitism, briefed members of the Inter-parliamentary Task Force on Online Antisemitism. Established in September, the task force includes representatives from the U.S. Congress and the parliaments of Australia, Canada, Israel, and the United Kingdom.

“The future of combating antisemitism, in many ways, is combating the digitization of the problem. We all need to send the message that antisemitism in any form is unacceptable on or offline,” Huffnagle told the task force.

The severity of the problem of online antisemitism was revealed last month in AJC’s new report on The State of Antisemitism in America. Eighty-eight percent of American Jews said antisemitism in the U.S. today is a serious problem, and 82% said it has increased over the past five years.

More than one in five American Jews (22%) have been the target of an antisemitic remark online or through social media in the last five years. Of this group, 62% said they had been the targets of antisemitic remarks on Facebook, 33% said they had experienced antisemitism on Twitter, 12% on Instagram, and 10% on YouTube.

Nearly half (46%) of American Jews who said they reported online antisemitism to a social media platform said no steps were taken to address the incident. And 24% of American Jews have avoided posting content online that would identify them as Jewish or reveal their views on Jewish issues.

“When we think about arguments in support of free speech online, the AJC report shows American Jews feel intimidated and chilled from speaking. The online space is not an equal free speech playing field,” Huffnagle told the legislators on the task force.

Huffnagle noted that major tech companies took an absolutist position for years that their platforms were an open marketplace of ideas. “The best ideas did not rise to the top. Instead, their platforms became breeding grounds for violence and extremism,” she said.

“Major tech companies’ business models rely on increased engagement, and we know lies, fear, and anger generate the most engagement. After all, lies spread six times faster than truth,” said Huffnagle. “It is imperative to our efforts that antisemitic content is reported and removed when it violates the platform’s policies.”

Social media giants, responding to a national outcry, have begun to expand their definitions of hate speech and to moderate and remove content, said Huffnagle, citing as an example Facebook’s decision last month to ban Holocaust denial.

Huffnagle offered several policy recommendations for the task force to encourage tech companies to implement, including:

-- Follow a universal standard of what antisemitism is. Platforms should map the IHRA Working Definition of Antisemitism onto their policies. This will allow artificial intelligence and human moderators to be more effective in removing or demoting all forms of antisemitism. In addition, understanding the complexity of contemporary antisemitism requires accounting for linguistic and cultural differences. Moderators who are not fluent in English need to be trained in their own languages to understand policies related to antisemitism.

-- Demand transparency. Tech platforms should be transparent in the drafting of policies, algorithms, and moderation systems and abide by a set of core principles that will earn public trust. Platforms should correct the algorithms that allow hateful communities to cross-pollinate and grow. Moderation systems should be improved and harmonized to ensure moderators are implementing policies and community standards accurately and consistently.

-- Establish an interparliamentary system to gather and share new data quickly. The task force should convene an international group of data scientists, tech experts, and scholars who can research and assess various platforms’ algorithms: Are the algorithms still actively promoting hateful content? How can algorithms prevent previously removed content from reappearing? Answers to these questions will empower the task force to recommend technical changes rather than taking at face value the platforms’ own defense of their algorithms and policies.

-- Focus on non-mainstream platforms. Today, antisemitic radicalization increasingly occurs on fringe platforms such as 8chan (now 8kun). Moreover, many antisemitic and racist websites are no longer hosted in the United States and have instead moved to hosting providers abroad with fewer restrictions. The inter-parliamentary task force is uniquely positioned to work with the countries, especially in Southeast Asia and the former Soviet republics, that host these non-mainstream platforms and sites.

Huffnagle offered two specific recommendations for Members of Congress. She asked the U.S. representatives who serve on the task force and their colleagues in Congress to support reforming Section 230 of the Communications Decency Act to hold tech companies liable if their algorithms promote harmful content.

And she called for designating transnational white supremacist groups as terrorist organizations, which would require social media companies to remove their content and would severely limit white supremacists’ ability to recruit online.
