At the recent annual conference of the Organization for Security and Co-operation in Europe (OSCE) held in Helsinki, significant attention was given to the growing challenge of digital antisemitism fuelled by social media platforms and unregulated artificial intelligence (AI). The OSCE, which traces its origins to the Cold War-era Helsinki Accords of 1975, remains a prominent forum for addressing xenophobia and antisemitism across its participating states. The Israel Democracy Institute highlighted the increasingly urgent role that technology policy plays in combating the spread of hate online.

The conference, held in February 2025, underscored a concerning trend: social media platforms are not merely passive conduits but active amplifiers of antisemitic content, owing to their algorithmic design and content moderation practices. Algorithms built to maximise user engagement tend to prioritise sensationalist and extreme viewpoints over balanced, factual information. Platforms such as TikTok, YouTube, Facebook, and Instagram were cited for promoting antisemitic narratives through their recommendation engines and feeds.

Specifically, the Israel Democracy Institute noted that such platforms engage in what it termed “feature-based antisemitism”, in which the very architecture of these services facilitates the dissemination and amplification of hateful content. For instance, antisemitic posts invoking stereotypes about Jewish control of global finance are often not flagged by content moderation systems because they are read as statements about power rather than as incitement or humiliation, whereas comparable racist posts targeting other groups are removed more swiftly.

Moreover, the rise in antisemitic content coinciding with recent geopolitical events, notably the spike since October 7, 2023, has been linked to deliberate manipulation of these algorithms by various actors. This has effectively placed Jewish communities worldwide within a “ring of fire” of disinformation circulating on digital platforms.

In addition to social media, the emergence of artificial intelligence is accelerating the automation of antisemitism. AI systems are now capable of rewriting historical narratives, generating digital images with exaggerated antisemitic features, and producing tailored persuasive content that can obscure or distort facts about the Holocaust and the establishment of the State of Israel. This technological progression marks a shift from overt manifestations of antisemitism, such as public denunciations and physical segregation, to subtle, hidden discrimination enacted through data profiling, genetic analysis, and differential treatment in digital services and pricing.

The implications of these developments were also discussed in the context of regulatory environments. While there were hopes for coordinated efforts between governments and technology companies to address these challenges, concerns were raised about the current trajectory of regulatory change in the United States. Despite pledges to combat antisemitism, recent policy shifts include the rollback of existing AI regulations and directives limiting government involvement in content moderation, potentially hampering efforts to control the spread of hate speech and disinformation originating from US-based tech firms.

This regulatory divergence was on display in the debates taking place at the same time in Paris at the Artificial Intelligence Action Summit. Washington voiced opposition to what it termed “over-regulation” of AI, warning that it could stifle innovation. Meanwhile, the European Union continued to advocate robust oversight through mechanisms such as the Digital Services Act and the Artificial Intelligence Act, with a focus on algorithmic accountability and data protection. The Israel Democracy Institute emphasised that the future of combating digital antisemitism lies not in traditional forums but in these policy arenas, where the regulation of technology is being shaped.

Recommendations put forward include heightened vigilance regarding technology companies under authoritarian influence, such as TikTok and DeepSeek, whose agendas may conflict with democratic principles. Given the rapid advancement of AI capabilities, the Institute also called for international agreements to restrict the misuse of AI for manipulative and harmful purposes. Strengthening privacy and data protection laws remains crucial, particularly in relation to sensitive personal information that could be exploited to facilitate racist and antisemitic actions.

The Israel Democracy Institute warned that the surge of antisemitism driven by social media and AI represents a broader threat to minorities and marginalised groups globally. Without effective global protections and regulations, these technologies risk exacerbating divisions and enabling widespread discrimination far beyond the Jewish community.

The Times of Israel reported on the detailed analysis presented by the Israel Democracy Institute regarding the intersection of technology, policy, and the intensification of antisemitism in the digital age.

Source: Noah Wire Services