The United States, the United Kingdom, and the European Union are preparing for upcoming elections amid growing concern that artificial intelligence (AI) could be used to spread disinformation. In the US, incidents such as a deepfake robocall in New Hampshire have shown how AI can be used to deceive and influence voters. Public Citizen’s Lisa Gilbert called this “disinformation on steroids,” underscoring the challenge ahead of the 2024 election.

In the UK, ahead of the general election, Home Secretary James Cleverly warned that hostile nations could use AI deepfake technology to disrupt the electoral process by creating highly realistic fake videos. The warning reflects wider global concern, with deepfake videos identified as a significant threat to democratic processes. London Mayor Sadiq Khan likewise stressed the urgent need for legislation to combat the misuse of such technology.

In response to these threats, Meta, the parent company of Facebook and Instagram, announced a dedicated team to counter deceptive AI content ahead of the EU elections. The initiative is part of a broader effort to protect the integrity of the electoral process from the misuse of generative AI. Despite Meta’s commitment, experts remain sceptical about how effective these measures will be.

These developments highlight the challenges facing electoral integrity in the digital age, as deepfakes and other AI-generated disinformation raise concerns worldwide. Although steps have been taken, such as the US ban on AI-generated audio robocalls and collaborative efforts by major tech companies, the rapid advancement of AI and persistent regulatory gaps continue to pose significant risks. As elections approach in these regions, attention is turning to the need for robust government regulation and industry-wide commitments to counter the influence of misleading AI-generated content on the democratic process.