Authors: Piero Soave and Wesley Issey Romain - AI, Cyber Security & Space Team
The year 2024 is sure to be remembered for its elections: first, never before have so many people around the world been called to cast their vote; second, these elections are the first to take place in a world of widespread Generative Artificial Intelligence (GenAI). The combination of these two factors is likely to have a lasting impact on democracy. This article looks at how GenAI can influence the outcome of elections, reviews examples of risks from recent elections, and investigates possible mitigations.
The year of high-stakes elections
In over 70 elections throughout 2024, some 800mn voters will head to the ballot box in India, 400mn in Europe, 200mn in the United States of America, and many more across Indonesia, Mexico, and South Africa [1]. In many cases, these elections will be polarized and will feature candidates from populist backgrounds. Previous electoral rounds have scarcely been an example of moderation, featuring instead accusations of foreign interference and a deadly assault on the US Congress. Whoever wins the most votes will make decisions on topics as consequential as US-EU relations, the future of NATO, trade wars, the geopolitical equilibrium in the Middle East, Hindu-Muslim relations, and more. With so much at stake, the risk of election interference warrants a closer look.
Enter GenAI
The launch of OpenAI's ChatGPT at the end of 2022 brought GenAI to the mainstream. GenAI refers to AI systems that can create content in the form of text, audio or video. Since ChatGPT popularized the technology, thousands of applications have become readily available at minimal to no cost. These systems have been trained on billions of elements of text, sound or video, and are able to respond to a user query by creating synthetic content in those formats.
The existing legal and regulatory frameworks are poorly suited to mitigate the risks deriving from GenAI. Since the launch of ChatGPT, there have already been lawsuits related to intellectual property [2], sanctions against corner-cutting lawyers [3], egregious reinterpretations of historical facts [4], as well as general concern about the bias inherent in these systems [5]. One specific problem related to GenAI is that of deepfakes, i.e. audio or video files that show people saying or doing things they never in fact said or did. This content is so realistic that it is all but impossible to determine whether what is in front of us is reality or an artificial creation. The consequences are far-ranging, from the potential increase in financial and other fraud [6] to the infringement of privacy and individual rights [7]. But it is in the domain of politics that deepfakes are particularly troubling. They can be used for a variety of bad purposes, from misleading voters about where, when and how they can vote, to spreading fake content from well-recognizable public figures, to generating inflammatory messages that lead to violence [8].
GenAI and misinformation in elections
Misinformation is not a new phenomenon, and it is certainly older than artificial intelligence. However, technology can exacerbate and multiply its effects. By some accounts, "25% of tweets spread during the 2016 US presidential elections were fake or misleading" [9]. GenAI has the potential to turbocharge the creation of fake content, as this no longer requires sophisticated tools and expertise: anyone with an internet connection can do it.
Examples of deepfake interference in the political process abound [10], despite the relatively young age of the technology. In what is perhaps the most consequential event to date, Gabon's President Ali Bongo appeared in a 2019 video in good health, despite having recently suffered a stroke. The media started questioning the veracity of the video, which is still being debated, ultimately triggering an attempted coup [11]. Crucially, Schiff et al. suggest that "the mere existence of deepfakes may allow for plausible claims of misinformation and lead to significant social and political harms, even when the authenticity of the content is disputed or disproved" [12].
During Argentina's 2023 presidential elections, both camps made extensive use of AI-generated content. Ads featured clearly fake propaganda images of candidates as movie heroes, dystopian villains or zombies. In an actual deepfake video, labeled as AI generated, "Mr Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views" [13]. Also in 2023, synthetic content featured in mayoral elections in Toronto and Chicago [14], the Republican primaries in the US, and Slovakia's parliamentary elections, all the way to New Zealand [15].
In the run-up to general elections in India, the Congress party shared a deepfake video of a Bharat Rashtra Samiti leader calling to vote for Congress. The video was shared on social media and messaging apps as voters went to the polls, and was viewed over 500,000 times before the opposing campaign could contain the damage. AI is being widely used in India to create holograms of candidates and translate speeches across multiple local languages, as well as for less ethical and transparent objectives [16].
To simulate bad actors' attempts to generate misinformation, researchers tested four popular AI image generators and found that the tools "generated images constituting election disinformation in 41%" of cases. This is despite these tools having policies in place that should prevent the creation of misleading materials about elections. The same researchers looked for evidence of misuse in the wild and found that individuals "are already using the tool to generate content containing political figures, illustrating how AI image tools are already being used to produce content that could potentially be used to spread election disinformation" [17].

Controls and mitigations
Regulation around AI is moving fast in response to even faster technological advancements. Perhaps the most thorough attempt at creating a regulatory framework is the EU AI Act [18], approved in March 2024. In the US, a mix of federal and state initiatives seeks to address several concerns related to AI, from bias to GenAI and data privacy. These include the 2023 Presidential Executive Order and related OMB guidance; the NIST AI Risk Management Framework; and state legislation, from the early New York City Local Law 144 to the more recent California guidance and proposed bills. Other countries, from Singapore to Australia and China, have approved similar rules.
Looking at election integrity specifically, in March the EU adopted a new regulation "on the transparency and targeting of political advertising, aimed at countering information manipulation and foreign interference in elections". This focuses mostly on making political advertising clearly recognizable, but most of its provisions won't enter into force before the autumn of 2025 [19]. Also in March, the European Commission leveraged the Digital Services Act, which requires very large online platforms to mitigate risks to electoral processes, to issue guidelines aimed at protecting the June European Parliament elections. The guidelines include the labeling of GenAI content. Although these are just best practices, the Commission can start formal proceedings under the Digital Services Act if it suspects a lack of compliance [20]. In the US, two separate bipartisan bills have been introduced in the Senate: the AI Transparency in Elections Act [21] and the Protect Elections from Deceptive AI Act [22].
These frameworks have yet to stand the test of time, and the proliferation of open-source models and APIs makes it an uphill struggle for regulators. Regulation around deepfakes specifically is scarce and complex, as it needs to address two separate issues: the creation of the synthetic material, and its distribution. What regulation does exist tends to focus on sexual content [23], although in some cases political content is also covered [24]. Existing norms around privacy, defamation or cybercrime can offer some support, but are ultimately inadequate to prevent harm [25]. Some technical solutions are available, such as watermarks, detection algorithms to verify authenticity, or the inclusion of provenance tags in content [26]. Whether these techniques can prevent or counter the creation and spread of deepfakes at scale remains an open question, and some of them may have unintended drawbacks [27]. The experience of social media platforms in tackling the spread of harmful content and misinformation is mixed at best [28]. Platforms' efforts to mitigate harm (from content moderation to the provision of trustworthy information), and solutions proposed by other parties (such as the removal of the reshare option), are steps in the right direction, but seem unlikely to move the needle.
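To make the idea of provenance tagging concrete, the sketch below binds a piece of content to its declared origin with a keyed hash, so that any later edit to the content makes verification fail. This is a minimal illustration under stated assumptions, not a real standard: production schemes such as C2PA, promoted by the Content Authenticity Initiative, use certificate-based signatures and embed the manifest in the media file itself, and the names here (`tag_content`, `verify_tag`, the signing key) are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key; real provenance schemes (e.g. C2PA)
# use certificate-based signatures instead of a shared secret.
SECRET_KEY = b"publisher-signing-key"

def tag_content(content: bytes, origin: str) -> dict:
    """Create a provenance tag binding the content hash to its declared origin."""
    payload = {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}
    message = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_tag(content: bytes, tag: dict) -> bool:
    """Check that the tag is authentic and the content was not altered."""
    message = json.dumps(tag["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    authentic = hmac.compare_digest(expected, tag["signature"])  # constant-time
    unaltered = tag["payload"]["sha256"] == hashlib.sha256(content).hexdigest()
    return authentic and unaltered

video = b"original footage bytes"
tag = tag_content(video, "example-news-org")
print(verify_tag(video, tag))              # True: content matches its tag
print(verify_tag(b"doctored bytes", tag))  # False: altered content fails
```

Even this toy version shows the structural limit noted above: a tag proves where authentic content came from, but nothing stops a bad actor from distributing untagged synthetic content, which is why provenance must be paired with platform-side checks.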
It is possible that technological developments in the near future will make it easier to detect and disrupt the flow of disinformation, fake news and deepfakes that threaten to sway elections, such as the recently released OpenAI detector [29]. But the best tool available right now might be literacy interventions, which can make readers more alert to fake news [30][31]. For example, news media literacy aims to provide the tools to assess information more critically and to identify false information. Hameleers found that this type of intervention is effective at reducing the perceived accuracy of false information, although importantly it does not reduce agreement with it when the reader's beliefs align with its message [32].
Conclusions
2024 will be a critical year for liberal democracies and electoral processes worldwide, from the Americas and Europe to Africa and Asia. Election outcomes will play a crucial role in shaping responses to the most pressing issues in world affairs.
The advent of AI tools such as GenAI threatens electoral processes in democratic countries, as it increases the risk of disinformation and can potentially sway voting outcomes. GenAI effectively gives anyone the ability to create synthetic content and deploy it in the form of robocalls, phishing emails, realistic deepfake photography or video, and more. Once this content is online, experience shows that it is very difficult to moderate or eliminate, especially on social media platforms.
While continuing to support tech-based initiatives to detect or tag synthetic content, governments and education institutions should invest in information literacy programs to equip people with the tools to critically evaluate information and make informed electoral decisions.
1. Keating, Dave. "2024: the year democracy is voted out?" Gulf Stream Blues (blog). Substack. Dec 29, 2023. <https://davekeating.substack.com/p/2024-the-year-democracy-is-voted?r=wx462&utm_campaign=post&utm_medium=web&triedRedirect=true>
2. Grynbaum, Michael M., and Ryan Mac. "The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work." New York Times. Dec 27, 2023. <https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html>
3. Merken, Sara. "New York lawyers sanctioned for using fake ChatGPT cases in legal brief." Reuters. June 26, 2023. <https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22>
4. Grant, Nico. "Google Chatbot's A.I. Images Put People of Color in Nazi-Era Uniforms." New York Times. Feb 22, 2024. <https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html>
5. Nicoletti, Leonardo, and Dina Bass. "Humans Are Biased. Generative AI Is Even Worse." Bloomberg. June 9, 2023. <https://www.bloomberg.com/graphics/2023-generative-ai-bias/>
6. Sheng, Ellen. "Generative AI financial scammers are getting very good at duping work email." CNBC. Feb 14, 2024. <https://www.cnbc.com/2024/02/14/gen-ai-financial-scams-are-getting-very-good-at-duping-work-email.html>
7. Weatherbed, Jess. "Trolls have flooded X with graphic Taylor Swift AI fakes." The Verge. Jan 25, 2024. <https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending>
8. Alvarez, R. Michael, Frederick Eberhardt, and Mitchell Linegar. "Generative AI and the Future of Elections." California Institute of Technology Center for Science, Society, and Public Policy (CSSPP). July 21, 2023. <https://lindeinstitute.caltech.edu/documents/25475/CSSPP_white_paper.pdf>
9. Bovet, Alexandre, and Hernán A. Makse. "Influence of fake news in Twitter during the 2016 US presidential election." Nature Communications. Vol. 10, 7. Jan 2, 2019. <https://pubmed.ncbi.nlm.nih.gov/30602729/>
10. Bontcheva, Kalina, Symeon Papadopoulos, Filareti Tsalakanidou, Riccardo Gallotti, et al. "Generative AI and Disinformation: Recent Advances, Challenges, and Opportunities." European Digital Media Observatory (EDMO). February 2024. <https://edmo.eu/edmo-news/new-white-paper-on-generative-ai-and-disinformation-recent-advances-challenges-and-opportunities/>
11. Delcker, Janosch. "Welcome to the age of uncertainty." Politico. Dec 17, 2019. <https://www.politico.eu/article/deepfake-videos-the-future-uncertainty/>
12. Bueno, Natalia, Daniel Schiff, and Kaylyn Jackson Schiff. "The Liar's Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media." Georgia Institute of Technology GVU Center. <https://gvu.gatech.edu/research/projects/liars-dividend-impact-deepfakes-and-fake-news-politician-support-and-trust-media>
13. Nicas, Jack, and Lucía Cholakian Herrera. "Is Argentina the First A.I. Election?" New York Times. Nov 15, 2023. <https://www.nytimes.com/2023/11/15/world/americas/argentina-election-ai-milei-massa.html>
14. Wirtschafter, Valerie. "The Impact of Generative AI in a Global Election Year." Brookings Institution. Jan 30, 2024. <https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year>
15. Hsu, Tiffany, and Steven Lee Myers. "A.I. Use in Elections Sets Off a Scramble for Guardrails." New York Times. June 25, 2023. <https://www.nytimes.com/2023/06/25/technology/ai-elections-disinformation-guardrails.html>
16. Sharma, Yashraj. "Deepfake democracy: Behind the AI trickery shaping India's 2024 election." Al Jazeera. Feb 20, 2024. <https://www.aljazeera.com/news/2024/2/20/deepfake-democracy-behind-the-ai-trickery-shaping-indias-2024-elections>
17. "Fake image factory: How image generators threaten election integrity and democracy." Center for Countering Digital Hate (CCDH). March 6, 2024. <https://counterhate.com/wp-content/uploads/2024/03/240304-Election-Disinfo-AI-REPORT.pdf>
18. Abdurashitov, Oleg, and Caterina Panzetti. "AI Regulatory Landscape in the US and the EU: Regulating the Unknown." ITSS Verona. Jan 18, 2024. <https://www.itssverona.it/ai-regulatory-landscape-in-the-us-and-the-eu-regulating-the-unknown-ai-cybersecurity-space-group>
19. "EU introduces new rules on transparency and targeting of political advertising." Council of the European Union. March 11, 2024. <https://www.consilium.europa.eu/en/press/press-releases/2024/03/11/eu-introduces-new-rules-on-transparency-and-targeting-of-political-advertising/>
20. "Commission publishes guidelines under the DSA." European Commission. March 26, 2024. <https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1707>
21. "Murkowski, Klobuchar Introduce Bipartisan Legislation to Require Transparency in Political Ads with AI-Generated Content." Lisa Murkowski, United States Senator for Alaska. March 6, 2024. <https://www.murkowski.senate.gov/press/release/murkowski-klobuchar-introduce-bipartisan-legislation-to-require-transparency-in-political-ads-with-ai-generated-content>
22. "Klobuchar, Hawley, Coons, Collins Introduce Bipartisan Legislation to Ban the Use of Materially Deceptive AI-Generated Content in Elections." Amy Klobuchar, United States Senator. September 12, 2023.
23. UK Ministry of Justice, and Laura Farris MP. "Government cracks down on 'deepfakes' creation." Press Release. April 16, 2024. <https://www.gov.uk/government/news/government-cracks-down-on-deepfakes-creation>
24. Ahmed, Trisha. "Minnesota advances deepfakes bill to criminalize people sharing altered sexual, political content." Associated Press (AP). May 11, 2023. <https://apnews.com/article/deepfake-minnesota-pornography-elections-technology-5ef76fc3994b2e437c7595c09a38e848>
25. Jodka, Sara H. "Manipulating reality: the intersection of deepfakes and the law." Reuters. Feb 1, 2024.
26. Content Authenticity Initiative website: <https://contentauthenticity.org/>
27. Wirtschafter, Valerie. "The Impact of Generative AI in a Global Election Year." Brookings Institution. Jan 30, 2024. <https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year>
28. Aïmeur, Esma, Sabrine Amri, and Gilles Brassard. "Fake news, disinformation and misinformation in social media: a review." Social Network Analysis and Mining. Vol. 13, 30. 2023. <https://link.springer.com/article/10.1007/s13278-023-01028-5#Fn18>
29. Metz, Cade, and Tiffany Hsu. "OpenAI Releases 'Deepfake' Detector to Disinformation Researchers." New York Times. May 7, 2024. <https://www.nytimes.com/2024/05/07/technology/openai-deepfake-detector.html>
30. Jones-Jang, S. Mo, Tara Mortensen, and Jingjing Liu. "Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don't." American Behavioral Scientist. Vol. 65(2). <https://journals.sagepub.com/doi/10.1177/0002764219869406>
31. Helmus, Todd C. "Artificial Intelligence, Deepfakes, and Disinformation: A Primer." RAND Corporation. July 2022. <http://www.jstor.org/stable/resrep42027>
32. Hameleers, Michael. "Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands." Information, Communication & Society. Vol. 25(1). 2022. <https://www.tandfonline.com/doi/full/10.1080/1369118X.2020.1764603>
