Election Interference in an Age of AI-Enabled Cyberattacks and Information Manipulation Campaigns

Image: An AI-generated hand placing a marked ballot paper into a ballot box. Source: Adobe Stock

This year, 49% of the global population, across 64 countries, will participate in elections. While much discussion has focused on how global policy will shift based on the outcomes of these elections, a parallel question is emerging: how can citizens trust the outcome of elections in an artificial intelligence (AI)-driven world?

The growing threat of AI-enabled offensive cyber and information manipulation campaigns can destabilize electoral processes in ways nations have not previously experienced, by targeting both information platforms and election infrastructure. That is why it is crucial to address some of the ways in which AI models can be maliciously repurposed to compromise electoral integrity and to propose preventative measures that government and industry can take. While targeted information poisoning campaigns (injecting false content or distorting existing information into misleading narratives) and cyberattacks have been prevalent in the context of elections for many years, the malicious repurposing of AI models serves as a risk multiplier that can amplify these existing election-related risks.

Although generative AI models are increasingly leveraged by bad actors for offensive cyber-operations and election interference, there is little evidence to suggest that these models currently enable the creation of novel Tactics, Techniques, and Procedures (TTPs) in offensive cyber-operations. Instead, GenAI models are amplifying bad actors’ speed and scale in cyber-operations and, in some cases, increasing the quality of their attack vectors, especially in the case of social engineering attacks. Below are some of the capabilities that AI models are currently enhancing for bad actors across the threat landscape.

AI models are making it easier for bad actors to carry out harmful information campaigns and cyberattacks on election systems. This issue was highlighted at the International Conference on Cybersecurity in January. Cyber threat groups, regardless of their skill level, can leverage AI to generate malware and gain strategic advantage throughout the different stages of an offensive cyber-operation. For example, a bad actor in Country X might aim to target a U.S. state’s voter registration database or the IT infrastructure used to manage elections. This adversary could, in theory, repurpose an LLM by training it on old malware, or use it to create multiple copies of malware that share the same functionality but have different source code. Hackers on the dark web have already discovered ways to leverage ChatGPT to generate potential attack vectors. OpenAI recently announced that it discovered an account “knowingly violating [Application Programming Interfaces] (API) usage policies which disallows political campaigning, or impersonating individuals without consent”. These AI-enabled exploits are capable of disrupting voting databases and electoral processes more broadly.

Additionally, the tools required to create information poisoning campaigns are becoming more accessible. The production of AI-generated deepfakes has already undermined electoral processes by deterring citizens from voting or by twisting political narratives to influence votes. For instance, in the January 2024 U.S. primary elections, an AI-powered robocall impersonating President Biden targeted New Hampshire voters, urging them to stay home instead of going to the polls. A similar instance of AI-enabled election interference was detected in Slovakia, where a fake AI-generated audio interview, in which a leading candidate appeared to claim he had rigged the election, went viral on social media.

It is important to note that not all AI-generated information is fake news. While AI can amplify efforts to proliferate misleading information, synthetic or manipulated information does not necessarily equate to poor-quality information. However, if poor-quality information, whether AI-generated or not, dominates information spaces, it distorts the way systems and institutions are represented and, ultimately, the way voters perceive them.

AI is making it easier for threat actors to target specific communities through tailored information campaigns. Threat actors may leverage AI systems to target the recommendation algorithms on information platforms, the information itself, and/or the specific audiences that the content reaches. Recommendation algorithms record a user’s interests through self-selected personalization, determined by the user’s content engagement patterns. These algorithms also push specific content onto additional users with similar engagement patterns, on the assumption that they will engage with it positively; this is known as pre-selected personalization.

Ultimately, self-selected and pre-selected personalization can generate a “filter bubble” that deepens polarization and pushes users toward fringe content. These filter bubbles are vulnerable to exploitation by bad actors, who may manipulate recommendation algorithms through “shilling attacks.” For example, attackers can create fake user profiles that befriend real users and interact with specific content, such as fake news. This tricks the recommendation algorithms into showing that content to both the fake and real users, increasing public exposure to harmful information.
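To make the mechanism concrete, the toy sketch below (in Python, with entirely hypothetical users and content) shows how a handful of attacker-controlled profiles can push a planted item into a genuine user’s recommendations under a naive collaborative-filtering scheme. Production recommenders are far more sophisticated and better defended, but the exposure-inflation dynamic is similar.

```python
# Minimal sketch of how a "shilling attack" can skew a naive
# collaborative-filtering recommender. All users and items are hypothetical.
from collections import defaultdict

# Genuine users and the content items they engaged with.
real_users = {
    "alice": {"local_news", "sports"},
    "bob": {"local_news", "weather"},
    "carol": {"sports", "weather"},
}

# Attacker-controlled profiles engage with a popular item plus the item
# they want amplified ("fake_claim"), creating a false association.
fake_users = {f"bot_{i}": {"local_news", "fake_claim"} for i in range(50)}


def recommend(users, target_user, top_n=2):
    """Recommend items engaged by users who share items with target_user."""
    seen = users[target_user]
    scores = defaultdict(int)
    for other, items in users.items():
        if other == target_user:
            continue
        if seen & items:  # overlapping interests
            for item in items - seen:
                scores[item] += 1
    return sorted(scores, key=scores.get, reverse=True)[:top_n]


print("Before attack:", recommend(real_users, "alice"))
print("After attack: ", recommend({**real_users, **fake_users}, "alice"))
```

In this sketch, the bots’ shared engagement with a popular item is enough to surface the planted content to a real user who never asked for it.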

AI-enabled cyberattacks can also become stealthier, allowing hackers to remain undetected in targeted election systems for extended periods, a tactic known as persistence. AI models can support bad actors in building polymorphic malware, a type of malicious software that morphs its code each time it spreads to a new computer. The change is minor and does not alter what the malware does, but it helps the malware evade detection by security software. AI may also enable threat actors to conduct vulnerability mapping by crafting training data from past exploited vulnerabilities and then optimizing a model to detect similar patterns. This process would expand the attack surface available to the hacker, so that almost any node in the election supply chain could become a target: gaining access to voters’ devices, poisoning information sources, targeting voting ballots, and so on.
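As a harmless illustration of why this matters for defenders, the sketch below (using placeholder bytes, not real malware) shows how a trivially mutated variant defeats an exact-hash signature while a crude byte-similarity fingerprint still links the two samples. This is one reason detection tooling is shifting toward similarity- and behavior-based methods.

```python
# Illustrative sketch (no real malware): why exact-hash signatures fail
# against polymorphic variants. Payloads here are harmless placeholder bytes.
import hashlib


def ngrams(data: bytes, n: int = 4) -> set:
    """Byte n-grams used as a crude similarity fingerprint."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}


def jaccard(a: set, b: set) -> float:
    """Overlap between two fingerprints (1.0 = identical)."""
    return len(a & b) / len(a | b)


original = b"\x90" * 64 + b"PAYLOAD-PLACEHOLDER" + b"\x90" * 64
# A "polymorphic" variant: same behavior, trivially mutated bytes.
variant = b"\xcc" * 64 + b"PAYLOAD-PLACEHOLDER" + b"\x90" * 64

# Exact-hash signature matching misses the variant entirely...
print("SHA-256 match:", hashlib.sha256(original).hexdigest()
      == hashlib.sha256(variant).hexdigest())
# ...while a similarity fingerprint still relates the two samples.
print("n-gram similarity:", round(jaccard(ngrams(original), ngrams(variant)), 2))
```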

The fast pace at which new technologies are integrated across sectors is also increasing cyber risks to elections. “Secure by design” is often overlooked in platform, model, and product design and deployment, making this infrastructure more vulnerable to attacks during critical moments like elections. Here are some crucial steps security specialists across industry and government can take to mitigate AI-amplified cyber risks to elections and shift the AI offense-defense balance.

Equip election platforms with automated breach risk prediction mechanisms. Security specialists should equip platforms with breach risk prediction mechanisms that provide insight into threats faced by similar industries and technologies and increase the resilience of election infrastructure. If AI models are trained on specific election-related data points (e.g., malware deployed during previous elections), they can be optimized to predict how and where breaches are most likely to occur. By updating threat detection algorithms on voting databases, intrusions can also be quickly contained to minimize damage to elections.
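A minimal sketch of what such a prediction mechanism might look like is shown below. The features, data points, and model choice are purely hypothetical stand-ins for the curated incident telemetry and threat intelligence a real deployment would require.

```python
# Hedged sketch of a breach-risk prediction model. All features and data are
# hypothetical; a real system would rely on vetted telemetry from election
# infrastructure and prior incident reports.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per system:
# [days_since_patch, failed_logins_24h, exposed_services, prior_incidents_in_sector]
X_train = [
    [2, 3, 1, 0],
    [120, 40, 6, 3],
    [30, 5, 2, 1],
    [200, 80, 9, 4],
    [7, 1, 1, 0],
    [90, 25, 5, 2],
]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = breach observed in comparable systems

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a voter-registration database server (hypothetical readings).
candidate = [[60, 15, 4, 2]]
print("Estimated breach risk:", model.predict_proba(candidate)[0][1])
```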

Train AI models on past exploited vulnerabilities to detect similar ones in current election systems. Organizations should invest in developing dedicated AI models trained on the source code of past exploited vulnerabilities. These models can then be used to scan current election infrastructure and information platforms to predict and detect similar vulnerabilities prone to exploitation. The Cybersecurity & Infrastructure Security Agency has already undertaken an “operational pilot using AI cybersecurity systems” to support vulnerability mapping in government networks. In the same vein, AI models can also be trained to discover new vulnerability patterns that institutions have not yet detected. However, these tools can also fall into the hands of malicious actors, granting them access to a catalog of vulnerabilities to draw from when crafting exploits. AI can also be used for red-teaming efforts, launching offensive attacks to test the resilience of platforms. Continuous testing and adaptation make it more costly for attackers to find vulnerabilities.
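One simplified way to approximate this approach is to compare new code against fragments associated with previously exploited vulnerabilities, as in the hedged sketch below. The snippets and similarity measure are hypothetical; real tooling would combine such signals with static analysis and human review.

```python
# Illustrative sketch of flagging code that resembles previously exploited
# patterns. All snippets are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Source fragments tied to past exploited vulnerabilities (placeholders).
known_vulnerable = [
    "strcpy(buffer, user_input);",                          # buffer overflow pattern
    "query = 'SELECT * FROM voters WHERE id=' + user_id",   # SQL injection pattern
]

# Fragments pulled from a current election-system codebase (placeholders).
current_code = [
    "sql = 'SELECT * FROM ballots WHERE county=' + county_name",
    "memcpy(dst, src, fixed_len);",
]

vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_]+")
corpus = vectorizer.fit_transform(known_vulnerable + current_code)
known_vecs, current_vecs = corpus[:2], corpus[2:]

# Higher similarity to a known-vulnerable fragment suggests human review.
scores = cosine_similarity(current_vecs, known_vecs)
for fragment, row in zip(current_code, scores):
    print(f"max similarity {row.max():.2f} -> {fragment}")
```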

Leverage AI models to make election-related information manipulation campaigns less effective. Information platforms can integrate AI models, trained on large datasets of real and AI-generated misleading media, that differentiate between genuine and manipulated content. These platforms can also integrate AI models that detect inconsistencies in digital files that may not be visible to the human eye by looking for signs of editing in pixel patterns. This dual strategy can improve accuracy in detecting election-related deepfakes. Platform owners can also develop adequate APIs that allow other institutions to integrate deepfake detection capabilities into their platforms, broadening the scope of mitigation.
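To illustrate the pixel-level side of that dual strategy, the toy sketch below flags image regions whose high-frequency noise statistics differ from the rest of the frame, a common symptom of splicing. The synthetic image and threshold are hypothetical; production detectors combine many such signals with learned models.

```python
# Hedged sketch of one pixel-level consistency check: comparing local
# high-frequency "noise" statistics across image blocks on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grayscale "photo" with camera-like noise...
image = rng.normal(128, 8, size=(128, 128))
# ...into which a smoother, generated-looking patch has been spliced.
image[32:64, 32:64] = rng.normal(128, 1, size=(32, 32))


def block_noise_scores(img, block=16):
    """Standard deviation of the high-frequency residual in each block."""
    residual = img - np.roll(img, 1, axis=1)  # crude high-pass filter
    h, w = img.shape
    return np.array([
        [residual[r:r + block, c:c + block].std()
         for c in range(0, w, block)]
        for r in range(0, h, block)
    ])


scores = block_noise_scores(image)
# Blocks whose noise level deviates strongly from the rest are suspect.
suspect = scores < scores.mean() - 2 * scores.std()
print("Blocks flagged as inconsistent:\n", np.argwhere(suspect))
```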

To address the global rise in election-related disruptions caused by AI-generated malicious content, the “Trusted Election Analysis” working group, a collaboration between the Harvard Belfer Center and the Trust in Media Cooperative (TiM), is on a mission to safeguard election-related information and ultimately rebuild trust in institutions. I co-lead one of the action groups tasked to shape standards and measurements for information quality. The working group’s vision is to empower voters to consume accurate, explainable and transparent information.

AI models can be trained to detect and mitigate false and misleading information specifically around election topics. These models can be integrated with platform algorithms to continuously monitor and assess the quality of information being disseminated. 
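A minimal sketch of how such a model could be wired into a monitoring loop appears below. The training posts, labels, and scoring step are hypothetical placeholders for the large, vetted datasets and human fact-checking a deployed system would depend on.

```python
# Minimal sketch of a misleading-content classifier feeding a monitoring loop.
# Training examples and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Polling places are open 7am to 8pm on election day",
    "Official results are certified by the state after canvassing",
    "Voting machines secretly change your ballot after you leave",
    "You can vote by text message this year",
]
train_labels = [0, 0, 1, 1]  # 1 = misleading (hypothetical labels)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_posts, train_labels)

# In production this would consume a live content stream from the platform
# and route high-risk items to human reviewers or fact-checkers.
incoming = ["Breaking: you can now cast your ballot by text message"]
score = classifier.predict_proba(incoming)[0][1]
print(f"Misleading-content risk score: {score:.2f} -> {incoming[0]}")
```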

As AI continues to reshape the cybersecurity threat landscape, understanding its dual capacity to both undermine and defend our electoral systems is critical. On one hand, adversaries can leverage AI models to orchestrate cyberattacks and undermine public trust in elections. On the other hand, AI models can serve as a critical tool for detecting and neutralizing these threats, ensuring that electoral processes are safeguarded. That ability to safeguard electoral processes is key to restoring and maintaining public trust in the integrity of elections. The dual use of AI reflects a profound shift in how we understand and respond to cybersecurity challenges in democratic governance, highlighting the essential role of AI in shaping the future of our electoral systems.

 

The views expressed in this article are those of the author and do not represent those of any previous or current employers, the editorial body of SIPR, the Freeman Spogli Institute, or Stanford University.

 

