Fake Engagement: Social Media’s Threat to Election Integrity
Section 1: Introduction: The Architecture of Digital Deception
The proliferation of social media has fundamentally reshaped the landscape of political communication, creating unprecedented opportunities for civic engagement and democratic discourse. However, these same platforms have become fertile ground for a sophisticated and pervasive form of manipulation that poses a direct threat to the integrity of elections worldwide. This threat, broadly termed “fake engagement,” involves the use of inauthentic social media accounts and coordinated activity to manipulate public opinion, distort political discourse, and undermine the foundations of democratic processes. It is a central component of modern information warfare, leveraging the architecture of digital platforms to deceive, divide, and disenfranchise. Understanding how this architecture of deception functions, including its core concepts, tools, and actors, is the first step toward building effective defenses for democratic institutions.
1.1 Defining the Core Threat: From Misinformation to Malign Influence
At the heart of fake engagement lies the strategic dissemination of problematic information, which can be categorized into distinct “information disorders” based on the intent to harm. Misinformation is false information shared without malicious intent, such as when a user unknowingly shares an incorrect news story. In contrast, disinformation is false information that is deliberately created and shared with the express purpose of causing harm to a person, social group, organization, or country. This is the primary ammunition of influence operations. A third category, malinformation, involves the sharing of genuine information, such as private emails or documents, with the intent to inflict harm.
Fake engagement is the operational method through which these information disorders are weaponized. It can be defined as the use of inauthentic social media accounts and coordinated activity to artificially shape, inflate, or distort online conversations, thereby manipulating perceptions of public opinion. Technology platforms like Meta have adopted the term “inauthentic behavior” to describe this activity, focusing on actions where groups of accounts work in concert to mislead others about who is behind them and what their purpose is. This focus on deceptive behavior, rather than simply the content itself, provides a more robust framework for identifying and countering these malign influence campaigns.
1.2 The Arsenal of Deception: Bots, Trolls, and Coordinated Networks
The actors behind fake engagement campaigns deploy a diverse arsenal of tools and tactics to achieve their objectives. These tools are often used in combination to create a powerful engine for amplifying disinformation and suppressing authentic speech.
Social Bots
A bot is an automated social media account programmed to perform tasks typically associated with human users, such as posting content, liking posts, following other accounts, and sharing specific messages. While not all bots are malicious (some provide useful services like weather updates), they are a key tool in political manipulation campaigns. Malicious bots are characterized by anonymity (lacking personal information or using stolen profile pictures), hyperactivity (posting dozens or hundreds of times per day), and the ability to make a hashtag or message trend through sheer volume. They are used in tactics like “repost storms,” where a lead account triggers a network of bots to share content simultaneously, and “trend jacking,” where bots insert propaganda into popular, unrelated conversations to maximize reach.
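To make these behavioral signatures concrete, the sketch below scores an account against the red flags described above (hyperactivity, anonymity, duplicated posts). The feature names, thresholds, and weights are illustrative assumptions for exposition, not any platform's or researcher's actual detection criteria.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Illustrative per-account features; field names are assumptions."""
    posts_per_day: float         # average daily post volume
    has_profile_photo: bool      # missing or stock/stolen photos are suspicious
    bio_length: int              # characters of self-description
    duplicate_post_ratio: float  # share of posts nearly identical to others' posts

def bot_likelihood_score(acct: AccountActivity) -> float:
    """Combine simple red flags into a 0-1 heuristic score.
    Thresholds and weights are invented; real systems use learned models."""
    score = 0.0
    if acct.posts_per_day > 50:          # hyperactivity
        score += 0.4
    if not acct.has_profile_photo:       # anonymity
        score += 0.2
    if acct.bio_length < 10:             # sparse personal information
        score += 0.1
    score += 0.3 * min(acct.duplicate_post_ratio, 1.0)  # repost-storm behavior
    return min(score, 1.0)

print(bot_likelihood_score(AccountActivity(120, False, 0, 0.9)))  # 0.97: strong candidate for review
```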
Human-Operated Trolls
Unlike automated bots, trolls are human users who deliberately post provocative, offensive, and unconstructive content to annoy, harass, and disrupt online conversations. In the political context, trolls often operate from centralized locations known as “troll farms,” which are sponsored by state or political actors. A prominent example is Russia’s Internet Research Agency (IRA), which employed hundreds of individuals to create fake American personas and sow discord during the 2016 U.S. election. Trolls are particularly effective at engaging in nuanced harassment, evading simple automated detection, and creating an environment of hostility designed to silence political opponents.
Coordinated Inauthentic Behavior (CIB)
The most sophisticated influence operations no longer rely on a simple distinction between bots and humans. Instead, they utilize a hybrid approach, which researchers and platforms now identify as Coordinated Inauthentic Behavior (CIB). CIB is defined as an organized effort by a network of accounts to manipulate public debate for a strategic goal, where the accounts’ inauthenticity is central to the deception. This concept represents a crucial evolution in understanding the problem. Early analysis focused on identifying the technical signatures of automation, but malicious actors adapted by using human-operated accounts to mimic genuine behavior. The CIB framework shifts the focus of detection from the nature of an individual account to the suspicious, coordinated behavior of the network as a whole. This acknowledges that the core of the manipulation lies in the covert coordination itself, whether executed by bots, humans, or a combination of both. This makes detection far more complex, as it requires analyzing patterns of interaction and network-level similarities rather than just flagging individual accounts with bot-like characteristics.
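The shift from account-level to network-level detection can be illustrated with a minimal sketch: rather than scoring accounts individually, it looks for pairs of accounts that repeatedly share the same link within seconds of each other, a common coordination signal in the research literature. The data, account names, and time window below are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical observations: (account_id, url_shared, unix_timestamp)
posts = [
    ("acct_a", "http://example.com/story1", 1000),
    ("acct_b", "http://example.com/story1", 1012),
    ("acct_c", "http://example.com/story1", 1017),
    ("acct_d", "http://example.com/story2", 9000),
]

WINDOW = 60  # seconds; sharing the same URL this close together counts as a co-share

def coordination_edges(posts, window=WINDOW):
    """Count how often each pair of accounts shares an identical URL near-simultaneously."""
    by_url = defaultdict(list)
    for acct, url, ts in posts:
        by_url[url].append((acct, ts))
    pair_counts = defaultdict(int)
    for shares in by_url.values():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return dict(pair_counts)

print(coordination_edges(posts))
# {('acct_a', 'acct_b'): 1, ('acct_a', 'acct_c'): 1, ('acct_b', 'acct_c'): 1}
```

Dense clusters of accounts connected by many such co-share edges are candidates for human review, regardless of whether the individual accounts look automated.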
1.3 The Actors and Their Motives: A Diverse Threat Landscape
The architecture of digital deception is not a monolithic enterprise but a complex ecosystem of diverse actors with overlapping and sometimes conflicting motivations.
- State-Sponsored Propaganda: Hostile foreign governments, particularly Russia, China, and Iran, have weaponized fake engagement as a tool of hybrid warfare to interfere in the elections of other nations. Their primary goals are often geopolitical: to sow societal discord, erode public trust in democratic institutions, undermine specific candidates, and advance their own strategic interests. These operations are typically well-funded, sophisticated, and persistent.
- Domestic Political Campaigns: Political parties and candidates within a country increasingly employ these same tactics against their domestic rivals. This can involve creating fake accounts to attack opponents, deploying bots to amplify positive messaging, and orchestrating “astroturfing” campaigns to create a false impression of grassroots support for a candidate or policy.
- For-Profit Disinformation: A significant portion of what is commonly called “fake news” is produced by financially motivated actors. These individuals and organizations create websites with sensational, false, or misleading headlines designed to attract clicks and generate advertising revenue. While their primary motive is profit rather than political ideology, their content is often highly partisan and emotionally charged, contributing significantly to the pollution of the information ecosystem during an election.
- Ideological Extremists: Non-state groups, including extremist organizations and conspiracy communities, use fake engagement to polarize society, recruit new members, and mainstream their fringe views. They thrive on creating and amplifying divisive content that deepens social cleavages.
These different actors often operate in a symbiotic fashion. A narrative seeded by a state-sponsored campaign can be picked up and amplified by domestic political partisans who see it as beneficial to their cause. For-profit websites may then repackage and sensationalize the narrative to drive traffic. This interconnectedness makes attribution difficult and demonstrates that the threat is not a single entity but a networked web of interests that can align to corrode the information environment.
Section 2: The Mechanics of Manipulation: From False Social Proof to Algorithmic Dominance
Fake engagement campaigns are effective because they exploit the intersection of human psychology and the technical architecture of social media platforms. They manufacture the illusion of consensus to prey on cognitive biases while simultaneously hijacking the platforms’ automated systems to achieve mass distribution. These two pillars of manipulation, one social and one technical, work in a powerful feedback loop to make disinformation go viral.
2.1 Manufacturing Consensus: The Psychology of False Social Proof
At its core, much of fake engagement is a psychological operation designed to create a distorted perception of reality. By simulating a crowd, manipulators can influence individuals to conform to a manufactured norm.
Astroturfing and the Bandwagon Effect
One of the most common tactics is “astroturfing,” the practice of creating a fake grassroots movement. Networks of bots and trolls are used to post similar messages, promote the same hashtags, and like or share a candidate’s content, creating the illusion of widespread, organic public support. This tactic directly exploits the bandwagon effect, a cognitive bias where individuals are more likely to adopt a certain belief or behavior if they perceive that many others are doing so. Seeing a post with thousands of likes or a hashtag trending nationally can lead a user to assume the underlying message is popular and therefore credible, even if that popularity is entirely fabricated by a botnet.
Cognitive Dissonance and Confirmation Bias
Disinformation is most potent when it aligns with what people already believe. Confirmation bias leads users to seek out, accept, and share content that supports their existing political views, while cognitive dissonance makes information that contradicts those views uncomfortable and therefore easier to dismiss. Manipulators exploit both tendencies by tailoring false narratives to the preexisting convictions of the audiences most likely to spread them.
Emotional Manipulation
Influence operations often bypass rational thought by targeting emotions. Disinformation is frequently packaged in sensational or shocking headlines, often in all caps and with exclamation points, and paired with manipulated or out-of-context images designed to provoke strong reactions like anger, fear, or anxiety. Research has found that emotionally charged content, particularly that which triggers negative emotions, is more likely to be shared. This emotional arousal short-circuits critical evaluation, making users more susceptible to believing and spreading falsehoods.
2.2 Hijacking the Algorithm: Weaponizing the Code
While psychological manipulation is the goal, the technical architecture of social media is the means. The business model of these platforms, which is predicated on maximizing user engagement, has created an algorithmic environment that is uniquely vulnerable to manipulation.
The Engagement-Driven Ecosystem
Social media algorithms are not designed to prioritize truth or accuracy; they are designed to prioritize and amplify content that generates the most engagement: likes, comments, shares, and watch time. Engagement metrics serve as the fundamental signals that tell a platform’s curation system what content is important and should be shown to more users. This creates a structural conflict between the platforms’ commercial interests and the health of the democratic information space. The very mechanics that drive platform growth and revenue also reward sensational, divisive, and emotionally charged content, which is often the hallmark of disinformation.
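As a rough illustration of how engagement signals translate into distribution, the toy scoring function below weights likes, comments, shares, and watch time and decays the result by age. The signal names, weights, and formula are invented for exposition; real ranking systems are far more complex and are not public.

```python
def engagement_score(likes: int, comments: int, shares: int,
                     watch_seconds: float, age_hours: float) -> float:
    """Toy ranking score: weighted engagement, decayed by content age.
    Weights are purely illustrative, not any platform's real formula."""
    raw = 1.0 * likes + 4.0 * comments + 6.0 * shares + 0.05 * watch_seconds
    return raw / (1.0 + age_hours) ** 1.5  # newer content ranks higher

candidates = {
    "measured_policy_explainer": engagement_score(120, 15, 10, 900, 6),
    "outrage_bait_rumor":        engagement_score(800, 400, 650, 3000, 6),
}
# The feed shows higher-scoring items first; sensational content tends to win.
print(sorted(candidates, key=candidates.get, reverse=True))
# ['outrage_bait_rumor', 'measured_policy_explainer']
```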
False Amplification
Malicious actors understand this system and exploit it through “false amplification”. By deploying bots to generate thousands of likes and shares on a piece of disinformation, they can artificially boost its engagement metrics at an immense scale and with incredible speed. This flood of fake engagement tricks the platform’s algorithm into perceiving the content as highly popular and relevant. Consequently, the algorithm promotes the content, pushing it into the feeds of millions of real users and placing it on “trending” topic lists, giving it a veneer of legitimacy and a massive audience it would never have achieved organically.
Algorithmic Bias and Political Advertising
Platform algorithms are not neutral. Studies of political advertising on platforms like Facebook and Instagram have revealed significant biases in how ads are delivered. Research on the 2021 German federal election, for example, found that populist parties promoting incendiary political issues achieved greater reach for their advertising budget, likely because the algorithm favors content that attracts high attention. These algorithmic biases can create an unlevel playing field, giving an advantage to parties that use more divisive and emotionally charged messaging, all while operating within an opaque system that is beyond public scrutiny.
The interplay between technical and psychological manipulation creates a powerful, self-reinforcing cycle. A botnet first generates a wave of fake engagement on a piece of disinformation (the technical manipulation). This tricks the platform’s algorithm into amplifying its reach (the technical consequence). As a larger audience of real users is exposed to the content, its artificially inflated popularity triggers their cognitive biases, such as the bandwagon effect, leading them to perceive it as credible and share it with their own networks (the psychological manipulation). This new wave of authentic human engagement further boosts the content’s metrics, signaling to the algorithm that it is even more important, which in turn drives even wider algorithmic amplification. This vicious cycle is the core engine of how disinformation goes viral and permeates the digital public square.
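The feedback loop described above can be made concrete with a toy simulation: bots seed an initial burst of engagement, the “algorithm” converts engagement into reach, and a small fraction of newly reached human users engage in turn. All parameters are illustrative assumptions; the point is only that when amplification multiplied by the human engagement rate exceeds one, the cascade becomes self-sustaining.

```python
def viral_cycle(seed_fake_engagement: int, rounds: int = 5,
                reach_per_engagement: int = 20, human_engage_rate: float = 0.06) -> None:
    """Toy model of the amplification loop. If reach_per_engagement * human_engage_rate > 1,
    each round of authentic engagement exceeds the last and the cascade keeps growing."""
    new_engagement = seed_fake_engagement  # round 0: purely bot-generated
    total_reach = 0
    for r in range(1, rounds + 1):
        new_reach = new_engagement * reach_per_engagement     # algorithm promotes "popular" content
        new_engagement = int(new_reach * human_engage_rate)   # some real users engage and reshare
        total_reach += new_reach
        print(f"round {r}: new_reach={new_reach:,} "
              f"new_human_engagement={new_engagement:,} total_reach={total_reach:,}")

viral_cycle(seed_fake_engagement=5_000)  # a modest botnet seed reaches hundreds of thousands of users
```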
2.3 Case Study in Micro-Targeting: The Cambridge Analytica Scandal
The 2016 U.S. presidential election brought to light one of the most sophisticated and controversial examples of digital manipulation: the work of political consulting firm Cambridge Analytica. This case serves as a landmark illustration of how large-scale data harvesting can be combined with psychographic profiling to execute highly targeted influence campaigns.
Data Harvesting and Psychographic Profiling
The scandal began with the collection of personal data from up to 87 million Facebook users without their informed consent. This was accomplished through a third-party quiz app that not only collected data from the users who took the quiz but also scraped the data of their entire network of Facebook friends. The harvested data was detailed enough to allow Cambridge Analytica to build psychographic profiles for millions of voters, using their online activity (such as page likes) to infer personality traits based on established psychological models.
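The sketch below illustrates the general shape of like-based trait inference followed by message selection. The pages, weights, trait threshold, and ad variants are invented for illustration and do not reflect Cambridge Analytica’s actual models or any published coefficients.

```python
# Hypothetical weights mapping page likes to a single inferred trait score.
# In published research such weights are fit from large labelled datasets;
# these values are invented purely for illustration.
TRAIT_WEIGHTS = {
    "conspiracy_news_page": 0.30,
    "extreme_sports_page": -0.20,
    "poetry_page": 0.10,
    "gun_rights_page": 0.05,
}

def predict_trait(page_likes: set[str], baseline: float = 0.0) -> float:
    """Linear model: sum the weights of the pages a user has liked."""
    return baseline + sum(TRAIT_WEIGHTS.get(page, 0.0) for page in page_likes)

def choose_ad(trait_score: float) -> str:
    """Micro-targeting step: pick the message variant matched to the inferred profile."""
    return "fear-framed attack ad" if trait_score > 0.25 else "upbeat get-out-the-vote ad"

profile = {"conspiracy_news_page", "poetry_page"}
print(choose_ad(predict_trait(profile)))  # -> "fear-framed attack ad"
```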
Weaponizing Profiles for Political Persuasion
Cambridge Analytica then weaponized these profiles to assist the presidential campaigns of Ted Cruz and, most notably, Donald Trump. The firm used the psychographic data for micro-targeting, a technique that involves delivering highly customized messages to narrow segments of the electorate. The goal, as described by the company’s CEO, was to identify individuals who could be persuaded to vote for their client or, just as importantly, discouraged from voting for the opponent.
For the Trump campaign, this meant swing voters might be shown negative ads about Hillary Clinton that were tailored to their specific fears or anxieties, while confirmed Trump supporters would receive encouraging, get-out-the-vote messages and information about polling locations. The Cambridge Analytica scandal revealed the dark potential of combining behavioral psychology with big data, demonstrating a new frontier of political persuasion that operates at an individual level, largely invisible to public scrutiny.
Section 3: The Spectrum of Harm: Corrupting Discourse, Suppressing Votes, and Eroding Trust
The consequences of fake engagement are not confined to the digital realm. They manifest as tangible harms to the democratic process, ranging from the degradation of public discourse to direct interference in voting and a systemic erosion of trust in the institutions that underpin a free society. These harms are interconnected, creating a cascading effect that weakens democratic resilience from multiple angles.
3.1 Distortion of Public Discourse and Political Polarization
Fake engagement fundamentally corrupts the “marketplace of ideas” by flooding it with inauthentic and malicious content, making it difficult for citizens to engage in informed deliberation.
Amplifying Harmful Narratives
One of the most immediate effects of fake engagement is the artificial amplification of fringe narratives. Botnets and coordinated networks can take conspiracy theories, extremist ideologies, and baseless rumors from the dark corners of the internet and inject them into mainstream conversations. By making these topics trend on social media, they gain a veneer of legitimacy and receive coverage from traditional media outlets, granting them a level of attention and credibility they do not deserve.
Creating Echo Chambers and Deepening Polarization
The algorithms that drive social media, when manipulated by fake engagement, contribute to the creation of ideological echo chambers. By feeding users a constant stream of content that confirms their existing biases, these platforms can make people more extreme in their views and less tolerant of opposing perspectives. Fake engagement exacerbates this process by promoting the most divisive and emotionally charged content, inflaming partisan tensions and making political compromise more difficult.
Silencing Authentic Voices
A particularly insidious tactic of fake engagement is the use of coordinated harassment campaigns to silence dissenting voices. Journalists, political opponents, election officials, and ordinary citizens who challenge disinformation narratives are often targeted by swarms of trolls and bots. These campaigns, which can include threats of violence, doxxing (publishing private information), and relentless verbal abuse, are designed to intimidate individuals into self-censorship, effectively removing authentic and critical voices from the public square and chilling free expression.
3.2 Direct Interference with the Electoral Process
Beyond distorting discourse, fake engagement is used to directly interfere with the mechanics and legitimacy of elections.
Voter Suppression
A primary goal of many disinformation campaigns is to suppress voter turnout, particularly among historically marginalized communities. This is achieved by spreading false or misleading information about the voting process.
Common tactics include disseminating incorrect polling dates, times, or locations; spreading false rumors about voter eligibility requirements; and creating confusion about mail-in voting procedures. In some cases, these campaigns involve explicit threats, such as circulating false claims that Immigration and Customs Enforcement (ICE) agents will be present at polling places to intimidate immigrant communities. The emergence of generative AI has added new tools to this arsenal, such as the use of deepfake audio to create robocalls impersonating a candidate telling their supporters to stay home.
Delegitimizing Election Results
Perhaps the most dangerous impact of fake engagement is its role in undermining public confidence in the outcome of an election. The relentless propagation of baseless claims of widespread voter fraud, rigged voting machines, and “stolen” elections has become a central strategy for actors seeking to destabilize democracies. This narrative, exemplified by the “Big Lie” following the 2020 U.S. presidential election, erodes the foundational principle of democratic legitimacy: the peaceful acceptance of election results. By convincing a significant portion of the population that the election was fraudulent, these campaigns lay the groundwork for challenging the results, refusing to concede, and even inciting violence.
Intimidation of Election Officials
The spread of disinformation, particularly false claims of fraud, has led to a dramatic and dangerous increase in threats and harassment targeting non-partisan election officials and volunteer poll workers. A 2022 survey found that 64 percent of election officials reported that the spread of false information has made their jobs more dangerous. This hostile environment not only endangers the personal safety of these public servants but also threatens the integrity of election administration itself, as experienced officials may choose to leave the profession, creating a critical loss of institutional knowledge.
The harms of fake engagement constitute a systemic assault on the epistemic security of a democracy: the shared foundation of facts, norms, and institutional trust that allows a society to deliberate and make collective decisions. The attacks are not isolated; they are a holistic strategy to dismantle the public’s ability to agree on a shared reality. When citizens cannot trust the media to report facts, cannot trust election officials to count votes fairly, and are told the entire process is rigged, they are left in a state of confusion and cynicism. In such an environment, the only remaining source of “truth” becomes one’s partisan in-group, making democratic compromise and peaceful governance nearly impossible.
This digital assault has a disturbing and tangible downstream effect. The false narratives popularized by fake engagement are often weaponized by lawmakers to justify the passage of real-world voter suppression legislation. This creates a destructive feedback loop: first, a disinformation campaign manufactures a crisis of confidence by spreading lies about widespread voter fraud. Second, lawmakers cite this manufactured public concern as a pretext to enact new, restrictive voting laws, such as stricter voter ID requirements or limits on mail-in voting. Third, these new laws can disenfranchise legitimate voters and create fresh “information gaps” and confusion about the voting process, which in turn become fertile ground for the next wave of disinformation. In this way, online deception is laundered into legal, tangible barriers to democratic participation.
3.3 The Corrosion of Democratic Foundations: Eroding Institutional Trust
The ultimate consequence of a sustained campaign of fake engagement is the erosion of public trust in the core institutions of a democratic society. This long-term damage may be the most difficult to repair.
Undermining Trust in the Media
Disinformation campaigns frequently target the news media, labeling established journalistic outlets as “fake news” and promoting conspiracy theories about their motives. Research has shown that sustained exposure to this type of content is linked to a significant decline in public trust in mainstream media across the political spectrum. As trust in professional journalism wanes, citizens lose access to a crucial source of verified information and accountability, making them more vulnerable to manipulation.
Degrading Confidence in Government and Elections
By relentlessly pushing narratives of corruption, incompetence, and systemic fraud, fake engagement campaigns directly attack citizens’ faith in their own government and democratic processes. When a large segment of the population believes that elections are rigged and that government institutions are illegitimate, the social contract begins to fray. This can lead to decreased political participation, increased civil unrest, and a weakening of democratic norms.
The Partisan Trust Paradox
The effect of disinformation on trust is not always straightforwardly negative. Some research has uncovered a “partisan trust paradox,” where exposure to partisan fake news can have divergent effects. For moderates and conservatives in the U.S. during a period of Republican governance, exposure to right-leaning fake news was associated with a decrease in trust in the media but an increase in trust in political institutions like Congress and the justice system. This suggests that disinformation can be used not just to indiscriminately destroy trust, but to strategically shift it. By discrediting independent arbiters of truth like the media, partisan actors can consolidate their supporters’ trust in the political institutions they control, insulating themselves from accountability and reinforcing partisan loyalty.
Section 4: Global Battlegrounds: Case Studies in Digital Election Interference
The theoretical mechanics and harms of fake engagement are best understood through their real-world application. Influence operations have been documented in elections across the globe, and while the core tactics are often similar, they are invariably adapted to the specific political and media landscape of the target country. These case studies illustrate both the common playbook of computational propaganda and its context-specific mutations.
4.1 The 2016 U.S. Presidential Election: A Two-Pronged Assault
The 2016 U.S. election serves as a watershed moment in the history of digital interference, revealing a sophisticated, multi-front assault from both foreign and domestic actors.
The Russian Interference Campaign
The U.S. Intelligence Community concluded that the Russian government conducted an extensive influence operation, code-named “Project Lakhta,” with the explicit goals of harming Hillary Clinton’s campaign, boosting Donald Trump’s candidacy, and sowing political and social discord in the United States. This operation was ordered directly by Russian President Vladimir Putin and had two main components. First, the Internet Research Agency (IRA), a state-sponsored troll farm based in St. Petersburg, created thousands of fake social media accounts posing as American activists and political groups. These accounts spread disinformation, organized real-world rallies, and promoted divisive content on issues like race, religion, and gun rights, reaching an estimated 126 million people on Facebook alone. Second, hackers affiliated with Russia’s military intelligence service (GRU) infiltrated the computer systems of the Democratic National Committee (DNC) and Clinton campaign officials, stealing and strategically leaking damaging documents and emails through platforms like WikiLeaks.
Cambridge Analytica’s Data-Driven Campaign
Operating parallel to the Russian effort was a domestic campaign of unprecedented scale and sophistication. The political consulting firm Cambridge Analytica, working for the Trump campaign, utilized personal data improperly harvested from up to 87 million Facebook profiles. This data was used to build detailed psychographic profiles, which allowed the campaign to segment the electorate based on personality traits and psychological vulnerabilities. These profiles were then used to deliver highly customized, micro-targeted digital advertisements. The strategy focused not only on energizing Trump’s base but also on voter suppression, for instance, by targeting potential Clinton supporters in key districts with negative messaging designed to discourage them from voting at all.
4.2 The 2016 Brexit Referendum: Botnets and Echo Chambers
The UK’s referendum on EU membership, held in the same year as the U.S. election, was also marked by significant digital manipulation. Research uncovered a network of 13,493 automated Twitter accounts, or bots, that were intensely active in the two weeks surrounding the vote. These bots systematically amplified messages supporting the “Leave” campaign, creating a distorted perception of public opinion. Tellingly, this entire network of accounts was deleted and disappeared from the platform immediately after the referendum was over. Further analysis showed that Eurosceptic voices were far more numerous and active on social media platforms than those supporting “Remain,” creating powerful, ideologically polarized echo chambers that reinforced the Leave narrative.
4.3 The Philippines: The “Duterte Effect” and Disinformation Armies
The 2016 presidential election in the Philippines demonstrated the power of combining a populist political figure with organized online armies.
Rodrigo Duterte’s campaign pioneered a strategy that became known as the “Duterte effect”. This involved the coordinated use of aggressive social media posting, paid trolls, and large, dedicated fan groups to dominate the online discourse. These “online armies” were used to spread propaganda, share easily digestible soundbites from Duterte’s speeches, and launch vicious harassment campaigns against opponents and critics, with high-profile women being frequent targets. This case study highlights how fake engagement can be used to construct a cult of personality around a leader and create a hostile online environment that silences dissent.
4.4 Brazil: Disinformation in Closed Messaging Networks
Brazil’s recent elections, particularly in 2018 and 2022, have been plagued by disinformation campaigns, often spearheaded by then-President Jair Bolsonaro and his supporters, who relentlessly spread false claims questioning the integrity of the country’s electronic voting system. The Brazilian case is notable for the central role played by closed messaging applications like WhatsApp and Telegram. Unlike public-facing platforms such as Facebook or Twitter, these apps allow disinformation to spread within large, private, and often encrypted groups, making the content nearly impossible for researchers and fact-checkers to monitor and counter. A study on the 2022 election found that while general use of messaging apps for news was not a predictor of misbelief, active participation in political groups on WhatsApp or Telegram was the single strongest predictor of holding misinformed beliefs about the election.
4.5 Kenya: Inflaming Ethnic Divisions
The Kenyan experience illustrates how digital influence operations can be devastatingly effective when they exploit and inflame pre-existing societal fault lines. As early as the 2007 election, mass SMS text messages were used to spread hate speech and incite ethnic violence. By the 2017 and 2022 elections, these tactics had evolved into more sophisticated social media campaigns involving paid influencers, coordinated disinformation, and the spread of “fake news” designed to stoke ethnic tensions for political gain. This case demonstrates that social media does not create these divisions, but it acts as a powerful and dangerous accelerant, allowing malicious actors to mobilize hatred and violence with unprecedented speed and scale.
The tactics of digital influence are not monolithic; they are strategically adapted to the unique media ecosystem and socio-political context of each target nation. In the United States, with its high penetration of Facebook and Twitter, these open platforms became the primary battlegrounds. In Brazil, where WhatsApp is a dominant mode of communication, the fight moved into closed, encrypted networks. In Kenya, tactics evolved from older technologies like mass SMS to sophisticated social media campaigns that tapped into deep-seated ethnic grievances. This adaptability demonstrates that there can be no “one-size-fits-all” defense against fake engagement.
Furthermore, a recurring playbook emerges from these cases, particularly in the Philippines and Brazil. Illiberal or populist leaders like Duterte and Bolsonaro used fake engagement not just as a temporary tool to win an election, but as a continuous method of governance. By persistently attacking the integrity of democratic institutions, whether the press, the judiciary, or the electoral system itself, they use disinformation to consolidate power. This strategy creates an information environment where their authority is insulated from factual challenges and electoral accountability, revealing fake engagement as a key instrument of modern democratic backsliding.
Table 4.1: Comparative Analysis of Influence Operations in Key Elections
| Election/Referendum | Primary Actors | Key Tactics | Primary Platforms | Core Narrative(s) | Documented Impact |
|---|---|---|---|---|---|
| 2016 U.S. Election | Russian IRA (State-sponsored); Cambridge Analytica (Commercial/Political) | Troll Farms; Hacking & Leaking; Data Micro-targeting; Psychographic Profiling | Facebook, Twitter, YouTube | Sowing social discord; Pro-Trump/Anti-Clinton messaging; “Corrupt” opponent | Reached 126M+ users; Fueled polarization; Damaged Clinton campaign |
| 2016 Brexit Referendum | Domestic Campaigns; Suspected Foreign Influence | Botnets; Coordinated Amplification; Creation of Echo Chambers | Twitter | Anti-EU; “Take Back Control”; Anti-Immigration | Artificial amplification of “Leave” messages; Skewed online discourse |
| Philippine Elections | Domestic Political Campaigns (e.g., Duterte) | “Online Armies” of Trolls & Supporters; Coordinated Harassment; Propaganda | Facebook, YouTube | Populist nationalism; Attacking opponents (especially women); “Strongman” leadership | Dominated online discourse; Silenced critics; Built cult of personality |
| Brazilian Elections | Domestic Political Campaigns (e.g., Bolsonaro) | Mass Messaging in Closed Groups; Delegitimization of Voting System | WhatsApp, Telegram | “Electronic voting is fraudulent”; Anti-institutional conspiracies | Widespread belief in electoral misinformation; Undermined trust in elections |
| Kenyan Elections | Domestic Political Campaigns | Hate Speech; Paid Influencers; Exploitation of Ethnic Tensions | SMS, Facebook, WhatsApp | Incitement along ethnic lines; “Fake news” about opponents | Inflamed ethnic violence; Undermined democratic institutions |
Section 5: The Next Wave: Election Integrity in the Age of Generative AI

As platforms and societies begin to grapple with the existing threats of fake engagement, a new and potentially more disruptive technological wave is cresting: generative artificial intelligence (AI). AI tools capable of creating realistic text, images, audio, and video are becoming widely accessible, threatening to dramatically escalate the scale and sophistication of influence operations and push democratic societies toward a deeper epistemic crisis.
5.1 The Democratization of Disinformation
The most immediate impact of generative AI is that it radically lowers the barrier to entry for creating high-quality, convincing disinformation.
Lowering the Barrier to Entry
Previously, a large-scale influence operation required significant resources, such as the state funding and hundreds of personnel of Russia’s IRA. Generative AI changes this calculus entirely. Tools like ChatGPT can produce endless streams of human-like text for propaganda, while image generators like Midjourney or DALL-E can create photorealistic fake images with a simple text prompt. An operation that once cost millions of dollars can now be executed by a small team, or even a single individual, for a fraction of the cost.
Hyper-Realistic Deepfakes
The most alarming development is the rise of “deepfakes”: AI-generated audio or video that convincingly depicts real people saying or doing things they never did. The technology for voice cloning has advanced to the point where only a small sample of a person’s real voice is needed to create a synthetic replica. This threat was made real in the run-up to the 2024 New Hampshire primary, when an AI-generated robocall impersonating President Joe Biden was sent to voters, telling them not to vote in the primary. Such deepfakes could be deployed in the final hours of a campaign to spread a devastating last-minute smear, with little time for the targeted candidate to effectively debunk it.
AI-Enhanced Influence Operations
Generative AI can also be used to supercharge existing tactics. It can help foreign actors overcome previous weaknesses, such as grammatical errors or a lack of cultural nuance, that made their propaganda easier to spot. AI-powered chatbots could be deployed at scale to engage in automated, personalized conversations designed to persuade or deceive voters. This represents a phase shift in the information war, moving from an era of distributing pre-made falsehoods to an era of mass-producing customized, synthetic realities. The battlefield is no longer just about which narrative wins, but about whether citizens can trust any audio-visual information they encounter online.
5.2 The Epistemic Crisis: The Liar’s Dividend
Beyond the creation of fake content, the mere existence of generative AI creates a secondary, more corrosive problem known as the “liar’s dividend”. As the public becomes more aware that convincing deepfakes are possible, it becomes easier for malicious actors to dismiss authentic, incriminating evidence as fake. A politician caught on a real video making a damaging statement can simply claim the video is a “deepfake,” exploiting public uncertainty to evade accountability.
This phenomenon threatens to poison the entire information ecosystem, eroding the foundational trust in all forms of media and making it nearly impossible to establish a shared set of facts for public debate.
5.3 The Challenge of Detection and Moderation
The rapid advancement of generative AI poses immense challenges for those tasked with defending the information space.
The Detection Arms Race
There is a technological arms race between AI models that generate synthetic media and the tools designed to detect it. Promising detection methods are being developed, such as Intel’s “FakeCatcher,” which analyzes subtle changes in facial blood flow invisible to the human eye, but they are constantly trying to keep up with more sophisticated generation techniques. As AI models become more powerful, the artifacts and glitches that once gave away a fake are becoming rarer, making reliable detection increasingly difficult.
Content Moderation at Scale
The potential for AI to flood social media with synthetic content presents a nightmare scenario for platform content moderators. The sheer volume and speed at which AI can produce content could overwhelm existing moderation systems, which already struggle to keep pace with human-generated material. This creates severe risks of both under-enforcement, where harmful deepfakes and disinformation are allowed to spread unchecked, and over-enforcement, where platforms’ automated systems mistakenly flag and remove legitimate, authentic content, leading to censorship.
This dynamic creates a fundamental asymmetry that disproportionately empowers malicious actors over defenders. The cost, speed, and ease of creating convincing disinformation with AI are falling much more rapidly than the cost, speed, and difficulty of detecting and debunking it at scale. A bad actor can generate a thousand different, plausible-looking fake images of voter fraud in minutes for a negligible cost. In contrast, fact-checkers, journalists, and platform moderators must expend significant human time and resources to investigate and debunk each one. The sheer volume of potential fakes can function as a “denial-of-service” attack on society’s truth-seeking capacity, sowing chaos and confusion even if many of the individual fakes are eventually proven false. In the age of generative AI, the strategic advantage lies with the offense.
Section 6: The Tripartite Response: Platforms, Policies, and Public Resilience
Countering the multifaceted threat of fake engagement requires a comprehensive, multi-stakeholder approach. No single entity can solve the problem alone. An effective defense depends on the coordinated actions of technology platforms, governments and regulators, and a resilient and informed public, supported by civil society. However, the current landscape of countermeasures is fragmented, inconsistent, and often lagging behind the evolving threat.
6.1 Platform Governance: Self-Regulation and Its Limits
As the primary vectors for the spread of disinformation, social media platforms bear a significant responsibility for mitigating its harms. Their approaches to this responsibility, however, vary dramatically.
Meta (Facebook/Instagram)
Meta has developed a relatively structured and public-facing set of policies aimed at protecting elections. Its Community Standards explicitly prohibit voter interference, coordinated inauthentic behavior (CIB), and certain forms of hate speech. The company invests in threat intelligence teams to proactively identify and dismantle CIB networks. Its Advertising Policies, which are more restrictive than its general content rules, prohibit ads that discourage voting, prematurely declare victory, or seek to delegitimize the electoral process. Meta also partners with third-party fact-checkers to label and reduce the distribution of false content and maintains a public ad library for transparency. Despite these efforts, the platform’s core algorithmic model, which prioritizes engagement, remains a key vulnerability.
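Meta’s public ad library mentioned above can also be queried programmatically, via the Graph API’s ads_archive endpoint. The sketch below shows a minimal query; the API version and exact field names are best-effort assumptions and should be verified against Meta’s current documentation, and the access token is a placeholder.

```python
import requests

ACCESS_TOKEN = "YOUR_TOKEN_HERE"  # placeholder; requires Meta developer access

# Query the public Ad Library (ads_archive) for political ads mentioning "election".
# Endpoint, parameters, and fields follow Meta's documented Graph API interface as
# best recalled here; check the current API version and parameter formats before use.
resp = requests.get(
    "https://graph.facebook.com/v19.0/ads_archive",
    params={
        "search_terms": "election",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['US']",
        "fields": "page_name,ad_delivery_start_time,spend,impressions",
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```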
X (formerly Twitter)
The governance approach at X has undergone a radical transformation since its acquisition by Elon Musk in 2022. The platform has reversed many of its previous safety policies, including a ban on all political advertising that had been in place since 2019. Key trust and safety teams responsible for content moderation and election integrity were dismantled or drastically reduced. Musk has reinstated thousands of accounts that were previously banned for spreading misinformation and hate speech, and has personally used the platform to amplify conspiracy theories. The platform has shifted to a “crowdsourced” fact-checking model called Community Notes, which often fails to quickly and effectively debunk false claims. Furthermore, Musk established an “election integrity community” on the platform, which has become a major hub for users to share unsubstantiated and fabricated claims of voter fraud, including attempts to dox election workers.
YouTube
YouTube’s policies prohibit content that aims to mislead voters about voting procedures, incites interference with democratic processes, or contains certain types of technically manipulated media. The platform uses recommendation systems to try to surface content from authoritative news sources when users search for election-related topics. However, YouTube made a controversial policy change in 2023, announcing it would no longer remove content that advances false claims about fraud in the 2020 and other past U.S. presidential elections. The company justified this reversal by citing the importance of open debate, even of “disproven assumptions.” This decision reflects a significant philosophical shift away from content removal and toward a more permissive, speech-protective stance, which critics argue risks allowing harmful electoral falsehoods to persist and spread.
The stark contrast between Meta’s structured (though imperfect) system and the post-acquisition chaos at X demonstrates the fundamental unreliability of a purely self-regulatory model. Without external, legally binding requirements, platform safety policies are subject to the commercial pressures and ideological whims of their leadership. This inconsistency makes a strong case for baseline government regulations that apply to all major platforms, ensuring a minimum standard of care for democratic processes regardless of who owns the company.
6.2 Government Regulation: The Search for Accountability
In response to the failures of self-regulation, governments around the world have begun to explore legislative and regulatory solutions. However, these efforts are shaped by vastly different legal traditions regarding free speech and corporate responsibility.
The U.S. Context and Section 230
In the United States, the conversation about platform regulation is dominated by Section 230 of the 1996 Communications Decency Act. This law provides internet platforms with broad immunity from liability for the content posted by their users. It is credited with enabling the growth of the modern internet, but critics now argue that it shields massive, powerful companies from responsibility for the harms their platforms facilitate, including the spread of election disinformation. The debate over amending or repealing Section 230 is highly polarized, with some arguing for changes that would incentivize more aggressive content moderation and others pushing for changes that would prevent platforms from removing political speech.
The European Union’s Digital Services Act (DSA)
The European Union has taken a fundamentally different approach with its landmark Digital Services Act (DSA). Rather than focusing on liability for individual pieces of content, the DSA establishes a framework of risk management and due diligence for large online platforms. The law requires these companies to conduct regular risk assessments of how their services could be used to spread disinformation or undermine elections, and to implement reasonable measures to mitigate those risks. It also mandates greater transparency around algorithms and content moderation, and gives regulators significant enforcement powers, including the ability to levy massive fines. The DSA represents a shift from a speech-centric model to a systemic risk and duty-of-care model.
International Approaches
Other democracies are experimenting with their own regulatory models. France has enacted a law that, during election periods, allows for the expedited removal of disinformation. Germany’s Network Enforcement Act (NetzDG) requires platforms to quickly remove content that is “clearly illegal” under German law, such as hate speech. In the U.S., the state of California passed a law (AB 2655) requiring platforms to label or remove deceptive AI-generated deepfakes close to an election, though this law is currently being challenged in court by X on constitutional grounds.
This global landscape reveals a fundamental tension between the American free speech-absolutist model, which prioritizes non-interference with expression, and the European risk-management model, which prioritizes the protection of democratic processes from systemic threats. This regulatory fragmentation creates an uneven global defense system, allowing international disinformation campaigns to exploit the more permissive legal environments to target vulnerable populations.
6.3 Civil Society and Public Resilience: Building a Human Firewall
While platforms and governments debate top-down solutions, a diverse ecosystem of civil society organizations, researchers, and educators is working on bottom-up approaches to build public resilience against disinformation.
Fact-Checking Initiatives
Independent fact-checking organizations have emerged as a critical line of defense. A growing body of research confirms that fact-checking is broadly effective at correcting misperceptions and reducing belief in false claims across different countries and political contexts. Exposure to a fact-check can significantly increase a person’s factual accuracy, and these effects can be durable for at least several weeks. However, the limitations of fact-checking are also clear. Its effects on changing voting intentions or feelings toward well-known political figures are often minimal, as partisan loyalties can override factual corrections. Furthermore, the effects of a single fact-check can decay over time, suggesting the need for repeated exposure or other interventions to reinforce accurate beliefs.
Media Literacy Campaigns
Another key strategy is to equip citizens with the skills to identify and resist manipulation themselves. Media literacy campaigns aim to teach people how to spot the signs of false news, such as sensational headlines, suspicious links, or manipulated images. Large-scale experiments in the U.S. and India, modeled on a real-world Facebook campaign, found that a simple “tips-based” intervention significantly improved participants’ ability to discern between legitimate mainstream news headlines and false news headlines. Investing in digital and AI literacy, particularly through formal education curricula, is seen as a crucial long-term strategy for building societal resilience.
Monitoring and Advocacy
Civil society organizations play a vital role in holding platforms and governments accountable. Groups like the Brennan Center for Justice and the Open Society Foundations monitor the spread of disinformation, research its impact on vulnerable communities, and advocate for policy solutions. They also work to build rapid-response networks that connect election officials, community groups, and the media to quickly identify and disseminate corrective information when a disinformation campaign emerges. These efforts help to fill the gaps left by overburdened election officials and slow-moving platform responses.
Section 7: Conclusion and Strategic Recommendations
Fake engagement on social media is not a peripheral issue or a mere nuisance of the digital age; it is a systemic and escalating threat to the integrity of democratic elections globally. The evidence presented in this report demonstrates that a complex ecosystem of state, political, and commercial actors is leveraging the psychological and algorithmic vulnerabilities of our information environment to distort public discourse, suppress votes, and erode the foundational trust upon which democracy depends. The advent of generative AI is set to dramatically amplify this threat, democratizing the tools of mass deception and creating an environment of profound epistemic uncertainty.
7.1 Synthesizing the Threat: The Self-Reinforcing Cycle of Democratic Decay
The core danger of fake engagement lies in its ability to create a self-reinforcing cycle of democratic decay. It begins by polluting the information space, making it difficult for citizens to distinguish fact from fiction and engage in reasoned deliberation. This degradation of public discourse fuels polarization and erodes the shared factual reality necessary for a society to address collective challenges. This epistemic breakdown, in turn, weakens trust in core democratic institutions: the media, the electoral system, and government itself. As these institutions lose legitimacy, they become more vulnerable to further manipulation and attack, and the public becomes more susceptible to conspiratorial and anti-democratic narratives. This “doom loop” presents an “existential” challenge that requires a robust, proactive, and multi-layered defense.
7.2 The Path Forward: A Multi-Layered Defense Strategy
There is no single “silver bullet” solution to the problem of fake engagement. An effective response cannot rely solely on technological fixes, government regulation, or public education alone. Instead, a durable defense strategy must be a portfolio approach, integrating actions across all three domains to create overlapping layers of protection. Policymakers, platforms, and the public must act in concert to manage this complex and evolving threat.
7.3 Strategic Recommendations
Based on the analysis in this report, the following strategic recommendations are proposed for key stakeholders:
For Policymakers and Regulators:
- Adopt Systemic Regulation Focused on Transparency and Risk Mitigation. The polarized debate in the U.S. over amending or repealing Section 230 has led to legislative paralysis. A more productive path forward would involve shifting focus from liability for individual content to systemic accountability, drawing lessons from the EU’s Digital Services Act. This would include mandating that large platforms conduct regular, independent audits of the systemic risks their services pose to electoral processes and requiring them to implement reasonable mitigation measures.
- Mandate Transparency for Political and AI-Generated Content. Legislation should require clear, conspicuous, and standardized labeling of all political advertising, including information on who paid for the ad and the targeting criteria used. Furthermore, any use of synthetic or AI-generated content in political advertising must be explicitly disclosed to the audience, a measure already being implemented in some jurisdictions.
- Invest in and Foster International Cooperation. Democratic governments should increase funding for independent research into the psychosocial and algorithmic dynamics of disinformation. They must also work together to establish international norms and attribution standards for responding to foreign state-sponsored influence operations, creating a unified front against transnational threats to democracy.
For Technology Platforms:
- Fundamentally Redesign Algorithms to Prioritize Democratic Health Over Engagement. Platforms must move beyond reactive content moderation and address the root cause of the problem: recommendation algorithms that reward sensationalism and outrage. This requires re-engineering these systems to de-prioritize divisive, unverified, and inauthentic content, even if it results in a short-term reduction in engagement metrics. Introducing “friction,” such as prompts that encourage users to consider the accuracy of an article before sharing, can be a low-cost, effective way to promote more mindful behavior.
- Invest in Proactive, Global Trust and Safety Operations. Platforms must make sustained, significant investments in their trust and safety teams, ensuring they are adequately staffed with personnel who possess the linguistic and cultural expertise necessary to identify and counter threats in all regions where they operate, particularly in the Global South, which is often underserved. This includes proactive threat intelligence to identify and dismantle CIB networks before they can cause widespread harm.
- Lead the Development and Adoption of Content Provenance Standards. To combat the threat of deepfakes, all major platforms should collaborate on and implement robust, industry-wide technical standards for content provenance, such as cryptographic watermarking or signatures. This would allow users to more easily verify the origin and authenticity of the images and videos they encounter.
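To illustrate the signature half of such a provenance scheme, the sketch below hashes a media file and signs the digest with an Ed25519 key using the Python cryptography package; verification fails if even one byte of the file changes. This is only the core cryptographic idea under simplifying assumptions: real provenance standards bind much richer metadata (capture device, edit history) and manage keys through certificate infrastructure.

```python
from hashlib import sha256
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the hash of the media at capture or publication time.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

media_bytes = b"<binary contents of campaign_video.mp4>"  # stand-in for real media bytes
signature = signing_key.sign(sha256(media_bytes).digest())

# Platform/consumer side: recompute the hash and verify the signature.
def is_authentic(candidate_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the bytes match what the key holder originally signed."""
    try:
        verify_key.verify(signature, sha256(candidate_bytes).digest())
        return True
    except InvalidSignature:
        return False  # file was altered, or signed by a different key

print(is_authentic(media_bytes, signature))               # True
print(is_authentic(media_bytes + b"tamper", signature))   # False
```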
For Civil Society, Educators, and the Media:
- Scale and Integrate Media and AI Literacy Education. Public and private funders should dramatically increase support for media literacy programs. These programs should be integrated into formal education curricula at all levels, from primary school to university, to build long-term, generational resilience against manipulation.
- Establish and Strengthen Rapid-Response Information Networks. Civil society organizations should work to create formal networks that connect fact-checkers, journalists, academics, and election officials. These networks can serve as an early warning system to identify emerging disinformation narratives and coordinate the rapid, widespread dissemination of accurate, corrective information through trusted messengers.
- Revitalize Local Journalism. A healthy democracy requires a healthy local news ecosystem. Philanthropic organizations and policymakers should explore and fund models to support local journalism, which serves as a trusted source of community information and can fill the “information gaps” that are so often exploited by disinformation campaigns. A well-informed citizenry, grounded in reliable local reporting, is the ultimate human firewall against digital deception.