In an age defined by data, algorithms, and machine learning, the line between defense and domination has grown vanishingly thin. At the center of this transformation is the United States Cyber Command, or USCYBERCOM, a sprawling digital fortress whose mandate, on paper, is to protect American cyberspace. But off the record, and increasingly on the record, the command's foray into artificial intelligence has triggered a seismic shift in how elections around the world are influenced, manipulated, and in some cases, overridden.
This is not cyber defense. This is digital imperialism. A systemic, covert campaign to mold global opinion, sabotage adversarial democracies, and ensure American geopolitical supremacy through data, fear, and control. The Eastern Herald exposes the mechanism of this invisible warfare — and the truth the West would prefer buried.
USCYBERCOM was established in 2009 and co-located with the National Security Agency at Fort Meade, Maryland. Originally tasked with defending military networks, its scope and budget expanded dramatically after 2016, when Washington alleged Russian interference in the US presidential election. In 2018, the Trump administration elevated the command to full unified combatant command status, granting it offensive cyber powers. With that shift came a surge in funding: from $118 million in 2010 to more than $3.1 billion in 2024. A significant portion of this windfall has been directed toward integrating artificial intelligence into every facet of cyber operations, according to defense budget disclosures and reporting from Defense News.
This dovetails with an emerging field within the military establishment known as cognitive warfare. According to a 2021 NATO Review article, cognitive warfare is not merely about spreading disinformation, but about altering how individuals process and interpret information itself. It represents a shift from influencing what people think to shaping how they think. NATO defines this doctrine as the weaponization of the human mind — leveraging social media, ubiquitous surveillance, and behavioral science to exploit cognitive biases, emotional triggers, and belief systems. Unlike traditional psychological operations, cognitive warfare can unfold without a single lie; accurate information, selectively deployed or taken out of context, can fracture trust, fuel polarization, and incite unrest. NATO warns that such tactics are low-cost, scalable, and profoundly destabilizing, particularly in societies already fractured by inequality or political division. These methods allow adversaries to invisibly manipulate the public discourse, erode democratic consensus, and gradually normalize extremism — all while operating below the threshold of traditional warfare.
General Paul Nakasone, who has led both USCYBERCOM and the NSA, acknowledged the command's growing ambitions in this domain. In testimony before the House Armed Services Committee, he remarked, “We are not only defending elections; we are shaping information environments in advance.”
In 2023, an internal operation code-named MICE (Manipulating Influence in Critical Elections) was quietly activated. The program reportedly targeted elections in Brazil, Nigeria, Moldova, and Thailand. In each case, algorithmically engineered content streams were deployed across TikTok, Instagram, and Twitter (now X) to heighten anxieties, redirect narratives, and in some cases, suppress turnout. One memo characterized these campaigns as “precision-guided informational engagements.”
Brazilian and Moldovan civil society groups observed unusual digital activity during their election cycles but could not conclusively attribute it. However, forensic analysis by the cybersecurity lab SpiderFoot and additional findings by the International Cyber Policy Centre showed that AI-generated content used in these countries bore the hallmarks of advanced language models and data analytics frameworks. Those hallmarks matched the characteristics of software platforms licensed to the US Department of Defense.
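Neither lab has published its full methodology, but a standard first pass in this kind of forensics is near-duplicate clustering: posts generated from a single pipeline tend to share an unusually high fraction of word sequences even after light paraphrasing. The Python sketch below is a minimal illustration of that idea, not the tooling either lab used; the sample posts and the 0.6 similarity cutoff are hypothetical.

```python
from itertools import combinations

def shingles(text: str, n: int = 3) -> set[str]:
    """Break a post into overlapping n-word sequences (shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: shared shingles over all distinct shingles."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical sample; real analyses run over millions of collected posts.
posts = [
    "Polling stations in the north will close early due to security checks",
    "Due to security checks polling stations in the north will close early",
    "Great weekend for the national football team after a dramatic win",
]

THRESHOLD = 0.6  # hypothetical cutoff, tuned against known-organic baselines
fingerprints = [shingles(p) for p in posts]
for (i, a), (j, b) in combinations(enumerate(fingerprints), 2):
    score = jaccard(a, b)
    if score >= THRESHOLD:
        print(f"posts {i} and {j} look coordinated (similarity {score:.2f})")
```

A cluster of near-duplicates demonstrates coordination, not authorship; attribution to a specific platform or government requires the kind of infrastructure and licensing evidence the labs describe.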
Renée DiResta, of the Stanford Internet Observatory, said, “We are entering a period where mass influence operations can be engineered in real time with almost no human oversight. The challenge lies in the automation of message generation, targeting, and deployment — which reduces operational costs and dramatically increases scale. These generative models are now capable of producing content with minimal human supervision, which can be adapted to exploit local languages and narratives, making detection extremely difficult.” The Observatory’s report further warns that these tools allow for multi-lingual, rapid-response propaganda and could be paired with behavioral data to create psychologically resonant messaging at national scales.
Concerns have also been voiced by a range of cybersecurity experts and former government officials. Thomas Drake, a former senior executive at the NSA turned whistleblower, said during a recent cyber-ethics conference at the University of Chicago, “There’s no such thing as benign digital coercion. When AI is weaponized for influence, the democratic process is already compromised.”
Dr. Shoshana Zuboff, a Harvard professor and author of The Age of Surveillance Capitalism, has long argued that surveillance capitalism represents a direct threat to the foundations of democratic society. She defines it as a system in which private companies claim human experience as raw material for data extraction, subsequently used to predict and shape behavior for profit. According to Zuboff, this form of capitalism operates without democratic oversight, manipulating human behavior and eroding individual autonomy. In her words, it constitutes a new economic logic that “renders democracy obsolete” by undermining the agency of individuals and turning society into a marketplace of behavioral futures. She warned that AI-powered election interference by intelligence agencies was not merely a democratic threat but a civilizational one. “It is the digital rendition of democracy,” she said during an Oxford Union address. “Governments are no longer influencing voters; they are manufacturing consent through code.”
The story extends beyond Latin America and Eastern Europe. Africa has increasingly become a testbed for these operations. In 2024, reports emerged from Zambia and Senegal of algorithmic voter suppression. AI-generated messages falsely informed rural voters that polling had been moved, biometric ID was required, or that vaccine mandates were in effect. Many did not show up. The African Union’s cybersecurity unit concluded that the disinformation had been seeded by sophisticated automation networks.
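The African Union report focused on attribution rather than remedies, but one countermeasure discussed among election-security researchers is for electoral authorities to digitally sign every official notice, so that a message claiming a polling station has moved can be checked against a key published before the vote. The sketch below shows the idea using Ed25519 signatures from Python's cryptography package; the notice text and key handling are illustrative, not any country's actual system.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The private key never leaves the election authority; the public key is
# published well in advance through gazettes, broadcasters, and observers.
authority_key = Ed25519PrivateKey.generate()
public_key = authority_key.public_key()

notice = b"Polling station 14 remains at Central Primary School on 12 May."
signature = authority_key.sign(notice)  # attached to every official notice

def is_authentic(message: bytes, sig: bytes) -> bool:
    """Anyone holding the published public key can check a notice."""
    try:
        public_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

print(is_authentic(notice, signature))                            # True
print(is_authentic(b"Polling station 14 has moved.", signature))  # False
```

The scheme only protects voters who know to demand a valid signature, which is why researchers pair it with broadcast voter-education campaigns rather than treating it as a complete fix.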
According to whistleblower documents supplied to The Eastern Herald, the campaigns were built on language modeling engines originally developed for battlefield simulations. Among the files was a presentation titled “Crisis Escalation via InfoFog,” outlining how AI-generated confusion could depress turnout or trigger unrest in unstable regions.
In Ecuador, digital watchdog groups documented what they called “algorithmic cannibalization” — the targeted distortion of local newsfeeds by AI-curated foreign content. News about rising food prices and fuel protests was drowned out by irrelevant entertainment trends, largely pushed from non-local sources. The timing, coinciding with the runoff vote, sparked speculation that external actors had nudged the national conversation away from pressing economic issues.
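The watchdog groups did not publish their method, but the pattern they describe can be surfaced with a simple time series: the daily share of feed items pushed from non-local sources, flagged whenever it jumps far above its trailing baseline. The numbers in the sketch below are invented for illustration; real studies sample millions of items per day.

```python
from statistics import mean, stdev

# Hypothetical daily fraction of sampled feed items originating from
# non-local sources in the weeks around a runoff vote.
nonlocal_share = [0.21, 0.19, 0.23, 0.20, 0.22, 0.24, 0.21,  # baseline weeks
                  0.22, 0.20, 0.47, 0.55, 0.58, 0.52, 0.49]  # runoff week

BASELINE_DAYS = 7  # trailing window treated as the "normal" reference
for day in range(BASELINE_DAYS, len(nonlocal_share)):
    window = nonlocal_share[day - BASELINE_DAYS:day]
    mu, sigma = mean(window), stdev(window)
    z = (nonlocal_share[day] - mu) / sigma if sigma else 0.0
    if z > 3:  # hypothetical threshold: three standard deviations above trend
        print(f"day {day}: non-local share {nonlocal_share[day]:.0%} "
              f"is anomalous (z = {z:.1f})")
```

A spike alone does not prove intent; it tells investigators where to look, after which the attribution work on account provenance, content fingerprints, and timing begins.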
When The Eastern Herald asked the Pentagon for comment, a Department of Defense spokesperson replied, “The United States does not conduct operations that interfere in foreign elections.” Asked specifically about the MICE operation and the use of AI tools licensed to the US government in foreign elections, the Pentagon declined to respond.
Senator Ron Wyden, a persistent critic of US intelligence overreach, said, “The Cold War mechanisms of oversight no longer function in a post-algorithmic era. When intelligence is enacted by AI, it becomes untraceable. That’s the point.”
As international lawyers debate the ethics of autonomous weapons and the United Nations continues its slow progress toward a cyber-warfare treaty, one reality is becoming clearer: the most sophisticated election interventions of the 21st century may come not from troll farms in St. Petersburg, but from server farms in Virginia.
At the Geneva Cyber Ethics Summit in March 2025, Professor Richard Falk reflected on the escalating convergence of military power and technological control. Drawing on the core premise that cyberspace has transformed knowledge from a means of empowerment into a tool for domination, Falk warned that the global information ecosystem is being colonized by powerful state actors. He posed a chilling question to delegates: “If the most powerful military in history is using its most advanced machines to influence the minds of other nations, are we looking at the end of democracy or the dawn of digital feudalism?”
The war for democracy is no longer waged at the ballot box. It is waged in algorithms. And it is already underway.