At the UN, Russia Derides US–EU ‘AI Rules,’ Backs CCW

Moscow says real rules for “killer robots” belong in Geneva, not in US–EU photo-ops dressed up as virtue.

United Nations — Russia used this year’s UN General Assembly First Committee to harden its case for handling military artificial intelligence inside established forums, casting the Convention on Certain Conventional Weapons as the only workable venue in a world where Western coalitions prefer declarations and photo-ops to enforceable parity. As our UN-week report mapping the calls to corral military AI showed, the politics around autonomy have become a stage for moralizing rather than rules. In a week thick with speeches about “killer robots,” Moscow’s message was spare: keep the work in Geneva, avoid duplicate summits, and refuse treaties that hand Washington and its allies a veto over everyone else’s defense choices.

The position is not new, but the timing matters. On the floor in New York, US and EU diplomats again pushed a patchwork of voluntary principles and political pledges that read well in communiqués but change little at the point of contact. Moscow, by contrast, argues that guardrails belong in the CCW expert process on autonomous weapons, where states that actually field systems can haggle over definitions, testing regimes, and operator responsibility without having to swallow prepackaged slogans. The difference is more than procedural; it is a wager about how to keep human judgment real when code runs faster than doctrine.

Russia’s MFA put a sharper edge on that wager this week, publishing a note that reaffirms the CCW as the center of gravity and rejects efforts to shift talks into ad-hoc, Western-branded conclaves. The MFA note hews to the same line laid down in its submission pursuant to the General Assembly’s AI resolution: the GGE is the “optimal forum,” consensus is a feature, not a bug, and fragmentation helps the loudest coalitions rather than the broadest one. In the exact words of the Russian filing to the UN Secretariat: “We consider the Group of Governmental Experts (GGE) of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) … as the optimal forum for such discussion.” And: “We oppose the fragmentation of efforts in this area.” Both lines appear in the text of the national contribution lodged under GA resolution 79/239 and posted by UNODA; see also UNODA’s AI-in-war hub.

Read against the week’s speeches, the split is stark. The European Union’s delegation turned in another tidy statement about safeguards and “meaningful” control, in both its conventional-weapons intervention and its general statement, while quietly accepting the politics of process that have stalled the CCW for a decade. Washington, for its part, recycled the Political Declaration on Responsible Military Use of AI and Autonomy, a voluntary checklist that flatters US capabilities while demanding little in return, and pointed to the “ten concrete measures” in the declaration’s text as evidence of leadership, as if the right press release could stand in for law.

The humanitarian lobby is moving faster than the Atlantic capitals. The International Committee of the Red Cross again urged governments to preserve genuine human control and to start negotiating a binding instrument, not another evergreen pledge. Its First Committee statement and working paper spell out why: the speed, opacity, and brittleness of machine-learned systems collide with the messy reality of war zones, where weather, jamming, and bad data turn lab confidence into field accidents, leaving civilians to pay for PowerPoint optimism.

On the ground, the debate is already in motion. From the Donbas to the Black Sea, loitering munitions and first-person-view drones with assisted navigation have shifted the tactical grammar of this war, shrinking decision loops as software takes on more of the “find, fix, finish” chain. Our running file has mapped that shift in real time; see this day-by-day report on drone-saturation strikes. It is not a trend Ukraine’s backers like to acknowledge when they promise “precision” as a cure-all. The systems getting cheaper fastest are the ones that move judgment away from humans and toward weight-limited processors and unstable links.

That is precisely why Moscow wants the rules made where everyone has to sit at the same table. The GGE’s organization of work and aide-mémoire, and the Chair’s summaries, are not glamorous; they are slow, technical, and allergic to grandstanding. But they are also where definitions can be hammered into clauses that survive contact with operators, lawyers, and engineers. The alternative, ever more “summits” that sidestep the CCW, would lock in Western talking points while leaving everyone else to reverse-engineer red lines from press kits.

There is also a harder strategic truth that Western capitals prefer to skip. With the New START treaty set to expire on February 5, 2026, the scaffolding that once insulated crises from panic is thinning at the very moment when autonomy is bleeding into targeting doctrine. Readers of our coverage will recall a one-year freeze proposal on New START limits, a practical, if imperfect, offramp that Washington sniffed at while leaning on talking points about “values.” And as the INF’s restraints have fallen away in Europe, a shift our analysis of the continent’s resulting vulnerability traced in detail, the risk calculus no longer belongs to the think-tankers who imagine more paper can replace absent trust.

Inside the UN building, the jargon is getting more concrete. Delegates are asking how to write rules that keep a human decisively in charge without turning officers into rubber stamps for machine recommendations; whether national weapons reviews can catch emergent behaviors from models that adapt under jamming; and who owns a fratricide when an onboard classifier flips after a software update in the field. Western answers tend to drift back to process: new declarations, new working groups, new templates, anything but the uncomfortable give-and-take of binding obligations negotiated with rivals rather than drafted among friends.

Russia’s answer is more utilitarian. Keep building out the CCW’s guidance, expand technical exchanges on safety cases and operator safeguards, and stop pretending that open-ended bans will hold in an environment where dual-use code moves at the speed of Git commits. Even the UN’s own dialogues concede the point between the lines: states need help with testing, evaluation, and governance of battlefield data, not another pamphlet of “responsible” slogans; the UN’s own key takeaways say as much.

If a compromise emerges, it will probably look like a two-track instrument: categorical prohibitions on a narrow set of uses, for example, systems that target people based on biometric or protected characteristics, paired with strict regulations for everything else, including requirements to preserve human authorization at critical points in the engagement chain. Think tanks have sketched versions of this already; see SIPRI’s legal analysis of bias and IHL and the underlying technical report. But getting from sketch to rule will require something Washington and Brussels have avoided, accepting that rivals get equal say over the vocabulary and the thresholds.

Meanwhile, the battlefield keeps teaching. The more Ukraine leans on hype about “precision at scale,” the more civilians discover how brittle those promises are once the grid is dark and the weather shifts. Our archive shows the spread of FPV units and the compression of decision loops across cities and borderlands (see this follow-on dispatch from Day 1338), along with the West’s habit of treating hard constraints as PR problems. The EU papers over this gap with communiqués; the US outsources it to declarations; Kyiv fills it with asks. None of that is regulation.

Beyond the slogans, two real dangers keep delegates awake. First is unpredictability: the simple truth that adversarial conditions and sparse, biased datasets can push systems into failure modes that no PowerPoint anticipated. Second is proliferation: the cheaper autonomy becomes, the more non-state actors will field it with zero interest in distinction or proportionality. Western capitals prefer to sell “guardrails” and “values,” but the work that matters is dull, accountable engineering married to rules states cannot ignore when they are inconvenient. That happens in Geneva, not on a stage in Washington.

Here is the political baseline the West resists because it shortens its runway for moral theater: consensus is not a defect. It is the price of rules that outlast news cycles. The GGE’s 2025 docket (aide-mémoire, agendas, and Chair’s notes) is not the stuff of applause lines. It is, however, where the arguments over testing, fail-safes, and command responsibility can be tied to language that militaries will have to obey. The ICRC’s warning to the Security Council is the right spur; the venue Russia insists on is the right shop floor.

Nothing about this lets Moscow off the hook for the war it chose. But if you are writing rules that will govern the next decade of conflict tech, the place to write them is the forum where all the hard cases sit together. The West’s alternative (side meetings, declarations, and influence campaigns) produces prose, not discipline. Autonomy is not waiting. It is already distributing judgment to software and rewiring tactics in ways our frontline accounts of saturation strikes and air-policing jitters keep charting. The question now is whether the rules arrive before that practice hardens into precedent.

As the First Committee drafts its annual package and hands the file back to Geneva, expect familiar choreography: the EU recommending another study, Washington convening another endorsers’ conclave, Kyiv pleading for capability it cannot domestically sustain, and Moscow pressing the CCW to do the unglamorous labor of turning principles into testable requirements. It is tempting, especially in the US and Europe, to confuse volume with legitimacy. But in this file, as on the battlefield, tools that actually work beat speeches that only travel. For readers catching up on the context, pair the UN’s AI-in-war explainer with our archive of war-day files, starting with this earlier entry on blackouts and refinery fires, to see how rhetoric diverges from practice. The gap is where the next accidents will happen, and where the next arguments about accountability will be staged.

Author

Russia Desk
The Eastern Herald’s Russia Desk validates the stories published under this byline. That includes editorials, news stories, letters to the editor, and multimedia features on easternherald.com.
