Weapons: AI Equipped Weapons Proliferating


May 10, 2026: Late last year, the Chinese, Russian and North Korean leaders viewed an armed forces parade in China. The parade featured various models of drones that could independently fly alongside combat aircraft.

The demonstration of technical might alarmed the Americans, who concluded that the American program for unmanned combat drones was trailing behind China’s. Even Russia was seen to be ahead in developing facilities that could produce advanced drones.

American leaders pressed national defense firms to step up their involvement. Earlier, Anduril, a defense technology enterprise in California, had begun fabricating A.I.-backed, self-flying drones that appeared similar to the ones shown in China. Production at a factory in Ohio started three months ahead of schedule, part of an effort to close the gap with China.

China’s military display and the American response were part of a growing global arms race over A.I.-backed autonomous weapons and defense systems. Designed to operate by themselves using A.I., the technology reduces the need for human involvement in decisions like when to hit a moving target or defend against an attack.

In recent years, many nations have quietly engaged in a contest of one-upmanship over these arsenals, including drones that identify and strike targets without human command, self-flying fighter jets that synchronize attacks at speeds and altitudes that few human pilots can reach, and central systems run by A.I. that scrutinize intelligence to recommend airstrike targets quickly.

America and China, the world’s largest military powers, are at the center of the rivalry. But the race has widened. Russia and Ukraine, now in their fifth year of war, are looking for every technical advantage. India, Israel, Iran and others are investing in military A.I., while France, Germany, Britain and Poland are rearming amid doubts about the American administration’s commitment to NATO.

Each nation is aiming to amass the most sophisticated technological stockpile in case it needs to fight drone against drone and algorithm against algorithm in ways that people cannot match, defense and intelligence officials said.

Russia, China and America are all building A.I. weaponry both as a deterrent and as an instrument of mutually assured destruction.

The buildup has been compared to the dawn of the nuclear age in the 1940s, when the atomic bomb’s destructive power forced rival nations into an uneasy draw, leading to more than four decades of nuclear weapons brinkmanship.

But while the implications of nuclear weapons are well understood, A.I.’s military capabilities are just beginning to be known. The technology, which does not need to pause, eat, drink or slumber, is set to upend warfare by making battles faster and more unpredictable, officials said.

Precisely which nation is further ahead is unclear. Many programs are in a research and development phase, and budgets are classified. Technicians from China, America and Russia watch one another’s factory lines, military displays and weapons deals to determine what the others are up to.

China and Russia are experimenting with letting A.I. make battlefield decisions on its own. China is developing systems for dozens of autonomous drones to coordinate attacks without human input, while Russia is building Lancet drones that can circle in the sky and autonomously select targets.

Even as the specifics of the technologies remain veiled, the intentions are clear. In 2017, the Russian leader declared that whoever leads in A.I. will rule the planet. Two years ago, the Chinese leader asserted that technology would be the primary battlefield of geopolitical rivalry. Earlier this year the American Secretary of War ordered all branches of the American military to adopt A.I. and increase their efforts.

Billions of dollars are being poured into this. The American military requested more than $13 billion for autonomous systems in its latest budget, and has spent billions more over the past decade, though the total is difficult to track because A.I. funding has been spread across many programs.

China, which some researchers said was spending amounts comparable to those of the Americans, has used financial incentives to spur private industry to build A.I. capabilities. Russia has invested in drone and autonomy-related programs, using the war in Ukraine to test and refine drone use in combat.

China has recommended international frameworks for governing military A.I. and called for a practical and sensible attitude toward its development. The American military and Russia’s Ministry of Defense did not respond to requests for comment.

The dynamics may resemble the Cold War, but experts cautioned that the A.I. era was different. Start-ups and investors now play a role in military development that is as critical as that of universities and governments. A.I. technology is becoming widely available, opening the door for countries from Turkey to Pakistan to develop new capabilities. What’s emerging is a grinding innovation race without any obvious endpoint.

Ethical questions about ceding life-or-death choices to machines are being overtaken by the rush to build. The only major accord on A.I. weaponry between China and the Americans was reached in 2024, a nonbinding pledge to maintain human control over the decision to use nuclear weapons. Other countries, like Russia, have made no commitments.

Some argued that A.I.’s impact would be bigger than any arms race.

“A.I. is a general-purpose technology like electricity, and we don’t talk about an electricity arms race. To the extent A.I. is transforming our military, it’s the way that electricity or computers or the airplane did.”

In 2016, at an air show in the southern Chinese city of Zhuhai, a Chinese supplier flew 67 drones in unison. An animated film separately showed the drones destroying a missile launcher, a demonstration of their intended capabilities.

Russia, too, was building its drone arsenal. In 2014, its military planners set a goal of making 30 percent of its combat power autonomous by 2025. By 2018, the Russian military was testing an unmanned armored vehicle in Syria. While the vehicle failed, losing its signal and missing targets, it underscored Russia’s ambitions.

In America, a general officer who had previously worked in intelligence at the Defense Department was evaluating whether A.I. could solve a more immediate problem. The American military was collecting so much data, including drone footage, satellite imagery and intercepted signals, that no one could make sense of it all.

Nothing in any of the military’s research labs was capable of generating results in less than a couple of years, and there were problems that could not be solved without A.I.

In 2017, Project Maven was launched, a Defense Department effort for the military to incorporate A.I. into its systems. One aim was to work with Silicon Valley to build software to swiftly process images like drone footage for intelligence purposes. Google was enlisted to help. But the project quickly ran into hurdles. The Pentagon’s procurement system, built around legacy contractors and long timelines, slowed things down.

When word spread inside Google about Project Maven, employees protested, saying a company that had once sworn by “Don’t Be Evil” should not help identify targets for drone strikes. Google eventually backed away from the project.

In 2019, Palantir, a data analytics company, took over Maven. New defense tech start-ups like Anduril also emerged, supplying the United States government with A.I.-backed sensor towers along the southern American border.

In China, officials pushed commercial tech companies toward defense partnerships in a strategy called civil-military fusion. Private firms were drawn into military procurement, joint research and other work with defense institutions. Companies working on drones and unmanned boats found growing military demand for their technologies.

Russia’s invasion of Ukraine in 2022 turned theory into reality. Outgunned, outspent and outnumbered, Ukraine held off Russia with an improvised arsenal of cheap technology. Hobbyist racing drones were used to attack Russian positions on the front lines, eventually becoming more lethal than artillery and, in some cases, gaining autonomous capabilities. Remote-controlled boats kept Russia’s Black Sea fleet pinned down.

Russia adapted as well. Its Lancet drone, which was initially piloted by humans, has incorporated autonomous targeting features. Four years of ruthless fighting in Ukraine have served as a laboratory for the world. In recent months, Ukraine began sharing its troves of battlefield data with Palantir and other firms so A.I. systems can better learn to fight wars.

Across Europe, where governments are aiming to diminish their reliance on the American military, the lessons from Ukraine reverberated. In February, Germany, France, Italy, Britain and Poland said they would develop a joint air defense system to guard against drones.

China also advanced. At the 2024 Zhuhai Airshow, Norinco, one of the country’s main defense manufacturers, revealed multiple weapons with A.I. capabilities. One display showed an entire brigade, including armored vehicles and drones, controlled and operated by A.I.

Another aircraft, revealed by the state-run Aviation Industry Corporation of China, was a 16-ton jet-powered drone designed to serve as a flying aircraft carrier that could deploy dozens of smaller drones mid-flight.

A week after American and Israeli forces struck Iran in February, at a conference livestreamed by Palantir, a senior Pentagon official gave a glimpse into what computerized warfare now looks like.

A satellite feed showed a warehouse. With the click of a mouse, an officer selected a row of white trucks parked outside to target in real time. In seconds, the A.I. software suggested a weapon, calculated fuel and ammunition needs, weighed the cost and generated a strike plan.

It was the present-day version of Project Maven, which was now run by Palantir and powered by commercial A.I. The system analyzed intelligence from various sources, generated target lists ranked by priority and recommended weapons, all but eliminating the lag between identifying a target and destroying it.

Embedded with a military version of Claude, the chatbot made by the A.I. firm Anthropic, Maven helped generate thousands of targets in the opening weeks of the Iran campaign. An American Defense Department official declared that what Maven was doing was revolutionary. Human involvement amounted to little more than left click, right click, and left click.

Skeptics said the claims about Maven’s abilities might be overstated, and that much of the American advantage came from the scale of data flowing in and the skills of the people using it. The underlying technology was not rocket science, and China was believed to already have something similar.

A recent report analyzing thousands of Chinese Army procurement documents found that China was building systems that mirrored American ones. In one case, China was trying to replicate the Joint Fires Network, an American program set up to link sensors and weapons globally so a drone on one side of the world could cue a strike from the other.

In some areas, China clearly leads. Its manufacturing dominance means it can produce autonomous weapons at a scale the Pentagon cannot match.

Inside the American government, the push for A.I. weapons has taken on an almost evangelical fervor. Last month, the Pentagon labeled Anthropic a security risk, partly because the company wanted to limit its technology’s use for automated weapons.

America will win the A.I. race, declared tech executives, investors and government officials who cheered speakers calling for tech companies to give the military unfettered access to A.I.

Some officials declared that an A.I. arms buildup might prevent major wars. The logic mirrored the Cold War. If both sides knew what the machines could do, neither would risk finding out.

Conflicts between superpowers will similarly de-escalate, these officials argued, if the things that deter warfare can be built effectively enough.

Yet deterrence assumes rationality, while A.I. weapons are designed to move faster than human reason. In exercises dating to 2020, researchers explored how autonomous systems could accelerate escalation and erode human control, with some alarming results.

In one scenario, a system operated by the Americans and Japan responded to a missile launch from North Korea by autonomously launching an unexpected counterattack. The speed of autonomous systems can lead to unintended escalation.

There is a risk of an escalatory spiral in which America, if it is not careful, fields untested, unsafe and unproven systems, often because each side feels the other is hiding something.
