Information Warfare: Secure Combat AI

December 28, 2025: The military is constantly exposed to new technologies and must accurately determine how a new tech can be used for military tasks. Some new technologies, like smokeless gunpowder, were quickly adopted because of their obvious value and battlefield advantages. Other technologies require more work, especially radically new ones. One such example is generative artificial intelligence, or GAI. ChatGPT and Grok are two current and widely used GAI systems. When supplied with sufficiently complete information, GAI systems can write reports, monitor manufacturing operations and provide a short list of the best future options a company or organization should pursue.

What makes GAI popular is that it can be verified for accuracy and completeness. There are statistically valid and proven techniques for selecting samples of output data to manually check and verify whether the GAI is operating reliably. Organizations that fail to conduct these quality control procedures eventually run into problems with unreliable GAI data and are reminded of how important quality control is. GIGO, or Garbage In/Garbage Out, is an old axiom that still applies in the age of GAI.
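
The article does not name a specific technique, but the kind of sampling-based quality control it describes can be sketched in a few lines of Python. The sketch below is illustrative only: the function names (sample_for_review, error_rate_interval), the record log and the 200-output sample size are hypothetical, and the Wilson score interval is just one standard way to put bounds on an error rate estimated from a manually reviewed sample.

    import math
    import random

    def sample_for_review(records, sample_size, seed=None):
        """Draw a simple random sample of GAI outputs for manual review."""
        rng = random.Random(seed)
        return rng.sample(records, min(sample_size, len(records)))

    def error_rate_interval(errors_found, sample_size, z=1.96):
        """Wilson score interval for the true error rate (about 95% confidence by default)."""
        if sample_size == 0:
            return (0.0, 1.0)
        p = errors_found / sample_size
        denom = 1 + z**2 / sample_size
        center = (p + z**2 / (2 * sample_size)) / denom
        margin = (z * math.sqrt(p * (1 - p) / sample_size + z**2 / (4 * sample_size**2))) / denom
        return (max(0.0, center - margin), min(1.0, center + margin))

    if __name__ == "__main__":
        # Hypothetical log of GAI-generated reports awaiting spot checks.
        outputs = [f"report_{i}" for i in range(5000)]
        batch = sample_for_review(outputs, sample_size=200, seed=42)
        # Suppose reviewers flag 7 of the 200 sampled reports as unreliable.
        low, high = error_rate_interval(errors_found=7, sample_size=200)
        print(f"Reviewed {len(batch)} outputs; estimated error rate between {low:.1%} and {high:.1%}")

Run against a real output log, a routine like this tells an organization whether its GAI error rate is drifting above whatever threshold it can tolerate, which is exactly the quality control discipline the GIGO axiom warns about.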

This is where we face the problem of monitoring GAI when it is used in combat operations. Military planning and operations already involve a lot of double checking and quality control, but if you take your GAI into combat, you want the highest level of reliability you can get. No existing or previous military weapon, equipment or support activity was ever perfect, but there was always a limit to how much bad data could be tolerated. Russia, which has openly pursued computer assisted command and control systems for decades, tolerates a lot more errors than NATO nations, or most other industrialized nations.

Ukraine was the first war in which Russia had an opportunity to use GAI, and Russia used it mainly to create and fine-tune its information warfare, also known as propaganda. Russia also used GAI for disinformation campaigns, a subset of information warfare that the Russians have always been active in and often very good at.

Twitter/X, the popular messaging service, began in 2006 and soon became a favorite tool for Russian dezinformatsiya/disinformation operations, in part because it was easy to conceal Russian involvement there. Messages were limited to 140 characters, meaning Russian dezinformatsiya operatives could be convincing even if their written English was not fluent. This aspect of Twitter and its relationship with Russian dezinformatsiya operations received little attention in the West until 2016 and, even then, it was often inaccurately described, usually because of local politics and the use of disinformation. Meanwhile Twitter became a media powerhouse. Six years after Twitter began, it had over 100 million users posting over 340 million tweets a day. By that point, Russian disinformation operations were increasingly using Twitter as their primary international messaging platform. It was cheap, anonymous, and Russians with a basic knowledge of English could use it convincingly.

How this Russian dependence on dezinformatsiya came about went something like this. During the Cold War the communist rulers of the Soviet Union invented or expanded on all sorts of propaganda, deception and indoctrination techniques that are still widely copied, and often condemned, because they work, at least sometimes. In the end, all that dezinformatsiya did not prevent the Soviet empire from collapsing and disintegrating. Some of those techniques have been updated and continue to serve the current rulers of Russia. One of them involves the Internet and is believed, at least in Russia, to be particularly useful both at home and worldwide.

How Russian dezinformatsiya worked in the United States became easier to understand in October 2018, when Twitter released a 350 GB file containing over 10 million tweets from 3,800 accounts belonging to Russian organizations that engaged in media manipulation. There were also one million tweets by Iranian trolls seeking to influence public opinion. These tweets date back to 2013, but Russia has been using information war techniques like this for over a decade, and Iran followed the Russian example.

It may be a while before we find out how well Russian use of GAI worked for their information warfare operations. For all we know, Russia may also be using GAI for combat operations. The Russians have not done well since they invaded Ukraine in 2022, but they have had some successes, especially in the last year. Was any of that due to GAI? Russia is more willing than NATO to lose troops while testing new tactics or technology.

NATO nations, especially the United States, will probably test GAI in simulated combat first, followed by use with some special operations troops.
