February 6, 2025:
No new technology reaches wide acceptance and use until it establishes its usefulness and trustworthiness with the user population. Such was the case with the telegraph in the late 1800s, broadcast radio in the early 1920s, and television a decade later. The development of more effective telegraph systems coincided with work to develop commercial radio and television services. In the 1970s the personal computer appeared. The idea seemed absurd at first, but as tinkerers and hobbyists produced the first PCs that actually worked, a new industry was born. By the late 1970s Apple, Radio Shack and a few other firms were selling PCs to what turned out to be a very enthusiastic and rather large audience. Decades of American government and military work on the internet became commercially available in 1995, and that turned the maturing PC into a must-have product.
In the 21st century AI, or Artificial Intelligence, became a usable product, and as it reached more users, there were more useful and marketable ideas about what AI could be used for. Some of the new uses were illegal, dividing the programmer and user community between the good, or White Hats, and the bad, or Black Hats. Hacking soon became a military and intelligence asset. Many Black Hat programmers turned out to be a national asset once they were hired to protect American commercial and government networks from foreign Black Hats. Programmers who performed both Black and White Hat tasks were sometimes called Grey Hats. The color palette grew as programmers came up with new tools and new ways to use them. This was especially true with AI software, which was produced by several firms as well as by individuals or small groups that modified commercial AI software and offered it on a black market. Those malicious offerings evolved into marketable products and quickly moved from the black market to legitimate, but sometimes restricted, markets because of their potential use against military and intelligence networks.
AI products like ChatGPT and related tools made it easy to create and modify this new malware, as malicious hacker software came to be known. ChatGPT also became a major source of antidotes for the malware. The fact that the lights are still on and bank accounts have not been plundered is an indication that the White Hats have the upper hand. Some less visible damage does escape public notice. There have been several hacks that stole billions of dollars from banks or individual firms. Most of these hacks were carried out by countries at odds with the United States, like North Korea and Iran. These two nations survive under several rounds of increasingly crippling economic sanctions by using their Black Hats to keep their governments well-funded. Their Black Hat hackers are recognized as a national asset and are paid well for their work. In North Korea, where few citizens can travel abroad, successful Black Hats live in relative luxury and can travel outside the country whenever they wish.
Sometimes the North Korean Black Hats need to examine what Western hackers are up to. There are software trade shows with special sections for malware and for malware antidotes. The malware itself is traded under the table, since no one can legally sell it openly. Malware can be transported on a thumb drive or on an even smaller chip, like the SIM cards used in cell phones. These thumb drives and SIM cards are easily concealed and transferred to new owners. Payment can be made quickly to and from bank accounts using smartphone, tablet, or laptop apps. Trade shows are the scene of many such transactions, though the transfers can be done anywhere by prearrangement. Trade shows are preferred because there are more people to do business with and many unexpected opportunities.
There are some new developments that are best discovered at trade shows. Hackers working in China, Russia, Iran, North Korea, and several other nations have been using OpenAI systems. Microsoft and OpenAI believe that these nations first used AI to help with routine tasks, then quickly escalated to using OpenAI's systems for cyberattacks. Some hackers with ties to foreign governments are using generative artificial intelligence in their attacks. Instead of using AI to generate exotic attacks, as some in the tech industry feared, the hackers have used it in mundane ways, like drafting emails, translating documents, and debugging computer code, the companies said. The countries using AI do so mainly to be more productive.
Microsoft has committed some $13 billion to OpenAI and is a close partner. For example, the two companies share threat information, and together they documented how five hacking groups with ties to China, Russia, North Korea, and Iran used OpenAI's technology. The companies did not say which OpenAI technology was used. The start-up said it had shut down the groups' access after learning of the misuse of its AI technology.
Since OpenAI released ChatGPT in 2022, there have been concerns that hackers might weaponize these more powerful tools to discover new and creative ways to exploit vulnerabilities. Like anything else, AI could be used for illegal and disruptive tasks.
OpenAI requires customers to sign up for accounts, but some new users evade detection through various techniques, like masking their location. This enables them to develop illegal or harmful uses for AI technology. For example, a hacking group connected to the Iranian Islamic Revolutionary Guards Corps (IRGC) used AI to research ways to avoid antivirus scanners and to generate phishing emails. One phishing email pretended to come from an international development agency; another attempted to lure prominent feminists to an attacker-built website on feminism. In another case, a Russian-affiliated group tried to influence the war in Ukraine by using OpenAI's systems to conduct research on satellite communication protocols and radar imaging technology. Russia has long maintained a large propaganda organization to attack and weaken its enemies. AI is now another tool for the Russians to use.
Microsoft tracks over 300 hacking groups, including independent cybercriminals as well as AI-assisted operations carried out by several nations. OpenAI's proprietary systems make it easier to track and disrupt their misuse, the executives said. They added that, while there were ways to identify hackers using open-source AI technology, the proliferation of open systems made the task harder.
When AI technology is open source, there is no way to know who is using it or whether they have policies for the responsible use of AI. Microsoft did not uncover any use of generative AI in a recent Russian hack of top Microsoft executives.
In combat situations AI has been used increasingly over the last decade. As AI improves, it is used more effectively and more frequently in combat. For example, a Ukrainian firm developed an AI system that can, with great accuracy, determine whether a group of soldiers in the distance are Ukrainian or Russian. This reduces the instances of friendly fire, which is when you accidentally fire on your own troops and cause friendly casualties. This is an unfortunate aspect of modern war that no one wants to talk about but that continues to occur on a regular basis. Targeting with AI makes friendly fire incidents less likely to occur.
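The way such a system reduces friendly fire can be sketched in a few lines of code. Whatever recognition model produces the friend-or-hostile confidence scores, the decision layer should refuse to clear a target unless confidence is high, defaulting to hold fire when the model is unsure. The function below is a hypothetical illustration; the score format, threshold, and labels are assumptions for the sketch, not details of the actual Ukrainian system:

```python
# Hypothetical decision layer for an AI targeting aid.
# A recognition model (not shown) would emit per-class confidence
# scores; this layer turns them into an engage/hold decision that
# defaults to "hold fire" whenever the reading is ambiguous, which
# is how such a system cuts down friendly-fire incidents.

def clear_to_engage(scores, threshold=0.90):
    """scores: dict mapping 'friendly' and 'hostile' to model
    confidence values in [0, 1]. Returns 'hold fire' unless the
    hostile score clears the threshold and exceeds the friendly
    score."""
    friendly = scores.get("friendly", 0.0)
    hostile = scores.get("hostile", 0.0)
    if hostile >= threshold and hostile > friendly:
        return "engage"
    # Ambiguous or friendly-leaning readings always default to hold.
    return "hold fire"

print(clear_to_engage({"friendly": 0.05, "hostile": 0.95}))  # engage
print(clear_to_engage({"friendly": 0.40, "hostile": 0.60}))  # hold fire
```

The design choice worth noting is the asymmetry: a missed engagement costs a second look, while a false "engage" against friendly troops is irreversible, so every uncertain case falls through to holding fire.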
A final note. This article was written, corrected and edited with the help of AI software. This is a great benefit to writers who now have an easy way to detect and correct problems with style, format, grammar and spelling.