Chinese, Iranian and Russian hackers using OpenAI systems, according to report

Hackers working for the governments of Russia, China and Iran are using OpenAI's systems in their cyberattacks on the U.S. and other nations. According to an article published in today's New York Times, Microsoft said a hacking group connected to Iran's Islamic Revolutionary Guard Corps used the AI system to research ways to evade antivirus scanners and to create phishing emails. Here is the NYT article: https://www.nytimes.com/2024/02/14/technology/openai-microsoft-hackers.html

OpenAI also said a Russian hacking group used its systems to research satellite communication protocols and radar imaging technology.

Some of the nation-state actors include the following:

Forest Blizzard (aka APT28), a Russian nation-state group, is said to have used OpenAI's offerings to conduct open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Emerald Sleet (aka Kimsuky), a North Korean threat actor, has used LLMs to identify experts, think tanks, and organizations focused on defense issues in the Asia-Pacific region, understand publicly available flaws, help with basic scripting tasks, and draft content that could be used in phishing campaigns.

Crimson Sandstorm (aka Imperial Kitten), an Iranian threat actor, has used LLMs to create code snippets related to app and web development, generate phishing emails, and research common ways malware could evade detection.

Charcoal Typhoon (aka Aquatic Panda), a Chinese threat actor, has used LLMs to research various companies and vulnerabilities, generate scripts, create content likely for use in phishing campaigns, and identify techniques for post-compromise behavior.

Salmon Typhoon (aka Maverick Panda), a Chinese threat actor, has used LLMs to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, resolve coding errors, and find concealment tactics to evade detection.

Microsoft said it is taking measures together with OpenAI to “disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models.”

Microsoft added that it is deeply committed to using generative AI to disrupt threat actors and to leveraging the power of new tools to enhance cyber defense efforts everywhere.

Jeffrey Newman is a whistleblower lawyer whose firm represents whistleblowers in healthcare fraud cases under the False Claims Act (FCA), as well as whistleblowers under the SEC whistleblower program and the CFTC whistleblower program. He can be reached at Jeff@JeffNewmanLaw.com or at 617-823-3217.