By Arab Times

Published 06/07/2024

Microsoft and OpenAI have disclosed that state-backed hackers are using large language models (LLMs) such as ChatGPT to refine their cyber-attack operations. The activity has been traced to groups backed by Russia, North Korea, Iran, and China, which are using the AI tools to research targets and develop more convincing social engineering techniques.

In collaboration with Microsoft Threat Intelligence, OpenAI disrupted five state-affiliated groups that sought to exploit its AI services for malicious purposes: the China-affiliated groups Charcoal Typhoon and Salmon Typhoon, the Iran-affiliated Crimson Sandstorm, the North Korea-affiliated Emerald Sleet, and the Russia-affiliated Forest Blizzard.

OpenAI terminated the accounts linked to these actors, who had been using the platform for tasks such as researching open-source information, translating content, finding coding errors, and performing basic coding tasks. Microsoft noted that cybercrime groups and nation-state actors are continually probing AI technologies to gauge their potential value and to identify the security controls they would need to bypass.

Despite attackers' growing interest in AI, Microsoft stressed that established cybersecurity practices remain the primary defense. Essential measures include multifactor authentication (MFA) and Zero Trust defenses, which protect against AI-enhanced attacks that typically rely on social engineering and target unsecured devices and accounts.
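As an illustration of the kind of baseline control Microsoft is referring to, the sketch below shows a minimal time-based one-time password (TOTP) check, the mechanism behind most authenticator-app MFA. It uses the third-party pyotp library, and the user name, issuer, and variable names are hypothetical; it is a simplified example, not guidance published by Microsoft or OpenAI.

```python
# Minimal TOTP-based MFA check (illustrative sketch only).
# Requires the third-party "pyotp" library: pip install pyotp
import pyotp

# In practice this secret is generated once per user at enrollment and stored securely.
user_secret = pyotp.random_base32()   # hypothetical per-user secret
totp = pyotp.TOTP(user_secret)

# At enrollment, this URI would be shown as a QR code for the user's authenticator app.
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

def verify_login(password_ok: bool, submitted_code: str) -> bool:
    """Grant access only if both the password and the current TOTP code are valid."""
    return password_ok and totp.verify(submitted_code)

# Example: a code read from the user's authenticator app at login time.
print("Access granted:", verify_login(True, totp.now()))
```

The point of the second factor is that a phished or guessed password alone, the kind of foothold social engineering aims for, is no longer enough to take over the account.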