OpenAI disrupts over 20 malicious operations exploiting ChatGPT
OpenAI has revealed that it has disrupted more than 20 cybercriminal operations that exploited its ChatGPT conversational agent. Hacker groups, including some from China and Iran, used the AI to develop malware, conduct phishing attacks, and spread disinformation on social media. The group “SweetSpecter,” linked to cyber espionage, used ChatGPT for reconnaissance and phishing attacks, while “CyberAv3ngers,” affiliated with Iran’s Revolutionary Guard, created malicious scripts to steal data on macOS.
Another group, “Storm-0817,” used ChatGPT to enhance Android malware designed to steal sensitive information. Influence operations manipulating opinion on social media around elections in Europe and the United States were also detected. All accounts involved have been banned, and indicators of compromise were shared with the relevant authorities. This follows similar actions OpenAI took in August to combat the malicious use of its AI tools.