OpenAI has revealed that it has disrupted more than 20 cybercriminal operations that exploited its ChatGPT conversational agent. Hacker groups, including some from China and Iran, used the AI to develop malware, conduct phishing attacks, and spread disinformation on social media. The group “SweetSpecter,” linked to cyber espionage, used ChatGPT for reconnaissance and phishing attacks, while “CyberAv3ngers,” affiliated with Iran’s Revolutionary Guard, created malicious scripts to steal data on macOS.

Another group, “Storm-0817,” used ChatGPT to enhance Android malware capable of stealing sensitive information. OpenAI also detected influence operations that sought to manipulate opinion on social media around elections in Europe and the United States. All accounts involved have been banned, and indicators of compromise were shared with the relevant authorities. This follows similar action taken by OpenAI in August to combat the malicious use of its AI tools.
