
ChatGPT already has two evil twins

Two chatbots, named “FraudGPT” and “WormGPT”, can write fraudulent messages and simple malware.

Antifraud action - August 01, 2023

Investigations led by the companies SlashNext and Netenrich have identified two cybercriminal chatbots based on ChatGPT, but stripped of OpenAI’s safety constraints. Although they carry different names, “WormGPT” and “FraudGPT” could be one and the same. The bots appeared at the same time and were created by the same individual, known only as “CanadianKingpin”.

When prompted, the two conversational models can write computer hacking software. CanadianKingpin boasts of their ability to generate custom phishing and carding (illegal credit card use) messages, as well as to develop simple malware. They can also list vulnerabilities, identify black market websites, and draft hacking tutorials.

FraudGPT, which claims over 3,000 users, can be rented for 200 dollars (181 euros) a month or 1,700 dollars (1,540 euros) a year. How these tools were developed is difficult to determine, but the most likely explanation is malicious reuse of the large body of openly published generative AI research and models, including OpenAI’s.
