OpenAI’s chat tool is adept at crafting phishing and malware messages

On January 6, 2023, Check Point Research issued a warning about the malicious use of ChatGPT. OpenAI’s conversational agent, or chatbot, responds to a prompt entered by a user and automatically writes text (or even a computer program) for the stated purpose.

Check Point Research, a cybersecurity company, has identified a forum where cybercriminals discuss how they intend to use ChatGPT. Most of them are beginners or inexperienced developers, and they are turning to OpenAI’s tool to compensate for their weak coding skills.

One forum member used ChatGPT to help develop a basic information stealer. Another designed an encryption and decryption tool as a prelude to ransomware. Others use ChatGPT to create online storefronts for selling illegal goods on the dark web.

According to Check Point, conversational agents such as ChatGPT may make it easier for script kiddies to enter the world of cybercrime. In the long run, this is likely to increase the number of active cybercriminals across the globe.

Forum members are using ChatGPT to write fake content (e-books, training courses, etc.), which is then sold on the internet.

It is for phishing and scams, however, that ChatGPT is most effective. It can generate messages with flawless grammar and spelling, written in the tone and style of an official letter, and it is almost certainly already being used for this purpose by cybercriminal groups.
