ChatGPT, an AI-powered chatbot developed by OpenAI, continues to impress users with its capabilities. The platform can engage in conversation, solve arithmetic problems, write long essays and brand campaigns, and even review and write computer code. Some hackers, however, have used ChatGPT to write malicious code and create malware. Regardless, the chatbot’s versatility and accuracy (while not always perfect) make it a popular choice among users.
According to security firm Check Point Research (CPR), activity in several underground communities indicates that hackers are using OpenAI’s tool to develop malicious applications. In a blog post, the researchers note that the current iteration of these malicious tools is basic, but warn that “it is only a matter of time before more sophisticated threat actors improve the way they use AI-based tools for bad.”
The research firm also spotted a thread named “ChatGPT – Benefits of Malware” on a popular underground hacking forum, where the publisher disclosed his experiments with ChatGPT. The publisher used the platform to create a Python-based information stealer that “searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server.”
In another case, a hacker used ChatGPT to create simple Java-based malware. “Of course, this (Java) script can be modified to download and run any programme, including common malware families,” the post says.
Similarly, the research firm spotted instances where hackers used ChatGPT to create a malicious encryption tool and a dark web marketplace to facilitate “fraud activity”.
The research firm warns that it is too early to tell whether ChatGPT’s capabilities will become a new favourite tool for dark web participants. However, the platform is gaining traction, and it could help both amateur and professional hackers create campaigns and text that appear on shady websites.
In India, for example, bad actors have used WhatsApp to steal money from users on numerous occasions. In many cases, however, those malicious campaigns used grammatically incorrect English, which ChatGPT can now easily correct. Similarly, a hacker can use OpenAI’s Dall-E platform to create images that do not violate copyrights. Because these tools generate creatives for free, hackers may increasingly craft legitimate-looking campaigns with phishing links to steal users’ personal information and even money.
ChatGPT continues to receive upgrades, and the developer may yet address the problem of the platform being used to write malicious code. OpenAI is already working on an invisible watermark to distinguish AI-generated text, which may also help with checking plagiarism.
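OpenAI has not published how its watermark works, but a common academic approach to marking AI text is statistical rather than visual: the generator quietly biases its word choices toward a pseudo-random “green” subset of the vocabulary derived from the preceding word, and a detector, knowing how that subset is derived, checks whether suspiciously many words landed in it. A minimal sketch of that idea, using a toy 32-word vocabulary and a hash-based split (everything here is illustrative, not OpenAI’s actual scheme):

```python
import hashlib
import math
import random

def green_list(prev_token, vocab, fraction=0.5):
    """Derive a pseudo-random 'green' subset of the vocabulary from a
    hash of the previous token. Generator and detector can both
    reconstruct the same split from the text alone."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def watermark_score(tokens, vocab, fraction=0.5):
    """Count tokens that fall in their position's green list and return
    a z-score; large positive values suggest watermarked text."""
    n = len(tokens) - 1
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab, fraction))
    return (hits - fraction * n) / math.sqrt(fraction * (1 - fraction) * n)

vocab = [f"tok{i:02d}" for i in range(32)]

# A real watermarking generator would only softly boost green tokens;
# this toy always picks one, so every transition scores a hit.
marked = ["tok00"]
for _ in range(19):
    marked.append(min(green_list(marked[-1], vocab)))

# Unwatermarked "text": uniformly random tokens, hits hover near chance.
rng = random.Random(0)
plain = [rng.choice(vocab) for _ in range(20)]

z_marked = watermark_score(marked, vocab)
z_plain = watermark_score(plain, vocab)
```

On ordinary text the z-score stays near zero, while on watermarked text it grows roughly with the square root of the text length, which is what makes the mark detectable by software yet invisible to readers. Because only the statistics of word choice are nudged, such a watermark survives copy-paste but can be weakened by heavy paraphrasing.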