Criminals Are Flocking to a Malicious Generative AI Tool
A 12-Month Subscription to FraudGPT Costs $1,700

Cybercriminals are using an evil twin of OpenAI's generative artificial intelligence tool ChatGPT. It's called FraudGPT, it's available on criminal forums, and it can be used to write malicious code and create convincing phishing emails.
Researchers at cloud security firm Netenrich said they spotted an AI bot marketing itself as FraudGPT on Telegram earlier this month. The bot offers features such as writing malicious code and creating phishing pages and emails.
FraudGPT's author offers access for a subscription fee starting at $200 per month, or $1,700 per year, roughly a 29% discount on the $2,400 that 12 months would cost at the monthly rate. It has about 3,000 paying users.
A similar tool called WormGPT is also available, researchers from SlashNext found this month. Its advertised features include unlimited character support, chat memory retention and exceptional grammar and code-formatting capabilities (see: WormGPT: How GPT's Evil Twin Could Be Used in BEC Attacks).
John Bambenek, principal threat hunter at Netenrich, told Information Security Media Group it appears that the same actor is behind both malicious AI tools.
The tools are attractive to criminals even though well-documented jailbreaks already exist to circumvent OpenAI's restrictions on using its natural language models for malicious purposes.
Kayne McGladrey, a cybersecurity thought leader, told ISMG that while there are jailbreaks to work around limitations in commercially available AI systems, they're inconvenient for threat actors to run at scale.
"Jailbreaks introduce friction into software developer workflows, forcing users to periodically adapt their prompts based on changes introduced by the AI toolmaker. One of the potential benefits of using an AI intentionally developed for malicious activities is that jailbreaks are not necessary," McGladrey said.
Yale Fox, another cybersecurity thought leader, told ISMG it is unclear how the author of FraudGPT and WormGPT created a natural language model. They could be duplicating the GPT language model or constructing their own, he said, adding that either way they would need a large enough training set to build a usable model.
"It is not hard to build your own generative pretrained model consisting of, typically, less than a few hundred lines of code," Fox said.
A recent study conducted by cybersecurity firm SoSafe found that AI bots can already write better phishing emails than humans.
SoSafe's research, based on simulated phishing attacks conducted in March, found that phishing emails written with AI are not recognized at first glance and are opened by 78% of recipients. Of those, 21% click on potentially malicious content such as links or attachments, meaning roughly one in six of all recipients clicks.
Netenrich's Bambenek said that FraudGPT and related tools simply create synthetic communication quickly and at scale. Given the volume this technology enables, attackers can systematically test which phishing campaigns work best against which victims, which may allow them to become more precise in their targeting.
"This appears to be among the first inclinations that threat actors are building generative AI features into their tooling. Generative AI tools provide criminals the same core functions that they provide technology professionals - the ability to operate at greater speed and scale," Bambenek said.