It was none other than OpenAI, the company currently making ChatGPT available to the general public, that first raised the alarm about harmful generative AI several years ago. At the time, it warned that these models could be put to malicious use, and OpenAI was right. Even as it and other major tech companies commit to building “guardrails” that curb the AI’s worst tendencies, cybercriminals are ratcheting up the malice in GPT models. WormGPT first appeared to automate hacking and data theft, and now FraudGPT has followed.
“Generative AI” is the umbrella term for these bold new AI systems built on transformer models. Google invented transformers years ago but used them only for internal research and product development. Now that OpenAI has demonstrated what generative AI can do, everyone is racing to catch up, including internet trolls.
According to PCMag, the creator of FraudGPT recently began advertising the tool on hacking forums. The anonymous developer claims the program will permanently change how online criminals do business. As with benign AI chatbots, the hacker need only describe what they want, for example, text that will persuade customers of a particular bank to click a malicious link in a spam SMS.
While Google and other prominent names in AI have worked to ensure their models cannot produce harmful code, FraudGPT offers exactly that as a feature. The forum posts don’t include examples of FraudGPT’s output, but the claim isn’t far-fetched in light of what we’ve seen from legitimate generative AI systems. The chatbot’s operator also reportedly sells stolen data, which could be used to train the model. According to its developer, the bot can also scan websites to identify those most vulnerable to intrusion.
Since FraudGPT is a subscription service rather than a standalone program, its feature set may even grow over time. The creator presumably runs the service on a warehouse of GPUs somewhere. Access to the malware-generating bot costs $200 per month, far more than WormGPT’s $60 monthly fee, and the developer claims to have made more than 3,000 sales already. So you might want to be on the lookout for malicious texts that lack the telltale broken English that makes many scams easy to spot.