Netenrich researchers discovered a dangerous AI tool known as “FraudGPT” being marketed on the dark web. The program is designed to generate malicious content, including malicious code, phishing pages, and scam emails.
FraudGPT was introduced on July 23rd, priced at $200 per month or $1,700 per year. The concern is that, in the absence of ethical safeguards, bad actors can reproduce this kind of technology, giving criminals free rein to exploit AI.
Although Google initially created transformers for internal purposes, OpenAI’s success with ChatGPT has piqued the interest of many, including malevolent actors.
The creator of FraudGPT recently began advertising it on hacking forums, claiming it will revolutionize online fraud. With this tool, hackers can easily produce convincing content to deceive victims and engage in a variety of other nefarious activities.
Unlike ethical AI systems, FraudGPT can develop harmful code and facilitate the trafficking of stolen data. It can also scan for vulnerable websites to identify potential infiltration targets. It operates as a centrally run, subscription-based business, raising questions about the scale of its impact.
At $200 per month, FraudGPT also costs considerably more than WormGPT, which sells for $60 per month. With over 3,000 sales already reported, the creator poses a considerable threat.
Ultimately, the emergence of harmful AI tools like FraudGPT underscores the urgent need for more effective safeguards against future AI exploitation.