The popularity of ChatGPT has certainly fueled plenty of hacks recently, with hundreds of thousands of compromised accounts for sale on the dark web. At the same time, many copycats have hit the market, and some of them are designed to cause nothing but harm. That’s where FraudGPT comes in.
Hackers with an interest in AI have named the tool FraudGPT because of its capabilities. It’s currently sold as a subscription and is available on various dark-web markets, as well as on Telegram.
The subscription reportedly starts at $200 per month, while a yearly subscription goes for up to $1,700.
The purpose of the tool is to give hackers a helping hand in unscrupulous operations, all powered by artificial intelligence. So far, FraudGPT has reportedly been sold in a few thousand copies.
But this isn’t the only hacking-oriented copycat of ChatGPT. A similar tool, named WormGPT, was released on July 13. These tools all work on the same principles: they rely on large training datasets and generate human-like responses to whatever the user requests.
These are among the first such tools built by hackers and marketed on the dark web. Until recently, software of this kind was only theoretical. Today it is more active than ever, and its popularity threatens to skyrocket.
FraudGPT is described as a bot with no limitations or boundaries whatsoever. It has already been introduced to various dark-web markets, including AlphaBay, Empire, WHM and Torrez, among others.
Chatbots, a helping hand for cybercriminals
FraudGPT is a capable tool, despite looking like nothing more than a chatbot. It has so far been used mainly in phishing operations, but it can also generate messages for blackmail or for business email compromise (BEC) attacks. And this is only the beginning.
The capabilities of FraudGPT go further than that and may include writing harmful code, creating malware that modern security software can’t detect, finding card BINs, developing phishing websites and even building hacking tools.
Some of these chatbots can also help hackers find hacking groups and markets with similar interests.
Despite all these potential threats, FraudGPT doesn’t seem to be used much for developing phishing websites, as its output isn’t very convincing. Writing promotional material appears to be one of its primary uses at the moment.
No more ethical limitations
Some would say that ChatGPT itself can be exploited for the wrong purposes. It can be coaxed into writing social-engineering emails for phishing, but at the end of the day, its misuse is limited by ethical safeguards.
FraudGPT and WormGPT prove that it doesn’t take much to change all this. The same underlying technology can be implemented in other tools, but with no limitations at all.
As a matter of fact, these two tools underline that ethical limitations are ultimately a design choice. Building the same kind of bot with no rules is fairly simple. OpenAI keeps battling such abuse, but the current popularity of FraudGPT and WormGPT shows how easily its safeguards can be sidestepped.
For experts and security companies, this will always be a struggle. New rules are made to be broken. Once they’re broken, a new set of rules will take over and they’ll also be broken at some point. It’s an ongoing cycle.
For instance, you can’t ask ChatGPT to write a phishing email outright. But you can ask it to write an email about an invoice update, which can then be used for phishing.
Finding protection against AI based threats
From many points of view, generative AI offers the same opportunities to security professionals and hackers alike. Everything comes down to scale and speed: from a hacking perspective, AI can generate a phishing email in no time, and produce many of them at once.
Phishing is currently the easiest way to gain entry into a secured system, precisely because it targets human interaction rather than technology.
Detecting phishing emails can be a challenge, but it is far from impossible. Many modern security systems can flag them as spam, too. Building a defense strategy can help any organization identify potential attacks before anything is compromised.
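As a rough illustration of how such flagging works at its simplest level, here is a minimal rule-based sketch in Python. The phrase list, the IP-address heuristic and the threshold are arbitrary assumptions chosen for demonstration; real security products rely on far richer signals such as sender reputation, URL sandboxing and trained classifiers.

```python
import re

# Hypothetical indicator phrases; real filters use much larger, curated lists.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "invoice update",
    "urgent action required",
    "password expired",
]

URL_PATTERN = re.compile(r"https?://\S+")
# Links pointing at a raw IP address instead of a domain are a classic tell.
IP_URL_PATTERN = re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}")

def phishing_score(subject: str, body: str) -> int:
    """Return a crude risk score: points for each suspicious signal found."""
    text = f"{subject} {body}".lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    for url in URL_PATTERN.findall(text):
        if IP_URL_PATTERN.match(url):
            score += 2
    return score

def is_suspicious(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag the message when its score reaches an (assumed) threshold."""
    return phishing_score(subject, body) >= threshold
```

For example, an email with the subject "Invoice update" urging the reader to verify an account at a raw-IP link would score well above the threshold, while an ordinary message about a team lunch would not be flagged.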
On a different note, security experts are also working hard to build AI-based defensive tools. Their role is to counter malicious AI, though this is likely to remain an ongoing battle in the long run.
Bottom line, FraudGPT can and will cause some damage over time. While the profile of a typical victim won’t change much, speed and scalability give hackers access to more resources and better results in a shorter period of time.
At this point, that’s the main threat associated with FraudGPT and similar software, yet further developments may take these threats even further.