While authorities do their best to tackle cybercrime, modern criminals keep finding ways around those efforts and taking their crimes to another level.
The growing popularity of artificial intelligence means cybercrime was always going to move into this field as well. As a direct result, the dark web is now flooded with malicious alternatives to popular artificial intelligence tools such as ChatGPT.
Platforms like FraudGPT and WormGPT are now available on the dark web and allow criminals to step up their scams using AI. Mainstream chatbots released by the likes of Microsoft and Google may seem extremely sophisticated, but they come with safeguards designed to prevent crime. They won't generate scam newsletters or malware, for example.
FraudGPT and WormGPT, on the other hand, have no such restrictions, which is why they're openly advertised to scammers all over the Internet. WormGPT, for example, imposes no character limit either, and it even offers code-formatting features.
Ideal for novice criminals
Such tools are gaining notoriety among novice criminals in particular. Many cybercriminal campaigns fail simply because of the attackers' poor English and obvious grammar mistakes, which are a red flag for anyone receiving emails or messages from supposedly legitimate businesses.
With WormGPT, these issues become history. Most cybercriminals would struggle to compose a professional message in English, but AI tools make it simple and straightforward.
There are other red flags to watch for in such emails, but poor language has long been the most significant one. WormGPT is remarkably effective at producing professional, persuasive emails, cunning and misleading enough to trick even the most cautious individuals.
From this point of view, novice criminals only need to invest in the bot itself, plus mass-mailing software. Given the bot's effectiveness, messages look authentic and professionally written, leading more and more people to unwittingly hand over their details.
FraudGPT adopts a different strategy
FraudGPT goes in a completely different direction. While it can also compose convincing messages, its name is self-explanatory: it can create sophisticated, hard-to-detect malware and identify leaks and vulnerabilities in victims' systems.
It can generate content, develop programs, and support all kinds of scams over the Internet. The program went undetected for a while despite being widely available for sale on the dark web. Security firms have only recently discovered it, and its effectiveness is alarming.
The program was advertised with a simple video in which the developer used the bot to generate a scam email. The result was remarkably convincing.
Both FraudGPT and WormGPT can be used for criminal purposes: generating content, stealing money from unsuspecting users, and harvesting private data and personal information. In other words, online scams now employ AI and are more advanced than ever.
Was this unexpected?
Whether it's modern technologies, innovative programs, or just new security software, the truth is that criminals still find ways to push crime even further.
While WormGPT and FraudGPT are advertised exclusively to criminals on the dark web, such unscrupulous individuals could also have used mainstream bots developed by Google or Microsoft.
You can't ask ChatGPT to write a scam email that looks like it comes from a bank. You can, however, ask it to write an email to your clients while pretending to be a bank. It all depends on how the request is framed.
Sure, there are other limitations, such as word or character counts, but at the end of the day, such emails don't need to be long. What they need is good English and a realistic tone.
Although FraudGPT and WormGPT look like they’re specifically made for criminals, scammers have also focused on classic bots, as they can be just as effective.
What bots like ChatGPT can't do is create phishing pages, for example, and they won't write malicious scripts either.
Although security companies are working hard to prevent such scams and problems in the future, the dark web is constantly evolving.
In recent months, hackers have also shared code samples to help others build their own GPT-style bots, tailored to their specific needs. Some hackers will even develop such bots to order, based on their clients' requirements.
Bottom line, there seem to be no limits to how far AI can go. And while tech giants try to restrict their tools to prevent criminal use, alternatives keep popping up, offering criminals a full package and opening the door to a new wave of more advanced crime and scams.