ChatGPT can also be used to develop a darknet market for selling stolen accounts.
Researchers at cybersecurity firm Check Point Research reported that cybercriminals, some with little or no programming experience, were using ChatGPT to create malware and phishing emails that could be used for spying, ransomware attacks, spam, and other malicious campaigns.
One member of a cybercriminal forum said that the generated Python code combines several cryptographic functions: a key for code signing, encryption of system files using the Blowfish and Twofish algorithms, and the BLAKE2 hash function for comparing files.
The resulting script can be used to:
- decrypt a single file and append a message authentication code (MAC) to the end of it;
- encrypt files under a hard-coded path and decrypt a received list of files.
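The cipher layers (Blowfish, Twofish) require third-party libraries, but the hashing and MAC steps the forum post describes can be sketched with Python's standard library alone. This is a minimal illustration, not the forum member's actual code; the key and file paths are assumptions:

```python
import hashlib
import hmac


def blake2_digest(path: str) -> bytes:
    """Hash a file with BLAKE2b, as the post describes for comparing files."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()


def append_mac(path: str, key: bytes) -> None:
    """Append an HMAC (keyed with BLAKE2b) to the end of a file."""
    with open(path, "rb") as f:
        data = f.read()
    mac = hmac.new(key, data, hashlib.blake2b).digest()
    with open(path, "ab") as f:
        f.write(mac)
```

Comparing two files then reduces to comparing their BLAKE2 digests, which is presumably what the generated script did.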
The researchers say this script could easily be turned into ransomware that encrypts files on a target machine without any further involvement from its operator.
In another case, ChatGPT produced two pieces of code:
- a Python infostealer that searches for PDF files, copies them to a temporary directory, compresses them, and sends them to the attacker's server;
- a Java snippet that covertly downloads the PuTTY client and launches it using PowerShell.
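The collection stage of such an infostealer takes only a few lines, which is the report's point about the low barrier to entry. The sketch below is illustrative, assumes only the standard library, and deliberately stops at producing the archive rather than sending it anywhere:

```python
import os
import shutil
import tempfile


def collect_pdfs(root: str) -> str:
    """Copy every PDF under `root` into a temp directory and zip it.

    Returns the path of the resulting archive. The real infostealer
    would then upload this file to the attacker's server; that step
    is intentionally omitted here.
    """
    staging = tempfile.mkdtemp(prefix="staging_")
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".pdf"):
                shutil.copy2(os.path.join(dirpath, name), staging)
    # make_archive appends ".zip" to the base name itself
    return shutil.make_archive(staging, "zip", staging)
```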
In addition, a script was developed with ChatGPT's help to create a darknet marketplace where compromised credentials, payment card data, malware, and other illegal goods could be bought. The code uses a third-party API to fetch current prices for the Monero, Bitcoin, and Ethereum cryptocurrencies, which helped the user set prices for purchases.
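The report does not say which third-party API the script called; the sketch below uses CoinGecko's public `simple/price` endpoint as one plausible stand-in, with the response parsing separated out so it can be shown without a live request:

```python
import json
import urllib.request

# CoinGecko's public endpoint, used here as an illustrative assumption;
# the actual script may have called a different price API.
PRICE_URL = (
    "https://api.coingecko.com/api/v3/simple/price"
    "?ids=monero,bitcoin,ethereum&vs_currencies=usd"
)


def parse_prices(payload: str) -> dict:
    """Turn the API's JSON response into a {coin: usd_price} mapping."""
    data = json.loads(payload)
    return {coin: info["usd"] for coin, info in data.items()}


def fetch_prices(url: str = PRICE_URL) -> dict:
    """Fetch and parse current prices (requires network access)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_prices(resp.read().decode())
```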
Check Point researchers themselves used ChatGPT to develop a malicious macro that can be hidden in an Excel file attached to an email. They then used the more code-focused Codex AI system to develop a reverse shell, a port-scanning script, sandbox-detection logic, and to compile their Python code into a Windows executable.
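Of the tools listed, a port scanner is the most benign to illustrate. A minimal sketch of the kind of TCP connect scan such a script performs, using only the standard library (host and port list are the caller's choice):

```python
import socket


def scan_ports(host: str, ports) -> list:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A reverse shell follows the same socket primitives in the other direction, which is why Codex could produce both from similar prompts.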
Finally, the analysts created a phishing email with an attached Excel document whose malicious macro downloads a reverse shell onto the target machine. The AI did the hard work; the user only has to carry out the attack.
While ChatGPT's terms of service prohibit its use for illegal or malicious purposes, the researchers easily tweaked their queries to get around these restrictions. It is worth noting that ChatGPT can also be used by security professionals, for example to write code that looks for malicious URLs inside files or queries VirusTotal for the number of detections of a specific cryptographic hash.