OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are “safe and secure.”
To that end, it has partnered with the crowdsourced security platform Bugcrowd, allowing independent researchers to report vulnerabilities discovered in its products in exchange for rewards ranging from “$200 for low-severity findings to up to $20,000 for exceptional discoveries.”
It’s worth noting that the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs. The company noted that “addressing these issues often involves substantial research and a broader approach.”
Other prohibited categories are denial-of-service (DoS) attacks, brute-forcing OpenAI APIs, and demonstrations that aim to destroy data or gain unauthorized access to sensitive information.
“Please note that authorized testing does not exempt you from all of OpenAI’s terms of service,” the company cautioned. “Abusing the service may result in rate limiting, blocking, or banning.”
What’s in scope, however, are defects in OpenAI APIs, ChatGPT (including plugins), third-party integrations, public exposure of OpenAI API keys, and any of the domains operated by the company.
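On the API-key front, exposed OpenAI keys are generally recognizable because they begin with the prefix `sk-`. As a hypothetical illustration (the exact key format and the `find_exposed_keys` helper are assumptions, not part of OpenAI's program), a minimal scan of a codebase for accidentally committed keys might look like:

```python
import re
from pathlib import Path

# Assumed pattern: OpenAI secret keys typically start with "sk-" followed
# by a run of alphanumeric characters. The precise format may vary, so
# treat this regex as a heuristic, not an authoritative specification.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")

def find_exposed_keys(root: str) -> list[tuple[str, str]]:
    """Scan files under `root` for strings resembling OpenAI API keys."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in KEY_PATTERN.findall(text):
            hits.append((str(path), match))
    return hits
```

A researcher who finds such a key exposed in a public repository or web page could then report it through the Bugcrowd program rather than test it against the live API.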
The development follows OpenAI's patching of account takeover and data exposure flaws in ChatGPT, which prompted Italian data protection regulators to take a closer look at the platform.
Italian Data Protection Authority Proposes Measures to Lift ChatGPT Ban
The Garante, which imposed a temporary ban on ChatGPT on March 31, 2023, has since outlined a set of measures the Microsoft-backed firm will have to agree to implement by the end of the month in order for the suspension to be lifted.
“OpenAI will have to draft and make available, on its website, an information notice describing the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects,” the Garante said.
Additionally, the information notice should be readily available to Italian users before they sign up for the service. Users will also be required to declare that they are over the age of 18.
OpenAI has also been ordered to implement an age verification system by September 30, 2023, to filter out users under 13 and to have provisions in place to seek parental consent for users aged 13 to 18. The company has until May 31 to submit a plan for the age-gating system.
As part of efforts to exercise data rights, both users and non-users of the service can request the “rectification of their personal data” in cases where it's incorrectly generated by the service, or alternatively, have the data erased if corrections are technically infeasible.
Non-users, per the Garante, should further be provided with easily accessible tools to object to their personal data being processed by OpenAI’s algorithms. The company is also expected to run an advertising campaign by May 15, 2023, to “inform individuals on use of their personal data for training algorithms.”