OpenAI offers bug bounty for ChatGPT — but no rewards for jailbreaking its chatbot
by James Vincent, The Verge
Illustration: The Verge
OpenAI has launched a bug bounty, encouraging members of the public to find and disclose vulnerabilities in its AI services, including ChatGPT. Rewards range from $200 for "low-severity findings" to $20,000 for "exceptional discoveries," and reports can be submitted via the crowdsourcing cybersecurity platform Bugcrowd.
Notably, the bounty excludes rewards for jailbreaking ChatGPT or causing it to generate malicious code or text. "Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded," says OpenAI's Bugcrowd page.
Jailbreaking ChatGPT usually involves inputting elaborate scenarios into the system that allow it to bypass its own safety filters. These might include encouraging the chatbot...