Article 6AQ68 OpenAI offers bug bounty for ChatGPT — but no rewards for jailbreaking its chatbot

by James Vincent, from The Verge - All Posts (#6AQ68)
Illustration: The Verge

OpenAI has launched a bug bounty, encouraging members of the public to find and disclose vulnerabilities in its AI services, including ChatGPT. Rewards range from $200 for "low-severity findings" to $20,000 for "exceptional discoveries," and reports can be submitted via the crowdsourced cybersecurity platform Bugcrowd.

Notably, the bounty excludes rewards for jailbreaking ChatGPT or causing it to generate malicious code or text. "Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded," says OpenAI's Bugcrowd page.

Jailbreaking ChatGPT usually involves inputting elaborate scenarios into the system that allow it to bypass its own safety filters. These might include encouraging the chatbot...

Continue reading...

External Content
Source RSS or Atom Feed
Feed Location http://www.theverge.com/rss/index.xml
Feed Title The Verge - All Posts
Feed Link https://www.theverge.com/