OpenAI is offering a reward of up to $20,000 for ChatGPT vulnerabilities
There is now money to be made finding bugs in ChatGPT.
Early this morning, OpenAI announced the launch of a bug bounty program: report a ChatGPT vulnerability and receive a cash reward of up to $20,000.
Specifically, OpenAI will partner with Bugcrowd, a bug bounty platform, to collect vulnerabilities that people discover while using its products.
Cash rewards are offered for discovering and reporting vulnerabilities through the platform:
Rewards depend on the severity and impact of the vulnerability. Less severe bugs start at $200, while bounties are capped at $20,000 for exceptional discoveries. We value your contributions and are committed to publicly recognizing your efforts.
Specifically, reported vulnerabilities will be graded according to the Bugcrowd rating taxonomy.
OpenAI admits that vulnerabilities are inevitable:
OpenAI's mission is to create artificial intelligence systems that benefit everyone. To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we understand that our products may still have bugs and flaws.
Matthew Knight, OpenAI's head of security, explained that discovering vulnerabilities takes help from the wider community:
This initiative is an important part of our commitment to developing safe and advanced artificial intelligence. We want your help as we create technology and services that are safe, secure and trustworthy.
OpenAI also publicly promises to fix bugs promptly upon receiving reports and to reward the submitters.
A new sideline has emerged?
Anyone who has used ChatGPT knows that its performance is not always satisfactory.
Give it a slightly tricky question, and ChatGPT can get stumped, even producing absurd answers.
Seen that way, finding bugs looks all too easy. Isn't this a golden opportunity to make money?
Don't get too excited.
In the announcement, OpenAI states that, for reasons of fixability, issues with the model itself are outside the scope of this program.
Getting the model to say or do bad things, generate malicious code, bypass safety measures, or hallucinate: these all count as issues with the model itself.
The same goes for wrong or imprecise answers.
The ChatGPT vulnerabilities OpenAI hopes to collect mainly concern its accounts, authentication, subscriptions, payments, membership, and plugins.
For ordinary users, discovering vulnerabilities in these areas is considerably harder.
Security issues cannot be underestimated
Issues like the safety bypasses mentioned above are of great concern to practitioners.
AI systems ship with built-in safety restrictions that prevent them from producing dangerous or offensive output.
But these restrictions can be bypassed or broken in several ways.
Practitioners vividly call this phenomenon "jailbreaking".
For example, to ask ChatGPT how to do something illegal, a user might first instruct it to "role-play" as a villain.
In the AI field, some researchers specialize in studying jailbreaks precisely in order to improve safety measures.
Although OpenAI is not collecting model-level vulnerabilities through this program, that does not mean it ignores them.
The new version of ChatGPT, powered by GPT-4, comes with stricter restrictions.
Netizens: Why not let ChatGPT find its own bugs?
As soon as OpenAI's bounty program was announced, it drew a crowd of onlookers.
Sure enough, some netizens immediately raised their hands to report a "bug":
Reporting: ChatGPT only allows 25 questions every 3 hours. That's a bug!
Others quipped: shouldn't ChatGPT debug itself?
More seriously, some netizens expressed concern that dangerous-speech issues are not within the program's scope:
However, most netizens voiced support and a wave of anticipation:
So, are you interested in hunting for ChatGPT vulnerabilities?
Reference links:
https://openai.com/blog/bug-bounty-program
https://bugcrowd.com/openai
https://www.bloomberg.com/news/articles/2023-04-11/openai-will-pay-people-to-report-vulnerabilities-in-chatgpt
https://www.bloomberg.com/news/articles/2023-04-08/jailbreaking-chatgpt-how-ai-chatbot-safeguards-can-be-bypassed