More than 80 unreviewed, hidden ChatGPT plug-ins were exposed
Less than a day after ChatGPT plug-ins took off, a hacker discovered the hidden ones.
At 9:00 pm local time on the 24th, a hacker known on Twitter as "rez0" (also a prompt engineer) discovered something very interesting while probing the new ChatGPT plugin API: removing a specific parameter from the request returned more than 80 hidden plugins, including a DAN plugin, cryptocurrency price plugins, the Wolfram plugin, the Instacart plugin, and more.
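The exact endpoint and parameter rez0 removed were never published, so the names below are hypothetical placeholders; but the generic technique of stripping a filter parameter from an API request can be sketched like this:

```python
# A minimal sketch of the parameter-stripping technique rez0 described.
# The domain, path, and parameter name here are hypothetical; the real
# ChatGPT plugin API is private and its details were not disclosed.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_param(url: str, param: str) -> str:
    """Return `url` with the query parameter `param` removed."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunparse(parts._replace(query=urlencode(query)))

# Dropping a server-side filter parameter (here an invented `statuses`)
# is the kind of change that would make the server return everything,
# including unreviewed plugins, instead of only the approved ones.
original = "https://chat.example.com/backend-api/plugins?offset=0&statuses=approved"
modified = strip_param(original, "statuses")
print(modified)  # https://chat.example.com/backend-api/plugins?offset=0
```

The lesson is that filtering done only via a client-supplied query parameter is not access control: the server honored whatever filter the client did (or did not) send.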
The API also exposed each plugin's "description for model" field, which is never meant to be shown to users. That is a real security risk, because the field reveals exactly how the model is instructed to use the plugin.
Earlier, rez0 had suspected that unreleased plugins could be enabled by setting "match and replace" rules in an HTTP proxy: if only the client checked whether you were authorized to use a plugin, that check could be bypassed in transit. But rez0 soon walked that guess back as incorrect, because the API itself either returns an unvetted plugin or it doesn't.
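For context, a "match and replace" rule in an intercepting proxy (such as Burp Suite or mitmproxy) blindly rewrites response bytes on the way to the client. The field name below is invented for illustration; the point is that if only the client inspects such a flag, rewriting it in transit defeats the check:

```python
# Sketch of the match-and-replace rule an intercepting proxy applies.
# `user_is_authorized` is a hypothetical field name, not a real ChatGPT
# API field; it stands in for any client-side-only authorization flag.
import json

def apply_match_replace(body: str, match: str, replace: str) -> str:
    """Blindly rewrite every occurrence of `match` in a response body."""
    return body.replace(match, replace)

response = json.dumps({"plugin": "wolfram", "user_is_authorized": False})
patched = apply_match_replace(response, '"user_is_authorized": false',
                              '"user_is_authorized": true')
print(json.loads(patched)["user_is_authorized"])  # True
```

This is exactly why rez0's clarification matters: since the ChatGPT server decides what to return, no amount of client-side rewriting can conjure up a plugin the API never sends.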
The news set off an uproar among netizens: OpenAI clearly has the most powerful technology, yet it keeps behaving like an amateur.
Rez0 says the bug has since been fixed.
Earlier, on Monday, ChatGPT was revealed to have a user-privacy vulnerability: some features were temporarily restricted, and many users' historical conversation titles showed up in other people's chat sidebars.
Sam Altman, the "father of ChatGPT," tweeted an apology, saying the problem came from a bug in an open-source library, and ChatGPT's chat history was taken offline for about 10 hours that day while the bug was fixed.
Discussion: the risks of ChatGPT plugins, and more
OpenAI recently launched ChatGPT plugins, which extend the bot's capabilities by granting it access to third-party knowledge sources and databases, including the web. OpenAI is offering an alpha version to ChatGPT users and developers on its waitlist, and says it will prioritize a small number of developers and subscribers to its paid ChatGPT Plus plan before rolling out broader access and an API.
Web browsing plugin: cited sources can still mislead
The most intriguing plugin is undoubtedly OpenAI's own first-party web browsing plugin, which lets ChatGPT pull data from the web to answer the questions posed to it. (Previously, ChatGPT's knowledge was limited to dates, events, and people prior to September 2021.) The plugin uses the Bing Search API to retrieve content from around the web, shows any sites it visited while crafting an answer, and cites its sources in ChatGPT's reply.
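The Bing Web Search API that the plugin relies on is publicly documented; a request is a GET against the v7.0 endpoint with a subscription-key header. Whether the plugin calls it exactly this way is an assumption, but the shape of such a request looks like this:

```python
# Hedged sketch of a Bing Web Search API v7.0 request, the public API
# the browsing plugin is said to use. "<YOUR_KEY>" is a placeholder for
# a real subscription key; no network call is made here.
from urllib.parse import urlencode

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

def build_search_request(query: str, count: int = 5):
    """Return the URL and headers for a Bing web search."""
    url = f"{BING_ENDPOINT}?{urlencode({'q': query, 'count': count})}"
    headers = {"Ocp-Apim-Subscription-Key": "<YOUR_KEY>"}
    return url, headers

url, headers = build_search_request("ChatGPT plugins")
print(url)  # https://api.bing.microsoft.com/v7.0/search?q=ChatGPT+plugins&count=5
```

The JSON response contains ranked `webPages` results with URLs and snippets, which is the raw material a browsing model then reads and cites.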
OpenAI's own research suggests the risk cannot be eliminated. In 2021 the company built an experimental system called WebGPT that sometimes cited unreliable sources and was incentivized to cherry-pick data from websites users would find convincing, even when those sources were not objectively the most trustworthy. Meta's now-defunct BlenderBot 3.0 also had web access, and it quickly went off the rails, repeating conspiracy theories and offensive content when prompted with specific text.
Real-time data is poorly curated, and results are easily manipulated
Compared with a static training dataset, the live web is far less organized, filtered, and cleaned. Search engines like Google and Bing use their own safety mechanisms to reduce the chance of dodgy content surfacing at the top of results, but those results can be gamed, and they are not necessarily representative of the entire web. Google's algorithm also favors sites built with modern web technologies such as encryption, mobile support, and schema markup, so many sites with otherwise good content get lost in the shuffle.
Search engines therefore have an outsized influence on the answers of internet-connected language models. Google, for example, tends to prioritize its own services in search, answering travel queries with data from Google Places rather than richer, more social third-party sources such as TripAdvisor. Search algorithms have likewise opened the door to bad actors: in 2020, Pinterest exploited a "quirk" of Google's image search algorithm to surface more of its content in Google Image Search.
For now: OpenAI's promises remain to be tested
OpenAI acknowledged that a web-enabled ChatGPT could perform all kinds of bad behavior, such as sending fraudulent or spam emails, bypassing safety restrictions, and generally "increasing the capabilities of bad actors who would defraud, mislead, or abuse others." But the company also said it has "implemented several safeguards" against this, informed by internal and external "red team" exercises. Time will tell whether those safeguards are good enough.
Beyond the browsing plugin, OpenAI has also released a code interpreter for ChatGPT, which gives the chatbot a working Python interpreter running in a sandboxed, firewalled environment with some disk space. It supports uploading files to ChatGPT and downloading results; OpenAI says it is especially useful for solving math problems, doing data analysis and visualization, and converting files between formats.
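To make that concrete, here is the kind of small, self-contained job OpenAI describes: analyzing an uploaded CSV and converting it to another format. The data is invented for illustration; only the standard library is used, as a sandboxed interpreter would require:

```python
# Example of a task suited to a sandboxed Python interpreter: quick data
# analysis plus a format conversion, with no network access and no
# third-party packages. The CSV contents are made up for illustration.
import csv
import io
import json
import statistics

raw = "month,revenue\nJan,1200\nFeb,1500\nMar,900\n"  # stand-in for an uploaded file

rows = list(csv.DictReader(io.StringIO(raw)))
revenues = [int(r["revenue"]) for r in rows]
print("mean revenue:", statistics.mean(revenues))

# Convert the uploaded CSV to JSON, the kind of file a user would download.
print(json.dumps(rows))
```

Because the environment is firewalled, everything the interpreter needs must arrive via file upload, which is why the upload/download workflow matters.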
Many of OpenAI's early collaborators built plugins for ChatGPT, including Expedia, FiscalNote, Instacart, Kayak, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier.
The purpose of each plugin is largely self-explanatory. The OpenTable plugin, for example, lets ChatGPT search for available reservations across restaurants, while the Instacart plugin lets ChatGPT place orders from local stores. Zapier is by far the most extensible of the bunch, connecting to apps like Google Sheets, Trello, and Gmail to trigger a range of productivity tasks.
To make new plugins easier to build, OpenAI has open-sourced a "retrieval" plugin that lets ChatGPT fetch document snippets from data sources such as files, notes, emails, or public documents in response to natural-language questions.
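The open-source retrieval plugin's repository documents a `/query` endpoint that accepts a list of natural-language queries, with `top_k` controlling how many snippets come back. A sketch of building such a request body (the question text is invented, and the hosting URL would be wherever you deploy the plugin):

```python
# Sketch of a request body for the retrieval plugin's documented /query
# endpoint. The question is a made-up example; the payload shape follows
# the plugin's published schema as I understand it.
import json

def build_query_payload(question: str, top_k: int = 3) -> str:
    """Build the JSON body for the retrieval plugin's /query endpoint."""
    return json.dumps({"queries": [{"query": question, "top_k": top_k}]})

payload = build_query_payload("What did the Q3 planning email decide?")
print(payload)
```

The plugin embeds the query, searches a vector store the operator controls, and returns matching snippets, which is how companies can expose their data to ChatGPT without handing it over for training.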
"We're working hard to develop plugins and bring them to a wider audience," OpenAI wrote in a blog post. "We have a lot to learn, and with everyone's help, we hope to build something that is both useful and safe."
Final thoughts
Plugins are a rather novel addition to ChatGPT's development timeline. Once limited to the information in its training data, ChatGPT with plugins is suddenly far more capable, and potentially less legally risky.
Some experts have accused OpenAI of profiting from data it never got permission to train on; ChatGPT's dataset includes all sorts of public websites. Plugins could sidestep that problem by letting companies keep full control of their data.
In short, here's hoping ChatGPT improves quickly and makes as few "rookie mistakes" as possible, and that users approach new technology with a clear, objective eye.