ChatGPT Chat History Can Now Be Turned Off, but the Best Plug-in Goes With It! OpenAI Officially Announces New Privacy Policy
After Italy banned ChatGPT, OpenAI rushed out an update to its privacy policy. But why does a key plug-in stop working?
After being banned in Italy over user-privacy concerns, OpenAI took the lesson to heart and today officially announced a new way to manage data:
Users can turn off ChatGPT's "chat history" by themselves.
With history off, conversations will no longer be used to train and improve OpenAI's models, nor will they appear in the history sidebar.
Strangely, though, turning off chat history also disables the plug-in system...
Isn't that a massive downgrade?
Turn off history → stop training
OpenAI said that the ability to turn off history will be rolled out to all users starting April 25.
The toggle can be found in ChatGPT's settings and changed at any time.
OpenAI says it hopes this feature offers an easier way to manage data than the existing opt-out process.
When chat history is disabled, new conversations will be retained for 30 days and reviewed only when needed to monitor for abuse, after which they will be permanently deleted.
Additionally, OpenAI is developing a new commercial ChatGPT offering for professionals who need more control over their data, as well as businesses that manage end users.
ChatGPT Business will follow the API's existing data-usage policy, which means end users' data will not be used to train OpenAI's models by default.
OpenAI said it plans to launch ChatGPT Business in the coming months.
What is OpenAI playing at?
This brings us back to the question at the beginning:
once a user turns off chat history, plug-ins stop working too.
Before turning off history:
Take the Browsing model as an example: this feature is not just a killer capability, it is genuinely easy to use. After all, ChatGPT's own training data only goes up to 2021 and covers nothing that came after.
Previously, when asked about something it didn't know, the model would state outright that its training data lacked the information needed to answer.
With browsing enabled, ChatGPT can retrieve information from the web on its own and answer.
This is also one of the biggest highlights of ChatGPT.
Now users must trade privacy against convenience: turn off history and the model's knowledge stops at 2021; leave it on and your chat history may be used for training.
So are users being played by OpenAI?
For now, though, most users would still rather keep playing around with ChatGPT.
Turning off chat history seems to matter little to them.
References:
https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt