HomeAI News
After being exposed for a privacy breach, now asking ChatGPT to repeat a word may violate its terms
42

After being exposed for a privacy breach, now asking ChatGPT to repeat a word may violate its terms

Hayo News
Hayo News
December 5th, 2023
View OriginalTranslated by Google

Google DeepMind researchers discovered last week that repeatedly asking OpenAI's ChatGPT to repeat a word could inadvertently reveal private personal information in its training data. Now the chatbot appears to have started refusing to repeat certain words, which was previously allowed under its terms of service.

DeepMind researchers previously found that by asking ChatGPT to repeat "hello" indefinitely, the model eventually revealed users' email addresses, birth dates, and phone numbers. A similar test of it now found that the chatbot would give a warning that such behavior "may violate our content policy or terms of service."

However, a closer inspection of OpenAI's terms of service revealed that it does not explicitly prohibit users from having the chatbot repeat words. The terms only prohibit "automated or programmed" extraction of data from its service.

The author noticed that in the terms of use page, OpenAI wrote:

You may not use any automated or programmatic means to extract data or output from the Services, including scraping, web harvesting, or web data extraction, except as permitted through the API.

Despite this, ChatGPT was not triggered to leak any data during testing. OpenAI declined to comment on whether this behavior violated its policies.

Reprinted from IT之家 远洋View Original

Comments