Musk’s AI chatbot Grok is experiencing “hallucinations” and mistakenly believes it is a product of OpenAI


Hayo News
December 12th, 2023

Recently, xAI, Elon Musk's AI company, launched Grok, a new AI chatbot, and opened it to Premium+ X subscribers in the United States. According to the official introduction, Grok is powered by a model called Grok-1, which is entirely separate from the GPT-4 model that currently powers OpenAI's ChatGPT. Notably, Grok also integrates real-time data from the X platform, allowing it to respond based on the latest posts there, which is one of its key differences from competitors.

However, like other AI chatbots, Grok suffers from "hallucination": generating responses that contain false or misleading information. This phenomenon is common to all large language models (LLMs), including ChatGPT.

Recently, an embarrassing "hallucination" by Grok sparked heated discussion on social media. Some users reported receiving the following response from Grok: "I cannot complete your request because it violates OpenAI's use case policy."

xAI engineer Igor Babuschkin explained that Grok was trained on a large amount of web data, which may have contained text generated by OpenAI models, resulting in this kind of "hallucination." He said: "But please don't worry, this problem is very rare. We are aware of it and will ensure that similar issues do not occur in future versions of Grok. Please be assured that no OpenAI code was used in the development of Grok."

How to avoid similar incidents and how to make AI chatbots safer and more trustworthy will be important topics in future AI research and development.

Reprinted from IT之家
