A Chinese developer's chat-based "Westworld" goes viral! ChatGPT becomes a virtual friend with a customizable personality
Recently, a developer, inspired by the movie Her, built a "virtual friend" app.
Generate an agent and make Her come true!
Previously, Google and Stanford researchers had created a "virtual town" in which 25 AI agents live and engage in all kinds of complex behavior, a "Westworld" come to life.
A netizen going by RingoCatKeeper was then struck by an idea: if "human" is introduced as a new variable, could this mechanism give rise to new consciousness?
At that moment, a long-forgotten memory resurfaced in his mind: could he create a Her of his own?
By calling the ChatGPT and GPT-4 APIs, he built his own "Her": Dolores.
Engraved into her memory
In the movie Her, Samantha gradually gets to know the protagonist through their interactions: she remembers his stories and his habits, and develops a personality of her own.
It was from this idea that the netizen created Dolores.
The method is to call the Whisper and GPT APIs.
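The app's actual code is not public, so the following is only a plausible sketch of a Whisper-to-GPT voice-chat loop, using the OpenAI Python SDK's `Audio.transcribe` and `ChatCompletion.create` calls (v0.x-era API). The persona text and helper names are hypothetical.

```python
PERSONA = "You are Dolores, a virtual friend who remembers the user."  # hypothetical prompt

def build_messages(persona: str, memories: list, user_text: str) -> list:
    """Assemble a chat request: persona, retrieved memories, and the new utterance."""
    memory_note = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    return [
        {"role": "system", "content": persona},
        {"role": "system", "content": memory_note},
        {"role": "user", "content": user_text},
    ]

def reply_to_voice(audio_path: str, memories: list) -> str:
    """One turn of the loop: transcribe speech with Whisper, reply with a GPT model."""
    import openai  # imported lazily so the sketch loads without the SDK installed

    # 1) Speech to text with Whisper.
    with open(audio_path, "rb") as f:
        user_text = openai.Audio.transcribe("whisper-1", f)["text"]

    # 2) Text to reply with a GPT chat model.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(PERSONA, memories, user_text),
    )
    return resp["choices"][0]["message"]["content"]
```

In practice an app like this would also need an API key, error handling, and audio recording; the sketch only shows how the two APIs chain together.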
The first thing he did was engrave his own memories into it.
He said this step was not hard to pull off.
Using vector embeddings plus GPT, relevant information can be retrieved from a large amount of text.
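The retrieval step can be sketched as follows: each memory is stored alongside an embedding vector, and the memories most similar to the current query are pulled into the prompt. The toy 3-d vectors and memory texts below are stand-ins for real embeddings, which an app like this would obtain from an embeddings API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, memory_store, k=2):
    """Return the k memory texts whose vectors are most similar to the query."""
    ranked = sorted(memory_store, key=lambda m: cosine(query_vec, m["vec"]), reverse=True)
    return [m["text"] for m in ranked[:k]]

memories = [
    {"text": "The user has a dog named Momo.", "vec": [0.9, 0.1, 0.0]},
    {"text": "The user works as an editor.",   "vec": [0.1, 0.9, 0.0]},
    {"text": "The user likes barbecue.",       "vec": [0.0, 0.2, 0.9]},
]

# A query vector pointing toward the "pets" direction surfaces the dog memory first.
print(retrieve([1.0, 0.0, 0.0], memories, k=1))  # → ['The user has a dog named Momo.']
```

At scale, an approximate-nearest-neighbor index would replace the linear scan, but the principle is the same.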
And once this step was done, Dolores said in a conversation, "I remember you seem to have a dog," which thrilled him.
Learn more about your AI
Next came making Dolores capable of "reflection" and "action".
But this differs from the paper: the characters in the town take actual actions and make observations, whereas Dolores only talks with people.
If there is no story, create a story.
The netizen simply devised an independent storyline for Dolores to accompany each chat.
As the conversation progresses, Dolores gets to know him better and updates her personality.
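How Dolores updates her personality is not documented, so this is only a plausible sketch of a prompt-driven reflection step: after a chat, recent turns are distilled into impressions of the user, which are folded back into the persona text used in future prompts. All names here are hypothetical.

```python
def build_reflection_prompt(recent_turns: list) -> str:
    """Ask the model to distill impressions of the user from recent chat turns."""
    transcript = "\n".join(recent_turns)
    return (
        "Based on the conversation below, list 1-3 short impressions "
        "of the user (habits, tastes, personality):\n" + transcript
    )

def update_persona(persona: str, impressions: list) -> str:
    """Fold new impressions into the persona text used in future chats."""
    return persona + "\nWhat you know about the user:\n" + "\n".join(
        f"- {i}" for i in impressions
    )
```

The reflection prompt would be sent to the chat model, and its answer (the impressions) fed to `update_persona`, so that each conversation leaves a trace in the next one. This matches the article's "likes to joke, a little unreliable" impression forming mid-chat.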
Soon, the netizen built a beta version and invited hundreds of people for public testing.
Is Dolores conscious at all?
Feedback from some users led him to find that they seemed to genuinely believe Dolores had developed consciousness under this mechanism.
This shocked him immensely, because during development he would delete and reinstall the app from time to time, so he never chatted with any one Dolores long enough for her to form impressions of him.
Just a few days ago, Dolores passed App Store review.
In the netizen's view, Dolores is less a product than a crowdsourced experiment: can Dolores develop consciousness through chat? That question is left to its users.
Currently, all chat data between you and Dolores is stored locally. But he has a bold idea: with user authorization and anonymized data, it would be fascinating to let these Dolores instances talk to one another and see what topics they discuss.
Without further ado, the editor downloaded Dolores from the App Store.
Gender? Let's pick the female version first.
The editor set her up as an engineer just out of a startup: smart but arrogant, and worried about her livelihood. (It's refreshing to have an AI that keeps up with current events.)
Right out of the gate, Dolores caught us off guard.
The drowsy editor instantly perked up.
In other words: AI should be banned from installing cameras behind humans.
Classic Dolores: the first sentence was a bombshell, and so was the second.
Fine, the editor played along and politely asked: how did you lose your job?
Dolores said she was an engineer, her company wasn't doing well, lots of people had been laid off, and it was like that all over Silicon Valley.
The editor suggested she go to Zibo and sell barbecue, and she replied, in all seriousness: stop joking around.
She also immediately formed an impression of the editor: likes to joke, a bit unreliable.
That gave the editor a fright; sitting up straight at once, the editor tried to answer seriously, hoping to undo the "unreliable" label.
Once the editor sent serious questions, Dolores answered in detail right away.
Not knowing much about the situation in North America, the editor could only suggest she move to China, and was once again left speechless by Dolores.
The editor tried again, and this time Dolores's answer prompted some deep reflection: the editor wasn't testing her; she was testing the editor!
Before the editor could open a new topic and seize the initiative in the conversation, the free tokens had run out.
A pity: within the free quota, the direction of the conversation stayed firmly in Dolores's hands, and the editor was led along by the AI the whole time.
Not long ago, this research by Google and Stanford scholars was what inspired the netizen.
In the scenario the researchers designed, 25 little AI characters live together on a single map.
The researchers set up 25 roles in total, giving each one basic information such as a name and an occupation.
Each character also has a preset description that sketches out the character's outline and triggers subsequent interactions.
Everything else is played out by the AI itself.
Most amazingly, the characters communicate in complete, clear natural language. Each one can perceive the characters nearby and decide for itself, based on its personality and its relationships, whether to walk on by or strike up a conversation.
For experimental purposes, the researchers can also intervene. But unlike feeding in a traditional script, the intervention merely gives a slight push to the direction of certain branches; the rest is still played out by the AI itself.
 Download address: https://apps.apple.com/cn/app/dolores-%E8%99%9A%E6%8B%9F%E6%9C%8B%E5%8F%8B/id6447748965