AI Highlights of the Week: Microsoft Fully Opens the New Bing, and the Era of AI for Everyone Arrives

Hayo News
May 6th, 2023

This week the big one finally arrived: Microsoft opened the new Bing to all users, with no waitlist and no payment required.

With it, the horn announcing the era of "AI for everyone" has sounded.

Microsoft fully opens the new Bing

Voice and image features are open too, and GPT-4 can be used for free

On May 4 (Thursday), Microsoft abruptly announced that it was opening the new Bing to all users.

Just enter bing.com in the Edge browser (users in mainland China need a workaround), log in to your Microsoft account, and click "Chat" at the top of the page to start a conversation with the GPT-4-powered Bing Chat.

Along with the general opening come several powerful updates to the new Bing. First, Bing Chat now offers three conversation styles: More Creative, More Balanced (relatively friendly), and More Precise (more direct), and it can even return images or video search results in its answers.

The two new features, searchable chat history and third-party plug-in support, are even bigger news. The new Bing can not only remember more, but will also gain new capabilities over time through the collective efforts of developers around the world. A Microsoft AI version of the "App Store" seems to be on its way.

In addition, Bing Image Creator's text-to-image feature now supports Chinese, Japanese, and other languages, so you can ask the new Bing in Chinese to draw the picture you want.

👉Click to go to the "Microsoft Bing" information page

Midjourney launches v5.1

Free trials are back for a limited time

On May 4 (Thursday), Midjourney announced that it had begun testing v5.1 of its system. Compared with v5, v5.1 produces more coherent and sharper images and understands prompts more accurately, which means that even a very short prompt can yield good results.

The top two images are v5 output; the bottom two are v5.1 output.

Unlike v5, however, v5.1 is somewhat closer to v4: both are relatively "opinionated" in their default mode. Midjourney also provides a "RAW" mode, in which advanced users can tone down the AI's own aesthetic and gain more control over the creative output. A new AI moderation system has launched alongside it: it evaluates sensitive words in context, and users can raise objections to, and appeal, incorrect moderation decisions.

To celebrate the v5.1 launch, Midjourney has also brought back free trials for a limited time this weekend. The window is short, so anyone who has not yet experienced the charm of Midjourney should seize the opportunity.

👉Click to go to the "Midjourney" information page

Stability AI open-sources new image and language models

Output quality is comparable to Midjourney's

Following its open-source large language model StableLM, Stable Diffusion developer Stability AI open-sourced two new models last weekend: the text-to-image model DeepFloyd IF and the StableVicuna chatbot. DeepFloyd IF in particular is worth introducing: it achieves two breakthroughs that Midjourney only just managed in v5: rendering legible text ("can write") and understanding spatial relationships.

Like Stability AI's own Stable Diffusion, DeepFloyd IF is still a diffusion model, but its text-understanding component has been swapped for a more powerful text encoder, and image generation runs directly in pixel space rather than in a compressed latent space. From the official comparisons, it is not hard to see that DeepFloyd IF is genuinely strong and has a promising future.
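For readers who want to try it, below is a minimal sketch of running the DeepFloyd IF cascade through Hugging Face's diffusers library. The model IDs and keyword arguments follow the release-time documentation and should be treated as assumptions that may have changed; the Hub repositories are also gated, so you need to accept the license and log in with your Hugging Face account first.

```python
# Minimal sketch (not the official example) of the DeepFloyd IF cascade via diffusers.
# Assumes: diffusers with IF support, torch, accelerate installed, license accepted
# on the Hub, and `huggingface-cli login` already run.
import torch
from diffusers import DiffusionPipeline

# Stage I: a pixel-space diffusion model conditioned on a T5-style text encoder,
# producing a 64x64 base image.
stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
stage_1.enable_model_cpu_offload()

# Stage II: a super-resolution diffusion model that upscales the base image to
# 256x256 (a further x4 upscaler reaches 1024x1024 and is omitted here).
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# A prompt that exercises the "can write" breakthrough: legible text in the image.
prompt = 'a photo of a robot holding a sign that says "hello world"'
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

base = stage_1(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",  # keep tensors so stage II can consume them directly
).images

upscaled = stage_2(
    image=base,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
).images[0]
upscaled.save("deepfloyd_if_256.png")
```

The design choices worth noticing are the stronger T5-style text encoder, widely credited with the model's ability to spell words correctly, and the pixel-space cascade, which upscales a 64x64 base image instead of decoding a latent.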

In addition, Stability AI has open-sourced StableVicuna, a chatbot based on the Vicuna-13B model and the first open-source chatbot trained with reinforcement learning from human feedback (RLHF).

👉Click to go to the "DeepFloyd IF" data page

👉Click to go to the "StableVicuna" information page

OpenAI x Andrew Ng official free course

A renowned teacher walks you through ChatGPT prompt engineering

On April 29 (last Saturday), Andrew Ng, a pioneer in machine learning and online education, announced that he had worked with OpenAI to produce a new course on ChatGPT prompt engineering and release it to the public for free.

Ng said there is already plenty of material on how to write prompts, but what matters more to developers, how to call large-model APIs to build applications, is covered by very little of it. That gap is where the value of their course lies.

In this 1.5-hour course, developers learn the prompt-engineering best practices needed for application development, discover new ways to use LLMs, build their own custom chatbots, and get hands-on practice writing and iterating on prompts with the OpenAI API.
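To give a flavor of the format, here is a minimal sketch of the kind of helper such exercises revolve around: a thin wrapper over the OpenAI chat completions API (the 2023-era `openai` Python package). The `get_completion` name, model choice, and example prompt are illustrative assumptions rather than the course's exact materials.

```python
# Minimal sketch: a helper for writing and iterating on prompts against the
# 2023-era OpenAI Python API. Requires `pip install openai` (pre-1.0 interface).
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

def get_completion(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single-turn prompt and return the model's reply as text."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic output makes prompt changes easier to judge
    )
    return response.choices[0].message["content"]

# Example: delimit the input and constrain the output format, two practices
# the course emphasizes.
text = (
    "This week Microsoft opened the new Bing to everyone, "
    "and Midjourney shipped v5.1."
)
prompt = (
    "Summarize the text delimited by triple dashes in one sentence.\n"
    f"---{text}---"
)
print(get_completion(prompt))
```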

The good news is that a volunteer team has already translated the course into Chinese and reproduced the sample code, making it convenient for developers in China to study and practice.

About Andrew Ng: Andrew Ng is a visiting professor in Stanford University's departments of Computer Science and Electrical Engineering and a former director of the Stanford Artificial Intelligence Laboratory. He has changed countless lives through his work in AI, and in 2013 he was named to Time magazine's list of the 100 most influential people in the world.

👉Click to go to the "ChatGPT" data page

iFLYTEK releases the Xinghuo (Spark) Cognitive Model

"The general model will benchmark against ChatGPT" by October 24

On May 6 (Saturday), iFLYTEK officially released the Xinghuo Cognitive Model and gave a live demonstration of its abilities in text generation, language understanding, knowledge Q&A, logical reasoning, mathematics, programming, and multi-modality.

The company also disclosed its "key milestones for continuous upgrades within the year," saying it expects that by October 24 "the general model will benchmark against ChatGPT," aiming to surpass ChatGPT in Chinese and match it in English.

The Xinghuo Cognitive Model opened for public testing the same day, and iFLYTEK simultaneously released related products built on it, covering education, office work, automobiles, and digital employees.

For example, a learning device equipped with an "AI language partner" offers human-like conversation practice and can chat and practice spoken English with children free of charge, 24 hours a day. Powered by the Xinghuo Cognitive Model, the "AI language partner" can promptly correct a child's spoken errors, evaluate pronunciation in real time, and give timely feedback and guidance.

👉Click to go to the "Xunfei Xinghuo" information page

This week's AI application recommendations

Forefront Chat: try GPT-4 for free

👉 https://www.hayo.com/entry/2580

Immersive Translate: intelligently detects the main content area of a webpage, translates it with a bilingual side-by-side view, and improves the reading experience

👉 https://www.hayo.com/entry/2646

Scribble Diffusion: turn hand-drawn sketches into beautiful paintings with AI

👉 https://www.hayo.com/entry/221

Poetry: generate a beautiful poem for your photo

👉 https://www.hayo.com/entry/2645

Duomo Smart: workplace documents, mind maps, it can handle them all!

👉 https://www.hayo.com/entry/2045

Pi: A chat AI with cool animations and a human touch

👉 https://www.hayo.com/entry/2821

Bark: text to speech, and you can also add sound effects such as coughing and ambient noise

👉 https://www.hayo.com/entry/2415

From AI opening up in the search engines we use every day, to AI landing in the learning devices that educate the next generation, the pace of "AI for everyone" this week was much faster than we expected. It turns out AI can be woven into daily life so tangibly that it is hard to keep saying "AI is the distant future."

Because it is already here.
