Microsoft releases Turing Bletchley v3 visual language model: Bing searches for pictures more accurately
Microsoft issued a press release today announcing the launch of the third-generation Turing Bletchley visual language model, which will be gradually integrated into products such as Bing to significantly improve the image search experience.
Microsoft released the first version of the Turing Bletchley visual language model in November 2021 and began inviting users to test Turing Bletchley v3 in the fall of 2022.
After a long period of refinement, during which Microsoft actively adjusted the model based on user feedback and suggestions, the model can now return images that match search keywords more accurately. According to the blog post, a search for the Chinese query "dog eating ice cream" now yields results that better match the keywords.
Microsoft said it already uses the Turing Bletchley v3 visual language model for content moderation on Xbox gaming services, helping teams identify images and videos that Xbox players upload to their personal profiles and fostering a healthier community environment.