Microsoft officially releases AI content review tool Azure AI Content Safety

Hayo News
October 18th, 2023

To foster a healthy community environment and reduce negative content related to bias, hate, and violence in images and text, Microsoft launched the AI content review tool Azure AI Content Safety in May of this year. After months of testing, the tool was officially released today.

Azure AI Content Safety provides a set of trained AI models that detect negative content related to bias, hate, and violence in images or text. It can understand and tag content in eight languages, assigning each item a severity score that indicates to human reviewers which content requires action.
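The severity scores described above are meant to drive downstream moderation decisions. The sketch below shows how a pipeline might route content based on per-category scores; the category names, the 0-7 severity scale, and the action thresholds are illustrative assumptions, not the service's documented behavior.

```python
# Hypothetical routing logic for per-category severity scores such as
# those a content-safety service might return. Threshold values and
# category names here are assumptions for illustration only.

def route_for_review(scores: dict[str, int], threshold: int = 4) -> str:
    """Map per-category severity scores (assumed 0-7) to a moderation action."""
    worst = max(scores.values(), default=0)
    if worst == 0:
        return "allow"            # nothing harmful detected
    if worst < threshold:
        return "allow_with_log"   # low severity: keep, but record for audit
    return "human_review"         # high severity: escalate to a human reviewer

# Example: moderate violence score in one category triggers escalation
sample = {"hate": 2, "violence": 5, "sexual": 0, "self_harm": 0}
print(route_for_review(sample))  # -> human_review
```

The key design point is that the model only scores content; the final decision on high-severity items is deferred to human reviewers, matching the workflow the article describes.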

The content moderation tool was initially integrated into the Azure OpenAI service, but Microsoft is now officially launching it as a standalone system.

"This means customers can use it for AI-generated content from open source models and other company models, as well as call upon some user-generated content, further extending the utility," Microsoft wrote in an official blog post.

Microsoft says the product significantly improves on impartiality and contextual understanding compared with similar products, but it still relies on human reviewers to flag data and content. Ultimately, then, its fairness depends on humans: reviewers may bring their own biases when handling data and content, so the tool cannot be completely neutral and prudent.

Reprinted from IT之家, by 故渊.
