action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home4/scienrds/scienceandnerds/wp-includes/functions.php on line 6114Source:https:\/\/techcrunch.com\/2023\/05\/23\/microsoft-launches-new-ai-tool-to-moderate-text-and-images\/<\/a><\/br> Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.<\/p>\n Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect \u201cinappropriate\u201d content across images and text. The models \u2014 which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese \u2014 assign a severity score to flagged content, indicating to moderators what content requires action.<\/p>\n \u201cMicrosoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren\u2019t effectively taking into account context or able to work in multiple languages,\u201d the Microsoft spokesperson said via email. \u201cNew [AI] models are able to understand content and cultural context so much better. They are multilingual from the start \u2026 and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.\u201d<\/p>\n During a demo at Microsoft\u2019s annual Build conference, Sarah Bird, Microsoft\u2019s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft\u2019s chatbot in Bing<\/a>\u00a0and Copilot<\/a>, GitHub\u2019s AI-powered code-generating service.<\/p>\n \u201cWe\u2019re now launching it as a product that third-party customers can use,\u201d Bird said in a statement.<\/p>\n Presumably, the tech behind Azure AI Content Safety has improved since it first launched for Bing Chat in early February. Bing Chat went off the rails when it first rolled out in preview; our coverage<\/a> found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats and even shame them for admonishing it.<\/p>\n In another\u00a0knock<\/a>\u00a0against Microsoft, the company just a few months ago laid off the ethics and society team within its larger AI organization. The move left Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.<\/p>\n Setting all that aside for a moment, Azure AI Content Safety \u2014 which protects against biased, sexist, racist, hateful, violent and self-harm content, according to Microsoft \u2014 is integrated into Azure OpenAI Service, Microsoft\u2019s fully managed, corporate-focused product intended to give businesses access to OpenAI\u2019s technologies with added governance and compliance features. But Azure AI Content Safety can also be applied to non-AI systems, such as online communities and gaming platforms.<\/p>\n Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.<\/p>\n Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective<\/a>, maintained by Google\u2019s Counter Abuse Technology Team, and Jigsaw, and succeeds Microsoft\u2019s own Content Moderator<\/a> tool. 
(No word on whether it was built on Microsoft's 2021 acquisition of Two Hat, a content moderation provider.) Those services, like Azure AI Content Safety, offer a score from zero to 100 on how similar new comments and images are to others previously identified as toxic.

But there's reason to be skeptical of them. Beyond Bing Chat's early stumbles and Microsoft's poorly targeted layoffs, studies have shown that AI toxicity detection tech still struggles to overcome challenges, including biases against specific subsets of users.

Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn't recognize hate speech that used "reclaimed" slurs like "queer" and spelling variations such as missing characters.

The problem extends beyond toxicity-detectors-as-a-service. This week, a New York Times report revealed that eight years after a controversy over Black people being mislabeled as gorillas by image analysis software, tech giants still fear repeating the mistake.

Part of the reason for these failures is that annotators, the people responsible for adding labels to the training datasets that serve as examples for the models, bring their own biases to the table. For example, there are frequently differences between the annotations of labelers who self-identify as African American or as members of the LGBTQ+ community and those of annotators who don't identify as either of those groups.

To combat some of these issues, Microsoft allows the filters in Azure AI Content Safety to be fine-tuned for context. Bird explains:

For example, the phrase "run over the hill and attack," used in a game, would be considered a medium level of violence and blocked if the gaming system was configured to block medium-severity content. An adjustment to accept medium levels of violence would enable the model to tolerate the phrase.

"We have a team of linguistic and fairness experts that worked to define the guidelines, taking into account cultural, language and context," a Microsoft spokesperson added. "We then trained the AI models to reflect these guidelines ... AI will always make some mistakes, [however,] so for applications that require errors to be nearly non-existent, we recommend using a human-in-the-loop to verify results."

One early adopter of Azure AI Content Safety is Koo, a Bangalore, India-based blogging platform with a user base that speaks over 20 languages. Microsoft says it's partnering with Koo to tackle moderation challenges like analyzing memes and learning the colloquial nuances in languages other than English.

We weren't offered the chance to test Azure AI Content Safety ahead of its release, and Microsoft didn't answer questions about its annotation or bias mitigation approaches. But rest assured we'll be watching closely to see how Azure AI Content Safety performs in the wild.
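Bird's example, where "run over the hill and attack" is blocked or allowed depending on whether a platform is configured to block medium-severity violence, amounts to a per-category threshold applied to the scores the service returns. The helper below is a hypothetical client-side sketch of that decision, building on the earlier call; the numeric severity scale and category names are assumptions based on the article's description, not a documented contract.

```python
# Hypothetical policy check layered on top of the analyze_text() sketch above.
# The severity scale and category names are assumptions, not confirmed values.
DEFAULT_THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 2}

def should_block(categories_analysis: list[dict],
                 thresholds: dict = DEFAULT_THRESHOLDS) -> bool:
    """Block when any category's severity meets or exceeds its configured threshold."""
    return any(
        item["severity"] >= thresholds.get(item["category"], 2)
        for item in categories_analysis
    )

# A gaming platform that tolerates medium-severity violence, per Bird's example,
# could raise only the Violence threshold and leave the other categories strict:
game_thresholds = {**DEFAULT_THRESHOLDS, "Violence": 5}
```

Raising a single category's threshold mirrors the context-specific tuning the article describes: the same phrase would be blocked under a strict default policy and tolerated under a gaming-oriented one.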
Microsoft launches new AI tool to moderate text and images
2023-05-23 22:09:45