Microsoft says it did a lot for responsible AI in inaugural transparency report

Microsoft logo. Illustration: The Verge

A new report from Microsoft outlines the steps the company took to release responsible AI platforms last year.

In its Responsible AI Transparency Report, which mainly covers 2023, Microsoft touts its achievements around safely deploying AI products. The annual AI transparency report is one of the commitments the company made after signing a voluntary agreement with the White House in July last year. Microsoft and other companies promised to establish responsible AI systems and commit to safety.

Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate safety risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections, where the malicious instructions are part of data ingested by the AI model.
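For a rough sense of what that customer-facing tooling looks like, here is a minimal sketch using Microsoft's azure-ai-contentsafety Python SDK to screen a string of text across the service's harm categories. The endpoint and key values are placeholders for a resource you'd provision yourself, and this is just one way to call the service, not a workflow the report itself prescribes.

```python
# Minimal sketch: screening text with Azure AI Content Safety.
# Assumes `pip install azure-ai-contentsafety` and a provisioned
# Content Safety resource; ENDPOINT and KEY below are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-content-safety-key>"  # placeholder

client = ContentSafetyClient(ENDPOINT, AzureKeyCredential(KEY))

# Analyze one piece of text; the service scores it against its harm
# categories (hate, sexual, self-harm, violence), each with a severity level.
result = client.analyze_text(AnalyzeTextOptions(text="Text to screen goes here."))

for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```

An application would typically run checks like this on user input or model output and block or flag anything above a severity threshold it chooses.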

It’s also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

However, its red-teaming units have their work cut out for them. The company’s AI rollouts have not been immune to controversies.

When Bing AI first rolled out in February 2023, users found the chatbot confidently stating incorrect facts and, at one point, teaching people ethnic slurs. In October, users of the Bing image generator found they could use the platform to generate photos of Mario (or other popular characters) flying a plane to the Twin Towers. Deepfaked nude images of celebrities like Taylor Swift made the rounds on X in January, which reportedly came from a group sharing images made with Microsoft Designer. Microsoft ended up closing the loophole that allowed for those pictures to be generated. At the time, Microsoft CEO Satya Nadella said the images were “alarming and terrible.”

Natasha Crampton, chief responsible AI officer at Microsoft, says in an email sent to The Verge that the company understands AI is still a work in progress and so is responsible AI.

“Responsible AI has no finish line, so we’ll never consider our work under the Voluntary AI commitments done. But we have made strong progress since signing them and look forward to building on our momentum this year,” Crampton says.
