Microsoft announced it will phase out access to a number of its artificial intelligence-powered facial recognition tools. The changes apply to some of its most controversial tech, including a product that seeks to identify the emotions people exhibit based on videos and images.
The company has released an update to its ‘Responsible AI Standard’ that explains its goals for equitable and trustworthy AI. To meet these standards, Microsoft will limit access to the facial recognition tools available through its Azure Face API, Computer Vision, and Video Indexer services.
Microsoft said that new users will no longer have access to certain features, while existing customers must stop using them by the end of the year. Sarah Bird, Principal Group Product Manager for Azure AI at Microsoft, explained the changes in a blog post: “By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition.”
The changes will “ensure the use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit”. Bird added that “this includes introducing use case and customer eligibility requirements to gain access to these services.”
Realizing the risks in trying to interpret emotion
The company will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.
“In the case of emotion classification, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics”, Bird said.
“API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused — including subjecting people to stereotyping, discrimination, or unfair denial of services.”