Microsoft is launching a series of AI projects designed to make AI more accessible to people with disabilities. The initiative addresses what the company calls a ‘data desert’: machine learning algorithms have lacked adequate, relevant data to serve people living with conditions such as ALS.
One of the projects is Object Recognition for Blind Image Training, or ORBIT. Its task is to build a new public dataset of videos submitted by people who are blind or have impaired vision.
The parties involved
The data will then be used to build algorithms for smartphone cameras that can recognize commonly used personal objects, such as a face mask or a phone. Microsoft is also joining forces with Team Gleason, an organization supporting people with ALS, to create an open dataset of facial imagery of people living with the disease.
That data will feed computer vision and machine learning algorithms that can recognize people with symptoms of the neurodegenerative disease.
A third project, VizWiz, is developing a public dataset to train, validate, and test image-captioning algorithms.
Inclusivity is the endgame
The team at VizWiz is building algorithms that recognize when a submitted image is unclear and then suggest how to retake it. Together, these initiatives tackle a blind spot in mainstream algorithms, which do not necessarily serve people with disabilities.
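As a rough illustration of how “unclear image” detection can work in general (this is a common sharpness heuristic, not VizWiz’s actual method), the variance of an image’s Laplacian response is often used: sharp images produce strong edge responses with high variance, while blurry or featureless ones do not. A minimal sketch using only NumPy:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Sharpness heuristic: variance of the 4-neighbour Laplacian.
    Low values suggest a blurry or featureless (unclear) image."""
    # Laplacian via array shifts, so no external imaging library is needed
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# Synthetic comparison: a high-contrast checkerboard vs. a flat grey image
sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)  # checkerboard
blurry = np.full((64, 64), sharp.mean())                      # no detail at all

print(laplacian_variance(sharp) > laplacian_variance(blurry))
```

A real system would compare the score against a threshold tuned on labeled images and, if it falls below, prompt the user to retake the photo.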
Such oversights could cause real harm if, for instance, a self-driving car fails to identify someone in a wheelchair as a person it should avoid.
They could also lead predictive hiring systems to rank candidates with disabilities lower simply because they differ from pre-defined notions of what a successful employee looks like.