Microsoft's Seeing AI app, which lets blind and low-vision people convert visual information into audio feedback, just got a useful new feature: it now lets users explore the objects and people in photos by touch.
It's powered by machine learning, of course, specifically object and scene recognition. All you need to do is take a photo, or open one in the viewer, and tap anywhere on it. "This new feature enables users to tap their finger to an image on a touch-screen to hear a description of objects within an image and the spatial relationship between them," Seeing AI lead Saqib Shaikh wrote in a blog post. "The app can even describe the physical appearance of people and predict their mood."
Because facial recognition is built in, you can take a picture of a friend and hear a description of everything in the image, from what your friend is doing to whether there's a dog in the shot. The app now lets users tap around to find where objects are, which is obviously important for understanding the picture or recognizing it from before. Details that may not have made it into the overall description, such as flowers in the foreground or a movie poster in the background, may also surface on closer inspection.
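Under the hood, tap-to-explore boils down to matching a touch point against the bounding boxes an object detector produces. Here's a minimal sketch of that idea, not Microsoft's actual implementation: the `Detection` class and the sample photo data are hypothetical stand-ins for real detector output.

```python
# Minimal sketch (assumed, not Seeing AI's real code): hit-test a tap
# against object-detection bounding boxes and describe what was tapped,
# plus its spatial relation to the other detected objects.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    x: float  # left edge of bounding box, in pixels
    y: float  # top edge
    w: float  # width
    h: float  # height

    def contains(self, px: float, py: float) -> bool:
        # True if the tap point falls inside this bounding box.
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    @property
    def center(self) -> tuple:
        return (self.x + self.w / 2, self.y + self.h / 2)


def describe_tap(detections: list, px: float, py: float) -> str:
    """Build a spoken-style description for a tap at (px, py)."""
    hit = next((d for d in detections if d.contains(px, py)), None)
    if hit is None:
        return "Nothing recognized here."
    parts = [hit.label]
    for other in detections:
        if other is hit:
            continue
        # Compare box centers to phrase a simple left/right relation.
        side = "right" if other.center[0] > hit.center[0] else "left"
        parts.append(f"{other.label} to the {side}")
    return ", ".join(parts)


# Hypothetical detector output for a photo of a person with a dog.
photo = [
    Detection("person", x=100, y=50, w=200, h=400),
    Detection("dog", x=350, y=300, w=150, h=150),
]

print(describe_tap(photo, 180, 200))  # tap on the person: "person, dog to the right"
```

A real app would feed the tapped description to a screen reader and handle up/down relations and overlapping boxes too, but the hit-test-then-describe loop is the core of the interaction.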
The app also now supports the iPad, which should be welcome news, since many people rely on Apple's tablet as their primary interface for media and interactions. You can now even place orders in the app.
So if you know anyone who is blind or visually impaired, I suggest pointing them to the app to help them keep interacting with tech.