Tactile sensory substitution of the kind you describe was first developed in the late 1960s in the form of a chair that translated TV camera images into vibrating "pixels" on your back. That work has since been refined into the BrainPort, an FDA-approved device linking a webcam to a small array of electrodes worn on the tongue.
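To make the mapping concrete, the camera-to-tactile translation these devices perform can be sketched as a simple downsampling step: average the image into a coarse grid, one value per vibrating or electro-tactile "pixel." This is a hypothetical illustration (function name and grid size are made up); real devices also handle contrast, framing, and refresh rate.

```python
def image_to_tactile_grid(pixels, grid_w=20, grid_h=20):
    """Downsample a grayscale image (2D list of 0-255 values) into a
    coarse grid of stimulation intensities, one per tactile 'pixel'.
    Illustrative sketch only, not any real device's pipeline."""
    h, w = len(pixels), len(pixels[0])
    grid = []
    for gy in range(grid_h):
        row = []
        y0, y1 = gy * h // grid_h, (gy + 1) * h // grid_h
        for gx in range(grid_w):
            x0, x1 = gx * w // grid_w, (gx + 1) * w // grid_w
            # Mean brightness of this image block drives one tactile element.
            block = [pixels[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(block) // len(block))
        grid.append(row)
    return grid
```

Even at 20x20, that is only 400 "pixels" of monochrome information, which is why users describe the resolution as crude compared to sight.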
> has this been done before? If not, what are the drawbacks, and what's stopping it from becoming the norm?
People don't want to walk around with a thing in their mouth, and the resolution is very poor compared to typical vision: no color, no depth information. A few devices do fairly well at their narrow task, but cost is often prohibitive and they aren't always covered by insurance. As another commenter said, there is also a family of auditory assistive aids that translate a visual scene into some form of soundscape.
> something like one of those machine learning devices that reads off objects that it sees, describing what they are seeing to the user?
Your general idea is sound; machine learning is, I think, poised to make a really big impact in assistive tech. But accuracy and interface are nontrivial challenges. Consider a typical scene: it contains hundreds of objects. Would it be useful to have a device continuously rattling off a list of everything it sees? And the best AIs do *okay* at identifying objects, not spectacularly. If one object in ten were identified incorrectly, it wouldn't take long before one of those errors was serious. Still, there are already vision-based devices that identify products and read signage using AI. One example, $1. And more generally, accessibility features built into smartphones, like iOS's VoiceOver and Android's TalkBack, have been game changers: they open up all the information apps give to any smartphone user, rather than requiring separate dedicated devices.
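One common way to keep such a device from rattling off everything it sees (and from voicing its mistakes) is to announce only a few high-confidence detections per frame. A minimal sketch, with made-up thresholds, assuming the detector hands back `(label, confidence)` pairs:

```python
def announce(detections, min_conf=0.8, max_items=3):
    """Filter raw detector output down to a short, speakable list.
    Keeps only detections at or above min_conf, then returns the
    top max_items labels by confidence. Thresholds are illustrative."""
    confident = [d for d in detections if d[1] >= min_conf]
    confident.sort(key=lambda d: d[1], reverse=True)
    return [label for label, _ in confident[:max_items]]
```

The trade-off is the usual one: raising `min_conf` suppresses wrong announcements but also drops real objects the detector was merely unsure about.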
>What if it was a series of braille symbols that just had the 10 most important objects in vision, where you could request if objects are visible by asking it, through speech?
What are those 10 most important objects? And are you just asking out loud all the time, "Is there a toilet in front of me?" But translating computer-vision output into braille is an idea people are exploring.
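As a sketch of how "the 10 most important objects" might be chosen, one could rank detections by a score combining confidence, on-screen size, and closeness to the frame center, and answer a spoken query by checking the current detection list. All names, weights, and the detection format here are illustrative assumptions, not any shipping product's logic:

```python
def importance(det, frame_w=640, frame_h=480):
    """Score a detection dict {'label', 'conf', 'box': (x, y, w, h)}.
    Bigger, more central, more confident detections score higher.
    The weights are made up for illustration."""
    x, y, w, h = det["box"]
    area = (w * h) / (frame_w * frame_h)
    cx, cy = x + w / 2, y + h / 2
    off_center = abs(cx - frame_w / 2) / frame_w + abs(cy - frame_h / 2) / frame_h
    return det["conf"] * (0.5 + area) * (1.5 - off_center)

def top_objects(detections, n=10):
    """The n highest-importance labels, e.g. for a braille readout."""
    return [d["label"] for d in sorted(detections, key=importance, reverse=True)[:n]]

def is_visible(detections, query_label, min_conf=0.6):
    """Answer a spoken query like 'Is there a toilet in front of me?'"""
    return any(d["label"] == query_label and d["conf"] >= min_conf
               for d in detections)
```

Any such ranking bakes in guesses about what the user cares about, which is exactly why the interface question is as hard as the recognition question.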
There is also a burgeoning market of non-machine vision in the form of apps and services that put human eyes at a blind user's disposal. Check out Aira and BeMyEyes, which use remote human workers and volunteers to guide a blind user. Recently, a blind guy ran the $1.