
Blind and Visually Impaired Community

Full History - 2018-11-21 - ID#9z5f54
13
Has anyone considered setting up a camera that sends a black-and-white image to a series of haptic feedback devices over a patch of skin to get a "low-res picture" and thereby become able to see, at least to a limited extent? Is this possible? (self.Blind)
submitted by nyxeka
I know the brain can be trained to do some pretty crazy things - has this been done before? If not, what are the drawbacks, and what's stopping it from becoming the norm?

What about something like one of those machine learning devices that reads off objects it sees, describing them to the user? What if it was a series of braille symbols that just showed the 10 most important objects in view, where you could ask it by speech whether a particular object is visible?
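A minimal sketch of the mapping the post describes, assuming a 20x20 grid of tactile "pixels" and a grayscale frame already captured from a camera; the grid size and the 0.0-1.0 drive level per nub are illustrative assumptions, not any particular device's interface.

```python
import numpy as np

GRID = 20  # hypothetical 20x20 array of tactile "pixels" (nubs/electrodes)

def frame_to_haptic_levels(frame: np.ndarray, grid: int = GRID) -> np.ndarray:
    """Downsample a grayscale camera frame to the haptic grid by block-averaging,
    then scale luminance (0-255) to a 0.0-1.0 drive level for each tactor."""
    h, w = frame.shape
    # Crop so the frame divides evenly into grid x grid blocks.
    frame = frame[: h - h % grid, : w - w % grid]
    bh, bw = frame.shape[0] // grid, frame.shape[1] // grid
    blocks = frame.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    return blocks / 255.0  # 0.0 = black (no pressure), 1.0 = white (max pressure)

# Usage with a synthetic 480x640 frame; a real device would read frames from its camera.
if __name__ == "__main__":
    fake_frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    levels = frame_to_haptic_levels(fake_frame)
    print(levels.shape)  # (20, 20), one drive level per nub
```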
Amonwilde 8 points 4y ago
Check out seeingwithsound and the vOICe: https://www.seeingwithsound.com/

What you're looking for is sensory substitution. People have figured out a few versions of this, like a device you put on your tongue.

https://www.scientificamerican.com/article/device-lets-blind-see-with-tongues/
nyxeka [OP] 3 points 4y ago
sweet, yeah I just found that one
modulus 3 points 4y ago
There's also this: https://en.wikipedia.org/wiki/Optacon

It has a camera you slide over text while you place your finger in a slot. It reproduces the text through a series of vibrating pins. It's a bit weird and you have to get used to it, but people who trained on them found them very useful for accessing printed material.
nyxeka [OP] 1 points 4y ago
It definitely looks cool, though I think there might be better options using machine learning, i.e. scanning pages even when your head is moving, if you have a camera anywhere that can see them. I'm not blind, but I've used lots of text-to-speech for reading with moon-reader on Android, and it was a godsend back when I was doing crappy minimum-wage labour jobs for the first couple of years of college.
itisisidneyfeldman 2 points 4y ago
Tactile sensory substitution of the kind you describe was first developed in the late 1960s in the form of a chair that would translate TV camera images into vibrating "pixels" on your back. That has now been refined into the BrainPort, an FDA-approved device linking a webcam to a little patch of $1.
> has this been done before? If not, what are the drawbacks, and what's stopping it from becoming the norm?

People don't want to walk around with a thing in their mouth, and the resolution is very poor if you're comparing it to typical vision. There is no color and no depth information. Also, a few devices do fairly well at their narrow task, but cost is often prohibitive and the devices aren't always covered by insurance. As another commenter said, there is also a family of auditory assistive aids that translates a visual scene to some form of auditory soundscape.

> something like one of those machine learning devices that reads off objects that it sees, describing what they are seeing to the user?

Your general idea is sound; machine learning is, I think, poised to make a really big impact in assistive tech. But accuracy and interface are nontrivial challenges. Consider a typical scene: it contains hundreds of objects. Would it be useful to have a device rattling off, continuously, a list of everything it sees? And the best AIs do...*okay* at identifying objects. Not spectacular. If you misidentified every 10th object out in the wild, it wouldn't take long before one of those errors became serious. But incrementally, there are vision-based devices that identify products and read signage using AI. One example, $1. And more generally, accessibility features on smartphones, like iOS's VoiceOver and Android's TalkBack, have been game changers in opening up all the information that apps give to any smartphone user, rather than requiring separate dedicated devices.
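A sketch of one way to keep such a device from continuously rattling off everything, assuming an upstream object detector that already returns (label, confidence) pairs for each frame; the detector, the thresholds, and the "already announced" memory are placeholder assumptions rather than any shipping product's behavior.

```python
from typing import Iterable

MIN_CONFIDENCE = 0.6   # drop low-confidence guesses rather than announce them
MAX_ANNOUNCED = 3      # cap how many objects are spoken per frame

def pick_announcements(
    detections: Iterable[tuple[str, float]],
    previously_announced: set[str],
) -> list[str]:
    """Keep only confident detections, take the few most confident,
    and skip labels the user has already heard."""
    confident = [(label, conf) for label, conf in detections if conf >= MIN_CONFIDENCE]
    confident.sort(key=lambda d: d[1], reverse=True)
    fresh = [label for label, _ in confident[:MAX_ANNOUNCED]
             if label not in previously_announced]
    previously_announced.update(fresh)
    return fresh

# Example: pretend the detector saw these in one frame.
heard: set[str] = set()
frame_detections = [("door", 0.91), ("chair", 0.55), ("person", 0.84), ("cup", 0.78)]
print(pick_announcements(frame_detections, heard))  # ['door', 'person', 'cup']
print(pick_announcements(frame_detections, heard))  # [] (already announced)
```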

>What if it was a series of braille symbols that just showed the 10 most important objects in view, where you could ask it by speech whether a particular object is visible?

What are those 10 most important objects? And are you just asking out loud all the time, "Is there a toilet in front of me?" But computer vision translation to braille is an idea people are exploring. $1.
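On the braille side, Unicode reserves a block of braille patterns starting at U+2800, where each of the six standard dots maps to one bit. A minimal sketch of turning a detected label into braille cells, using a partial Grade 1 letter table, might look like this; a real refreshable display would take raw dot patterns through its own driver rather than Unicode text.

```python
# Dot numbers (1-6) for a handful of Grade 1 braille letters; a full table has 26 entries.
LETTER_DOTS = {
    "d": (1, 4, 5), "o": (1, 3, 5), "g": (1, 2, 4, 5),
    "c": (1, 4), "u": (1, 3, 6), "p": (1, 2, 3, 4),
}

def to_braille(word: str) -> str:
    """Map each letter to its Unicode braille pattern: U+2800 plus one bit per raised dot."""
    cells = []
    for ch in word.lower():
        dots = LETTER_DOTS[ch]  # raises KeyError for letters not in this partial table
        cells.append(chr(0x2800 + sum(1 << (d - 1) for d in dots)))
    return "".join(cells)

print(to_braille("dog"))  # ⠙⠕⠛
print(to_braille("cup"))  # ⠉⠥⠏
```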

There is also a burgeoning market of non-machine vision in the form of apps and services that make human eyes available. Check out Aira and BeMyEyes, which use human volunteers or workers to guide a blind user remotely. Recently, a blind guy ran the $1.
nyxeka [OP] 2 points 4y ago
Thank you for the descriptive answer!
More response coming when I get back to my computer.
Marconius 1 points 4y ago
There is a device I tried out once called the BrainPort. It has a camera system on a pair of glasses connected to a big controller/interpreter device with a postage-stamp-sized array that sits on your tongue to transmit the feeling of the luminance values from the camera. It was horribly impractical, tasted like licking a battery, made me drool all over the place, and took way too long to get anything useful out of it. And the whole thing cost over $15,000! I was really annoyed by it; obviously a lot of work went into creating it, but no thought went into actual usability and practicality.
KillerLag 1 points 4y ago
A few years ago, my wife helped with testing a product called the BrainPort, which did something very similar, but it used electric stimulation on the tongue.

https://www.scientificamerican.com/article/device-lets-blind-see-with-tongues/

Some people got good enough to identify a set of pre-defined objects (banana, cup) and a set of signs. However, one important caveat: people were looking at a set number of items (four) against a high-contrast background, with no interference (a white mug or a plastic yellow banana against a black background). In real-world situations, there was too much interference from the background to make it usable. Also, because there was only a limited number of items to choose from, it didn't work well if you had to look at something that wasn't one of those items.

It is possible things have improved greatly, though. The design of the device now looks a lot sleeker, and it's been in development for many years since my wife worked on the project.

https://www.wicab.com/
AllHarlowsEve 1 points 4y ago
Here's the thing about those: they tend to be awkward, expensive, and not incredibly useful.

Imagine going outside and asking every 30 seconds, "Is there a bench?" Eventually, strangers would assume that you're asking them and guide you to a bench while the device tries to rattle off things it sees. As for detail, like raising an image onto your skin: imagine a very low-res image, and that's about all the detail you'd be able to feel. It'd have no depth, and you'd probably get more false positives than actual help.
nyxeka [OP] 1 points 4y ago
ok, for sure, that makes sense.

How about the haptic feedback thing, though?

Having a video camera on your head, or wherever, connected to a vast array of little nubs stretched out over a patch of skin on your body somewhere. These would create a bit of pressure depending on the luminosity of each pixel - i.e. turn a normal image into black and white and just beam that directly onto your forehead.

I also think it would be very possible to train an AI to know what's 'relevant', similar to how they train self-driving cars. When you go out, it might look for specific things: if you're walking out onto a road, it will let you know; if there's a loose dog, it may tell you. In an emergency, it may know to tell you to contact someone who can help, and live-stream what you are seeing to them so that they can assist you and tell you what you need to do.

Same thing for people on bikes. Maybe an old lady suddenly falls over nearby.

Now that I think about it, I wonder if there's a business out there that provides a service like that - it would be costly, for sure. It would require one person to man a desk for up to 8 hours a day, and they'd have to be attentive and get breaks. That's at least $150-$200 a day, plus another $100-$200 a month in data charges for constant internet service.

Maybe you could pay for a few hours at a time, say, when you've no choice but to go out alone on a busy day. I could see that costing maybe $20-50, but it could be covered by insurance someday, on-demand using a cellphone, stuff like that...
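Rough back-of-the-envelope numbers using only the figures above: one attentive operator at $150-$200 a day plus $100-$200 a month in data, against users paying $20-$50 per outing. The 22 working days and the break-even framing are illustrative assumptions, not quotes from any real service.

```python
# Provider-side monthly cost from the figures in the post (assuming ~22 working days).
daily_staff_cost = (150, 200)    # USD per day for one attentive operator
monthly_data_cost = (100, 200)   # USD per month for constant connectivity
working_days = 22                # illustrative assumption

monthly_low = daily_staff_cost[0] * working_days + monthly_data_cost[0]
monthly_high = daily_staff_cost[1] * working_days + monthly_data_cost[1]
print(monthly_low, monthly_high)  # 3400 4600

# Paid sessions needed to break even if users pay $20-$50 per outing.
print(monthly_high // 20, monthly_low // 50)  # 230 at $20 ... 68 at $50
```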

AllHarlowsEve 1 points 4y ago
You still have the issue of how much sensitivity you have in the area, whether on your forehead or on your arm, plus it'd be goofy-looking as hell. Not all blind people would care, but I'd care about looking like a minion from Despicable Me.

As far as the camera and support person go, most blind people can live independently without a helper unless they have other issues, but for things like new areas or weird construction changes, i.e. putting a dumpster on the sidewalk in front of an entryway, strips of siding stacked on the ground, etc., there's Aira, which uses Google Glass and a hotspot to connect to a worker who can look through your camera. It's still suuuuper expensive, though. They're gonna be rolling out a cheaper version, from what I know, but at this point apps like Be My Eyes or BeSpecular are better.
[deleted] 1 points 4y ago
[deleted]
nyxeka [OP] 1 points 4y ago
maybe something more like $1 or $1 haha

No, but for real, it could just cover up most of the face, since vision isn't an issue (at least for those who are completely blind).
TwistyTurret 1 points 4y ago
My boyfriend and I have fiddled with Microsoft Soundscape a bit and it’s got potential. It’s a free phone app.
vwlsmssng 1 points 4y ago
> My boyfriend and I have fiddled with Microsoft Soundscape a bit and it’s got potential

The mind boggles!