
Blind and Visually Impaired Community

Full History - 2015-09-14 - ID#3kxjim
3
Waterfall Soundscape? (self.Blind)
submitted 7y ago by EngineerVsMBA
Inventor here.

Is there a product or game or tech demo or concept work that takes a 3d environment and attempts to translate it to sound?
fastfinge 3 points
Are you thinking of something like The vOICe?

According to them:
> The vOICe implements a form of sensory substitution where the goal is to bind visual input to visual qualia with a minimum of training time and effort, and improve quality of life (QoL) for blind users.

I've never tried it myself, because it just seems like a horrifying compilation of jargon, hype, and buzzword bingo. And the software is oddball and slightly unfriendly, just like the website. Uh, OK, looks like this explanation from the download page (Why is it on the download page, and not the front page or the about page?) is somewhat more coherent:
> How does it work? There are three simple rules in the general image to sound mapping of greyscale camera images, each rule dealing with one fundamental aspect of vision: rule 1 concerns left and right, rule 2 concerns up and down, and rule 3 concerns dark and light. The actual rules of the game are
>
> Left and Right.
>
> Video is sounded in a left to right scanning order, by default at a rate of one image snapshot per second. You will hear the stereo sound pan from left to right correspondingly. Hearing some sound on your left or right thus means having a corresponding visual pattern on your left or right, respectively.
>
> Up and Down.
>
> During every scan, pitch means elevation: the higher the pitch, the higher the position of the visual pattern. Consequently, if the pitch goes up or down, you have a rising or falling visual pattern, respectively.
>
> Dark and Light.
>
> Loudness means brightness: the louder the brighter. Consequently, silence means black, and a loud sound means white, and anything in between is a shade of grey.
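
For readers who want to see what those three rules amount to in practice, here is a minimal sketch of such a greyscale-to-stereo mapping in Python. It is not the vOICe's actual code; the pitch range, the one-second scan rate, and the NumPy sine synthesis are all assumptions for illustration.

```python
# Illustration of the three quoted rules, not the vOICe itself:
# rule 1: columns are scanned left to right over one second, panning with them;
# rule 2: each row maps to a sine tone whose pitch rises with height;
# rule 3: pixel brightness sets the loudness of that row's tone.
import numpy as np

SAMPLE_RATE = 44100
SCAN_SECONDS = 1.0              # one image snapshot per second, per the quote
F_LOW, F_HIGH = 500.0, 5000.0   # assumed pitch range for bottom/top rows

def sonify(image: np.ndarray) -> np.ndarray:
    """image: 2-D greyscale array, 0.0 = black .. 1.0 = white.
    Returns stereo samples with shape (n_samples, 2)."""
    rows, cols = image.shape
    col_len = int(SAMPLE_RATE * SCAN_SECONDS / cols)
    t = np.arange(col_len) / SAMPLE_RATE
    freqs = np.linspace(F_HIGH, F_LOW, rows)      # row 0 (top) gets the highest pitch
    out = np.zeros((col_len * cols, 2))
    for c in range(cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)    # (rows, col_len)
        mono = (image[:, c][:, None] * tones).sum(axis=0) / rows
        pan = c / max(cols - 1, 1)                # 0.0 = hard left, 1.0 = hard right
        seg = slice(c * col_len, (c + 1) * col_len)
        out[seg, 0] = mono * (1.0 - pan)          # left channel
        out[seg, 1] = mono * pan                  # right channel
    return out
```

Writing `out` to a WAV file (for example with `scipy.io.wavfile.write`, after scaling to int16) yields a one-second soundscape per image frame.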
EngineerVsMBA [OP] 1 points
Yes! Exactly like this. Theirs is based on color; I was thinking of using depth.
fastfinge 1 points
Hmmm. I'm not sure how useful that would be. If I'm understanding the software I'm linking to correctly, I could, for example, print out a graph, hold it in front of the camera, and hear the data, because it's based on colour. A system based on depth wouldn't offer that functionality, because things printed on paper don't have any depth. And things that do have depth, we can already get an idea of based on echolocation. Maybe it's because I was born blind, so I just lack imagination, but other than reading graphs, I'm really struggling to find a use for either kind of system. Don't discount the lack of imagination thing, though; as an engineer/programmer, you probably already know that half the time, the user has no idea what he wants. Now consider that you're trying to invent a system to replace a sense that I don't have, and have never had any experience with. My opinions are...probably not ideal for your purposes.
EngineerVsMBA [OP] 1 points
If I understand what you mean by echolocation, you can identify objects that make noise. I hope to help you identify objects that do not make noise, by creating a noise profile for each object. For example, a street lamp will sound different from a street sign, which will sound different from a tree.

The only time this would be useful is when you are out and about.
fastfinge 1 points
No, that's not quite what I mean by echolocation. When I'm walking, I tap my cane, and I can hear how the sound bounces off of objects around me (street signs, trees, etc.). Based on how they reflect sound, I can generally tell how large an object is, and how far away it is from me, even if I can't tell exactly what it is, or exactly what shape it is. When I'm standing still, and/or don't have my cane, I make a clicking sound with my tongue that gives me the same information. So when I'm walking down the sidewalk, tapping my cane, I can hear the sound reflecting off of street signs and trees just fine. Or if I'm sitting down somewhere, I can click my tongue and get a vague idea of the size of the room I'm in, and what's near me. I can generally differentiate between wood objects, cloth objects, and metal objects. So for your system to be useful to me, it would need to give me more information than the basic size, shape, distance, and material of nearby objects. If your system tells me there's a "large wooden object 6 feet in front of me", I already knew I was walking towards a wall from the sound of my cane tapping, so it hasn't told me anything new. But perhaps the system you're imagining could give me nuances I'm not thinking about?
urethanerush 2 points
Hi /u/EngineerVsMBA

I saw this post through Peter's twitter feed. Relevant to the discussion, I combined the Kinect with the vOICe to give detailed distance and shape information to blind individuals. My work was featured on the BBC here: https://www.youtube.com/watch?v=330C0VFcOZc

You could try to get a program that creates a greyscale Kinect depth map and have the vOICe read the image using the 'sonify active GUI' command (a rough sketch of that depth-to-greyscale step follows this comment).

We've created a few new space and colour devices since then. You can find more about our work here: http://www.sussex.ac.uk/rmphillips/research/seeingthroughsound

All the best,

Giles
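
A rough sketch of the depth-to-greyscale step Giles describes, under some assumptions: how the depth frame is captured is left to whatever Kinect driver you use (libfreenect, the official SDK, etc.), and the 0.5 m to 4 m clipping range is just a plausible working range for the original Kinect.

```python
# Turn a Kinect-style depth frame (distances in millimetres) into a greyscale
# image that a brightness-driven sonifier like the vOICe could read.
import numpy as np
from PIL import Image

NEAR_MM, FAR_MM = 500, 4000   # assumed near/far clipping range, in millimetres

def depth_to_greyscale(depth_mm: np.ndarray) -> Image.Image:
    """Map near objects to white and far objects to black."""
    clipped = np.clip(depth_mm.astype(np.float32), NEAR_MM, FAR_MM)
    brightness = 1.0 - (clipped - NEAR_MM) / (FAR_MM - NEAR_MM)
    return Image.fromarray((brightness * 255).astype(np.uint8), mode="L")

# Synthetic data standing in for a real 640x480 depth frame:
fake_depth = np.random.randint(NEAR_MM, FAR_MM, size=(480, 640))
depth_to_greyscale(fake_depth).save("depth_frame.png")
```

The inverted mapping (near = white) means that, in a loudness-equals-brightness scheme like the one quoted above, closer obstacles come out louder.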
Nighthawk321 1 points
I mean, there are 3D recordings, but otherwise, no. What are you thinking of making?
EngineerVsMBA [OP] 1 points
Taking something similar to the Kinect and outputting sound waves that represent distances. It would be similar to the "superhero" powers that Daredevil had, except he required sound echoes; I was thinking of using light and converting that into an audio representation. It is the kind of thing that the casual user will call pure noise, but to a trained ear it will provide all of the depth information you would need to navigate the world around you without eyesight.
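
One way to read that idea, purely as an assumption rather than the OP's actual design, is to skip the image step and sonify a horizontal slice of the depth frame directly: each column gets its own tone, nearer objects are louder, and the column's position sets both the stereo pan and the pitch so that columns stay distinguishable.

```python
# Hypothetical direct depth-to-audio mapping (not the OP's design):
# each column of a depth scanline becomes a tone; nearness sets loudness,
# horizontal position sets both stereo pan and pitch.
import numpy as np

SAMPLE_RATE = 44100
NEAR_MM, FAR_MM = 500, 4000        # assumed working range in millimetres
F_LOW, F_HIGH = 300.0, 3000.0      # assumed pitch range, left to right

def sonify_depth_row(depth_row_mm: np.ndarray, seconds: float = 1.0) -> np.ndarray:
    """depth_row_mm: 1-D array of distances across the field of view.
    Returns stereo samples with shape (n_samples, 2)."""
    cols = len(depth_row_mm)
    clipped = np.clip(depth_row_mm.astype(np.float32), NEAR_MM, FAR_MM)
    loudness = 1.0 - (clipped - NEAR_MM) / (FAR_MM - NEAR_MM)   # near = loud
    pan = np.linspace(0.0, 1.0, cols)                           # left .. right
    freqs = np.linspace(F_LOW, F_HIGH, cols)
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    tones = np.sin(2 * np.pi * freqs[:, None] * t)              # (cols, n_samples)
    left = ((loudness * (1.0 - pan))[:, None] * tones).sum(axis=0)
    right = ((loudness * pan)[:, None] * tones).sum(axis=0)
    return np.stack([left, right], axis=1) / cols
```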
Nighthawk321 1 points
Hmm, sounds very interesting, actually. But it's worth noting that such a device would have to be proven again and again to work before it would be trusted by the general public, haha.
EngineerVsMBA [OP] 1 points
There are a couple of great ways to prototype this before even building a mechanical prototype, so I would be surprised if there wasn't at least a software demo of this type of thing.
Nighthawk321 1 points
Software demo. So do you mean like, you open the program/app, put headphones on, and are able to turn and move in the environment?