We are using machine learning to help visually impaired people cross streets more safely. We would love to hear thoughts from the Reddit community on the design of our solution, to see if this is something that would be helpful.
## Background
We're a team of five computer science seniors at Dartmouth College working on a capstone project. Last quarter, we spent several weeks choosing a problem to solve with technology over the following two quarters. During user research, we talked to a friend who is visually impaired, and she told us about a persistent problem in her experience: knowing when it is safe to cross a street (guide dogs are not trained for this). We decided to build technology that helps visually impaired people cross traffic intersections safely.
Just to clarify, this project serves as a proof of concept. We have no intentions of releasing our final product to the public since it would be insanely dangerous for technology to guide people into traffic.
We have been working on this project for the past one and a half quarters, and we have about a month left to finish.
## Implementation
We want to help visually impaired users cross traffic intersections through a mobile app:
* The user holds up their phone camera at intersections and crosswalks, and the photos are fed to an on-device machine learning model that determines whether it is safe to cross the street.
* Additionally, we have soldered together a haptic-feedback belt that guides the wearer in the direction they should walk once they enter a destination in the mobile app. For example, if the user has to turn left, the belt buzzes on the left side of their hip.
* Finally, in an ideal world, every car would carry an OBD ("On-Board Diagnostics") dongle, a hardware device that plugs into the car's computer and streams the car's geo-coordinates, speed, and heading to a cloud server. With this data, we could alert visually impaired users when a car is approaching them.
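To make the OBD idea concrete, here is a rough sketch of how the cloud server's "car approaching" check might work. The field names, the 100-meter radius, and the 30-degree heading cone are illustrative assumptions, not our actual code:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def car_is_approaching(car, user, warn_radius_m=100.0):
    """Return True if a reported car is inside the warning radius, is
    actually moving, and is headed roughly toward the user's position.

    `car` is a dict like {"lat", "lon", "speed_mps", "heading_deg"}
    (the kind of fields an OBD unit might report); `user` has {"lat", "lon"}.
    All names and thresholds here are illustrative, not production values.
    """
    dist = haversine_m(car["lat"], car["lon"], user["lat"], user["lon"])
    if dist > warn_radius_m or car["speed_mps"] < 1.0:
        return False
    # Bearing from the car to the user, 0-360 degrees clockwise from north.
    p1, p2 = math.radians(car["lat"]), math.radians(user["lat"])
    dl = math.radians(user["lon"] - car["lon"])
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    # "Approaching" = the car's heading is within 30 degrees of that bearing.
    diff = abs((car["heading_deg"] - bearing + 180.0) % 360.0 - 180.0)
    return diff < 30.0
```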
## Thoughts
None of the five members of our group are visually impaired, so we wanted to reach out and see whether members of this community have questions, comments, or concerns about our current implementation. If you were to create technology to help people cross the street safely, how would you solve this problem?
pokersnek · 4 points · 4y ago
Hi there.
Unfortunately, you’re not going to get much response from users here. There has been a flood of college or high school project groups asking for feedback (or outright asking r/blind to do the project for them). However, I think yours is way more thought out than most, so I’d like to help.
I am a Certified Orientation and Mobility Specialist. I teach blind people how to travel, including street crossings. The way we teach is that the person has to listen to the movement of vehicles to determine when it is safest to cross. That would be with the traffic surge on your parallel, and when there are no vehicles turning. It’s an interesting skill to master.
Feel free to AMA.
KillerLag · 3 points · 4y ago
Also an O&M Instructor. As /u/pokersnek mentioned, any delay would cause an issue. At the speed cars move, by the time a picture is taken and processed, the cars have already moved to a different location.
We often teach people to listen for the Surge (when the light turns from red to green, and all the cars move ahead) for when the lights change. However (at least in Canada), there is a 2 to 4 second time window when all the traffic lights are red, to help clear traffic. If someone was relying on the app, it may assume the traffic is safe to cross, but the lights are actually just about to change.
Would this work on something like bicycles? I would say bicycles are also a big hazard. They drive on the street and on the sidewalks, but often follow the rules of neither. They are fairly silent, and brake much slower than cars.
pokersnek · 2 points · 4y ago
I feel like the app has potential, but it also has its downfalls.
First, the on-board diagnostics thing is probably not gonna happen. People don't want to be tracked. Also, if it's optional, no one is gonna do it unless they're altruistic. And a warning that vehicles are approaching would be maddeningly redundant in a cityscape.
Second, the haptic feedback belt would have to be tied to something like a mapping program to be fully functional. If a person is supposed to turn left, it has to have coordinates to tell you when to turn.
Third, the app itself may pose more of an issue than you believe. It is going to be super reliant on visual processing. Any delay in this could result in a consumer fatality. Your hardware and software stack have to be flawless before it goes public. Then, there’s the assumption that all users have the coordination to point a smart phone camera and pan it across the intersection. I’ve had many students, and only a few who are totally blind could do this. People with low vision could, potentially.
adam-beta · 2 points · 4y ago
Thanks so much for all the feedback! I'm one of OP's teammates on the project. To address each of the points:
First, the team agrees that the OBD is unlikely to gain much public traction, but since we were already building out a suite of assistive tech, we thought it might be a nice addition. If anything, we thought it would be good to understand how something like that *might* work, and to flesh out privacy concerns, which we would certainly do before going to market but not as part of a minimum viable product. I appreciate the point about the redundancy in a city, though. We're in a very rural area and are mainly addressing the problem as far as it's testable around us.
Second, for clarification, the belt is linked to a mobile app. It keeps track of which direction the user should be heading and responds with haptic feedback accordingly.
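To make the belt logic concrete, here is a rough sketch of how the app might map the gap between the user's compass heading and the bearing of the next waypoint onto a buzz cue. The function name, the cue labels, and the 15-degree tolerance are illustrative assumptions, not our actual implementation:

```python
def belt_signal(current_heading_deg, target_bearing_deg, tolerance_deg=15.0):
    """Map the difference between the user's heading and the bearing of
    the next waypoint onto one of three haptic cues.

    Returns "left", "right", or "straight". The tolerance keeps the belt
    from buzzing constantly over small compass jitter.
    """
    # Signed difference folded into (-180, 180]: negative means the target
    # is to the user's left, positive means it is to the right.
    diff = (target_bearing_deg - current_heading_deg + 180.0) % 360.0 - 180.0
    if diff < -tolerance_deg:
        return "left"
    if diff > tolerance_deg:
        return "right"
    return "straight"
```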
Third, we're working hard to use state-of-the-art computer vision that's as fast as possible so latency doesn't cause issues. But we absolutely will never release without extensive testing to make sure it's safe. As far as coordination goes, we're giving some thought to using the compass on the phone (in a similar way to how it's used with the haptic belt) to determine whether the direction the phone is facing is parallel to the direction they need to travel. In this case, the camera is pointed at the lights on the other side of the street that needs to be crossed. The model isn't looking for cars in our current implementation, but rather for walk/don't walk signals (it's of course limited to intersections with these signals, but it was necessary to cut down on complexity due to time constraints).
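On the latency and safety point, one simple guard is to require the walk-signal classifier to agree across a short sliding window of frames before the app ever cues the user, so a single misclassified frame can't trigger a "walk" cue. A rough Python sketch; the window size and agreement ratio are illustrative guesses, not our tuned values:

```python
from collections import deque

def make_signal_gate(window=10, required_ratio=0.9):
    """Return an `update` function that only reports "walk" after the
    per-frame classifier has said "walk" for nearly every frame in a
    sliding window, defaulting to "dont_walk" otherwise.
    """
    recent = deque(maxlen=window)

    def update(frame_label):
        recent.append(frame_label)
        full = len(recent) == window
        walk_ratio = recent.count("walk") / window if full else 0.0
        return "walk" if full and walk_ratio >= required_ratio else "dont_walk"

    return update
```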
Any other responses to these or different points of feedback are welcome and appreciated! This has already been very helpful.
pokersnek · 2 points · 4y ago
Check out this product. It combines an app, a camera mounted on a pair of glasses, and a subscription to a live human assistant (for a set number of minutes per month).
https://aira.io
BlindOwl12 · 1 point · 4y ago
If I’m using this, I would need to have the phone in one hand and my cane in the other. I’d need to be coordinated enough to hold the phone steady while using my cane, presumably while listening out for cars just in case, and on top of that I’d probably be thinking about at least two other things. Given the choice, I would probably stick to what I’m using now unless I really needed the technology. The belt seems like it could be useful, though a GPS app and a bone-conduction headset is probably just as good and certainly cheaper. I’d also like to mention that the OBD would be finicky: no way people would get one if it cost money, and even if it were somehow made compulsory, people would still remove or break them. Good idea in theory, though; it would also help apps like Google Maps with traffic predictions.