Hey people of r/blind!
I'm a product manager at a company that, among other things, builds ticket validation machines for public transport. The kind you find inside buses and trains or in stations, where you tap your transit card (NFC) or scan a QR code to validate your journey.
We only build the hardware and the platform. The actual service and software (user interface and interaction), as well as the installation inside vehicles and stations, is up to the local authority that installs the machines, so that part is outside the scope of this question as I have no control over it.
The main purpose of this product is to validate your transport ticket, so it has 2 main features:
- Scan area for a contactless card: payment card (Visa/Mastercard etc.) or transport card. Located on the front, bottom part of the product.
- Scan area for QR code reading, located on the front, middle part of the product, above the contactless scan area.
There is also a small touchscreen, and the product features built-in Bluetooth as well as a front-facing speaker that could perhaps be leveraged for other types of interaction, but again I don't believe those are in scope here.
My question is: provided that the machine is well installed and can easily be located inside a vehicle/station, how can I help visually impaired users quickly find where to tap their card or scan their QR code to validate their journey?
Obviously I would like to avoid users fumbling with their credit card for minutes on end before they find the right spot, so my initial idea was to add physical cues on the product at the respective areas, but I have no really good ideas as to what to print. An added difficulty is that the same cues need to work all around the world, across different languages, as I cannot really afford to have different versions for different regions.
Any ideas?
Let me know if you need more information; I tried to keep it short and succinct to start with (not sure that I succeeded). Thanks for your help!