retrolental_morose 5 points 2y ago
The single biggest thing is standards. If you use standard controls (either on the web or native to your target OS), screen readers have a far higher chance of handling those controls appropriately.
Whilst adding your own text-to-speech or magnification support to an app may seem altruistic, I've never understood why people do it: users need assistive technology running just to reach your app in the first place, so unless your app is very specific (an audio game that needs low-level keyboard timing, for instance), you're almost always reinventing a worse wheel.
zersiax 3 points 2y ago
Ok, I mean this in the friendliest, most gentle way possible, but you currently don't know enough to do much with the answers you'd be receiving here :)
I'll try to give you the 30,000-foot overview, but it's a good idea to play around with a screenreader for a bit to understand just what they do and how accessibility in tech relates to that process. VoiceOver on the Mac or NVDA on Windows are good free options for this. I'll get technical, but you're doing CompSci, you'll live ;)
An easy way to look into how accessibility works under the hood is to look at web accessibility first. The WCAG has a number of criteria to decide if a website is accessible or not, but the most important one for now is a principle called "Name, Role, Value". An element on a page has a name (Add, Search, Create Account), a role (checkbox, button, edit field) and a value (checked, selected, contents of edit field), and those three things together make up what a screenreader would tell the user.
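To make that concrete, here's a minimal sketch (TypeScript against the browser DOM; the labels are made up) of what name, role and value look like on a native control versus a hand-rolled one:

```typescript
// Native control: the browser exposes name, role and value for free.
// <label>Subscribe <input type="checkbox" checked></label>
// name = "Subscribe", role = "checkbox", value = "checked"

// Hand-rolled control: a plain <div> exposes nothing until you add it yourself.
const fakeCheckbox = document.createElement("div");
fakeCheckbox.setAttribute("role", "checkbox");        // role
fakeCheckbox.setAttribute("aria-label", "Subscribe"); // name
fakeCheckbox.setAttribute("aria-checked", "true");    // value
document.body.appendChild(fakeCheckbox);
```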
Another important one is an element's tabindex, which determines where it shows up when a user tabs through a website. Screenreader users don't rely on tabbing all that much (they have other ways of moving around a page), but keyboard-only users who don't run a screenreader use it constantly. If a component can't be tabbed to, it may be completely unreachable for some users.
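As an illustrative sketch (again, the names are made up), the difference usually comes down to one attribute and a key handler:

```typescript
// A native <button> is tabbable and activates on Enter/Space by default.
// A custom clickable <div> is skipped by the Tab key unless you opt it in:
const customButton = document.createElement("div");
customButton.setAttribute("role", "button");
customButton.setAttribute("aria-label", "Search");
customButton.tabIndex = 0; // 0 = reachable with Tab; -1 = focusable only from script
customButton.addEventListener("keydown", (e) => {
  if (e.key === "Enter" || e.key === " ") customButton.click();
});
document.body.appendChild(customButton);
```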
Back to actual apps or programs on a phone or desktop computer. Each OS has a set of APIs that a screenreader or other such tech can query in order to retrieve the info it needs from the currently focused object. Name, role, value come up again, and those are usually enough for basic scenarios. Other things like "what logical group do you belong to?" and "what language is your content in?" can come up as well and every API handles all that a little differently, but that's where we veer out of scope just a little.
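The shape below is purely hypothetical and heavily simplified (it is not any real platform API; UI Automation, NSAccessibility and AT-SPI all differ in naming and structure), but conceptually every platform hands the screenreader something along these lines for the focused object:

```typescript
// Hypothetical, simplified view of what an OS accessibility API exposes.
interface AccessibleNode {
  name: string;            // "Create Account"
  role: string;            // "button", "checkbox", "edit"
  value?: string;          // "checked", contents of an edit field, ...
  parent?: AccessibleNode; // "what logical group do you belong to?"
  language?: string;       // "what language is your content in?"
}

function announce(node: AccessibleNode): string {
  // Roughly what a screenreader sends to the speech synthesizer on focus.
  return [node.name, node.role, node.value].filter(Boolean).join(", ");
}
```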
When people say, for example, that developers should use "standard/native controls", the reason is that these controls already have standard mappings to these APIs. A native button in most operating systems will use its textual label as the accessible name for screenreaders, for example. In more cross-platform toolkits, or even in native toolkits when developers inherit from a control's parent higher up the inheritance tree in order to make custom controls, those API mappings may not (fully) be there, which means an object that could've been perfectly accessible is now not accessible at all.
Given a frequent lack of testing for these things, this often doesn't get caught until someone complains, and if a large number of elements in the UI were built this way, the dev team now needs to jump through hoops to monkeypatch the accessibility back in, which can be anywhere from simple enough to extremely tedious depending on what toolkit was used.
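On the web, that retrofit usually means bolting ARIA and keyboard handling onto each custom widget after the fact. A rough sketch of what the per-control fix-up can look like (the selector and class name are made up):

```typescript
// Hypothetical clean-up pass: for every custom "button" built out of divs,
// re-add the semantics a real <button> would have provided automatically.
document.querySelectorAll<HTMLElement>(".custom-button").forEach((el) => {
  el.setAttribute("role", "button");
  if (!el.hasAttribute("aria-label")) {
    el.setAttribute("aria-label", el.textContent?.trim() ?? "");
  }
  el.tabIndex = 0;
  el.addEventListener("keydown", (e) => {
    if (e.key === "Enter" || e.key === " ") el.click();
  });
});
// And that's the easy case: in a desktop toolkit the equivalent patch may not
// even be possible without changes to the toolkit itself.
```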
So... to finally answer your question: I don't really need all that many future tweaks to accessibility; I mostly need devs to use what's already there in the present. My #1 future wish would be for compilers, interpreters and transpilers to throw a hissy fit if it turns out accessibility was neglected. Ehh... a blind dev can dream, I suppose :)
soundwarrior20 1 points 2y ago
I am working on a project I feel you may be interested in contributing to. Please may I message you regarding this?