We can read about disabilities and study inclusive design as much as we like, but that's all just theory. One way to internalise that theory, and to empathise with how people with disabilities use technology, is to try the assistive tech ourselves.
In this episode, I'm going to share my experience playing around with assistive technologies: from using only a keyboard to navigate, to colour filters, to screen readers.
Videos of how to use Mac's built in assistive tech mentioned
Links to resources mentioned in the episode:
Each episode, show notes, and transcripts are available on webaccessclub.com.
Support this show, and help me pay the bills on Buy me a coffee.
Sawasdee-ka, kia ora and hello.
Welcome to Web Access Club, a podcast about accessibility for web creators. I'm Prae, a New Zealand based UX engineer, who is learning to make accessible web products.
In this episode, I'm going to share my experience playing around with assistive technologies.
Something doesn't feel right.
I've been consuming media about people with disabilities and web accessibility advocates for quite a while. I'm now more aware of different disabilities and of coding techniques that may enable or disable different abilities.
But something is missing. When confronted with different technical options, I was uncertain.
Which would be less disabling?
How can I make decisions more confidently?
It's as if I haven't quite internalized all that theory, so I struggled to really apply what I've learned in my day-to-day work.
This feels kind of similar to when I started learning to cook. I would study a stir fry recipe and even watch my parents make it.
Then I would lay out all the vegetables on the table and just... freeze.
In my mind, I knew that I needed to use a knife to chop the ingredients. I would then need to heat up some oil in the frying pan and try to fry up those ingredients.
But in practice, my movements were all clumsy and uncertain. I was slow to chop things up and I might have fried the vegetables for a little bit too long.
Back then I didn't have enough experience cooking. So I didn't feel comfortable with the cooking tools and methods that were being presented to me.
But the more I cooked, and the more I used those utensils to make different types of dishes, the more confident I became. Trying new recipes became easier and more intuitive. And now I can even identify ways to optimize my cooking process and tweak recipes a little to suit my preferences.
What I'm trying to get at is that I realized I need to experience using some of these assistive technologies for real. I think this will connect my understanding of coding an experience with accessing it through different tools.
But where do I start? There are so many different types of assistive technologies.
But, okay. The easiest thing is to start testing with what I've got; something I'm most familiar with. I'm going to start by trying to use only a keyboard to navigate a website.
Normally, when I use a computer, I would use both keyboard and mouse. I would mainly use a mouse for navigation and keyboard to type in text.
But what if I can't use my mouse? Like someone with really strong hand tremors. How do I even navigate a whole website with just a keyboard?
Turns out it's not that hard.
Pressing tab takes you to the next interactive element.
Pressing the up and down keys scrolls through the page.
Press space to open a dropdown or select menu, then use the arrow keys to navigate to the option you want.
Lastly, press enter to submit a form or click on a link.
Just knowing these four basics is normally enough to let me use a keyboard to navigate most sites. Unfortunately, doing this also quickly reveals the sites that are not keyboard accessible.
When I used the keyboard to fill out forms, not seeing the keyboard focus became quite frustrating. Like where in the form am I? And what field am I typing into?
Not being able to do basic navigation because the focus state has been hidden from me is now one of the first things I notice on every site I visit, even when I'm back to using a mouse to navigate.
What frustrated me the most was seeing standard HTML elements have their keyboard interactions and states removed or altered.
As an HTML and CSS coder, I know those things are built in. It takes additional lines of code to actually go and remove or change those features, which in turn makes my experience worse. How is that a good use of a developer's time?
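To make this concrete, here's a sketch of the anti-pattern I mean and a safer alternative. This is illustrative, not taken from any particular site: the point is that `outline: none` hides the built-in focus indicator entirely, while `:focus-visible` lets you restyle it without taking it away.

```html
<!-- Anti-pattern: this one extra line of CSS hides the focus ring,
     so keyboard users can no longer see where they are in the page. -->
<style>
  button:focus { outline: none; }
</style>

<!-- Safer: keep a visible indicator and just restyle it.
     :focus-visible shows the ring for keyboard users without
     flashing it on every mouse click. -->
<style>
  button:focus-visible {
    outline: 3px solid #1a73e8;
    outline-offset: 2px;
  }
</style>
```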
I always have the option to move my hand to the right, grab my mouse, and navigate from there. But that's often not an option for people with disabilities. People with hand tremors struggle with a mouse because it needs precision. When your hand shakes, you can't point accurately, but pressing a simple key is much easier than trying to fine-tune something with a mouse.
Another example of keyboard-only users would be people who are fully blind. A mouse is pretty useless to them, and they have to rely on a keyboard to get to the next element, because they just don't know where things are on the page.
All this made me appreciate login forms that do work with my password managers and just autofill things correctly. That's actually enabled by simply giving a name attribute to an input field.
Sites also get bonus points from me if pressing enter submits a form, so I don't have to tab all the way down to the submit button and press it to move on to my next task.
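As a sketch of what enables both of those conveniences: `name` and `autocomplete` attributes give password managers something reliable to hook into, and a submit button inside the form is what lets pressing enter submit it. The field names and action URL here are made up for illustration.

```html
<form action="/login" method="post">
  <!-- name + autocomplete let password managers identify
       and autofill the right fields -->
  <label for="email">Email</label>
  <input id="email" name="email" type="email" autocomplete="username">

  <label for="password">Password</label>
  <input id="password" name="password" type="password"
         autocomplete="current-password">

  <!-- a submit button inside the form means pressing enter
       in a text field submits the whole form -->
  <button type="submit">Log in</button>
</form>
```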
After intentionally using the keyboard as much as I can, I also found that I became more aware of the convenience shortcuts in my life.
Improving keyboard skills is super useful, regardless of whether you are interested in accessibility or not.
The next thing I tried hardly changed my workflow at all.
I am a Mac user, and there's a variety of visual adjustments that I could play with.
First, I tried the built in color filter, which turns everything grayscale.
I often use this mode when I want to focus on writing, because it gives me fewer colorful distractions when I'm trying to do a Google search.
Though if my reference materials are in design or collaboration tools like Mural that happen to use low contrast, it can be tricky to interpret them properly. I have to quickly switch back to color mode when I'm referencing those materials.
The problem I have with the built-in color filter is that I can't screenshot the experience to show other people. If I find that something has too little contrast, I actually have to use another tool to convert the screenshot to grayscale, and then post it up later.
Another handy mode that I've started using more and more often is inverting colors. I personally find it hard to read text on a bright white background. Yes, I could change the background so the document is another color, but that doesn't work well on documents I don't own, or ones I'm collaborating on with other people.
The defaults there are meant to cater for as many people as possible. If my preference differs slightly, I need to alter my own setup to help myself. I'm thankful that my writing apps display perfectly in inverted colors. And if a site or an app has a dark mode, I don't have to keep the inverted color mode on; I can just switch to their built-in dark mode, and that serves me fine.
An interesting note is that the inverted color mode is terrible for viewing photographic media. Try it out. It reminds me of holding old camera film up to the light, or looking at an x-ray scan. So I only recommend using inverted colors for text-based tasks.
The next visual assistance tool feels a little more foreign to me. I'm a little shortsighted, but I wear my glasses all the time, so I usually have a pretty decent visual experience on my standard digital setup. But after taking the WAI accessibility basics course, I saw a few examples of people who use extreme zoom functionality, and I've always wanted to try it.
Imagine my delight when I discovered, while clicking through all the assistive tech options, that zoom functionality is built into my Mac.
Though when I looked at it closely, the Mac version does seem pretty basic. You turn it on, then use your mouse scroll wheel in combination with the Control key to zoom in and out.
It zooms everything on your computer, not just an app or a website. This has actually been kind of helpful for my posture, cuz sometimes I find myself accidentally leaning in or slouching closer to my monitor to read especially small text. Now I have the option to just hold down a key and scroll to zoom in a little more, without leaning in. It's pretty convenient.
I also found this especially helpful when my teammates are presenting their whole screen while on a crazy high resolution. When that happens, you just cannot read what's going on on their screen.
I don't have to ask them to adjust their screen resolution; I just do it quietly, myself. Still, it is not a perfect solution. As a selfish reminder for me: please, please, please adjust your screen resolution when you are sharing your screen. I mean, I'm on a 31-inch monitor and I still struggle to read what's going on on my colleagues' screens.
Zooming is frankly a pretty crude tool.
The zoomed-in UI looks pretty pixelated. So even though it lets me look at something more closely, if the UI is truly small it doesn't help very much, because it's all pixelated; there aren't many pixels to zoom in on.
It is a good reminder that for truly inclusive design, using larger text by default is actually safer. Cuz when you make larger things smaller, they don't go all blurry and pixelated.
Zoom tools would not be very helpful for those with tunnel vision, who can only see a small area at a time; there's only so much zooming out you can do. For this crowd, responsive design, where they can shrink their browser viewport to a smaller size, could be best.
They can adjust it so that it fits within their field of vision. Users who rely on zoom functionality may also have their resolution set to the lowest possible one, as well as their system fonts increased to something really big. Which reminds me again of how people use different combinations of tools to operate their devices, and it is within their rights to do so.
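A minimal sketch of the kind of responsive CSS that helps here, with the class name and breakpoint chosen purely for illustration: the layout reflows to a single column at narrow viewport widths, so someone who has shrunk their window to fit their field of vision, or zoomed in a long way, still gets a usable page.

```html
<style>
  /* Two columns on wide viewports... */
  .content {
    display: grid;
    grid-template-columns: 2fr 1fr;
    gap: 1rem;
  }

  /* ...reflowing to one column when the viewport is narrow,
     e.g. a window shrunk to fit a small field of vision,
     or a page zoomed far in. */
  @media (max-width: 600px) {
    .content { grid-template-columns: 1fr; }
  }
</style>
```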
Of all the assistive tech that comes with my Mac, the VoiceOver screen reader was the most intimidating.
It is the thing I've had the least exposure to, since I don't need a screen reader to operate my devices, nor do I work with colleagues who use screen readers on a day-to-day basis.
Up until now, I rarely had a chance to optimize an experience for screen reader, as not many people actually thought about that use case when creating web products.
So of course screen reader is the big unknown tool for me.
But hey, I mean, so now I've played with all the other lower hanging fruit tools. This seems to be the last main hurdle for me to empathize with assistive tool users. So I finally took the plunge.
My tip for anyone learning to use a screen reader is to first learn to start and stop your screen reader tool.
That's the very first and most important thing you can learn. On a Mac, you can add an accessibility shortcuts icon to the menu bar, so that if you forget the keyboard shortcut to enable and disable VoiceOver, you can use your mouse to turn it off. It just gives you more ways to access it.
Next, I recommend learning to pause and resume speech. This is super important, because turning the screen reader off can take a few extra seconds, and it takes a few more to turn it back on again, because it has to index everything that is open on your computer before it announces what is currently active. It is more efficient to just pause the speech if a colleague taps you on the shoulder to ask a question.
My next recommendation is getting used to hearing things read out as you use the keyboard to navigate normally; like tabbing, arrowing, and entering.
After I learned all of that, the next thing I learned was the truly revealing UX that is actually unique to screen readers: different ways to skim the page.
As a sighted user, I can quickly glance at a webpage to get a sense of what it is about and see if it has what I'm looking for. But how do you do that with a screen reader, when you can only hear one thing read out to you at a time?
Well, there are actually shortcuts that enable navigation in several modes. Navigating by headings, links, form elements and lists are the most common and useful ones for me.
If your HTML is properly marked up with h1 to h6, screen readers will read out those heading numbers and the heading text together. In a content heavy page like Wikipedia, this is immensely helpful. I could quickly jump into different sections based on the headings that seem to match what I'm looking for.
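For example, a properly nested heading outline (the content here is made up) lets a screen reader user jump straight to the section they want with heading navigation:

```html
<h1>Stir fry basics</h1>        <!-- announced as "heading level 1" -->
  <h2>Ingredients</h2>          <!-- "heading level 2, Ingredients" -->
    <h3>Vegetables</h3>
    <h3>Sauces</h3>
  <h2>Method</h2>               <!-- jump straight here by heading -->
```

The indentation is just visual; what matters to the screen reader is the h1 to h6 levels forming a logical outline.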
This is also a good reminder of why headings are so important. Headings help group sections of related content, which benefits both sighted users and screen reader users.
Navigating shopping sites by link is both a frustrating and an entertaining experience. Usually I navigate by link when I know that a particular page should have a link that takes me to the page I actually need.
These are pages like the homepage, site maps, or product listings. They aren't pages that I wanna spend a lot of time on; they're just a gateway to more useful information.
On an online clothing store's website, I could be presented with 40 links that are all named "click here", and I have no idea what those links are for. What am I clicking here for? Is that a top? Is that pants? Is that a belt? I have no idea. It's a strong reminder of why you want to give your links unique and meaningful names.
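The fix is to put the meaningful text in the link itself, or, if the visible design really must say "click here", to give the link an accessible name. The URLs and product names below are made up:

```html
<!-- Frustrating: 40 of these all announce as "click here, link" -->
<a href="/product/123">Click here</a>

<!-- Better: the link text itself says where it goes -->
<a href="/product/123">Linen shirt, blue</a>

<!-- If the visible text can't change, aria-label overrides
     the name a screen reader announces -->
<a href="/product/123" aria-label="Linen shirt, blue">Click here</a>
```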
And huh, now I just can't help but notice links at work that are also called "here" and "click me". I try to go and change them whenever possible now, but it's always gonna irk me. You just can't unsee it.
I've also learned that I can give accessible names to some HTML tags, like nav, and it is quite useful. That's how you help users differentiate between the global navigation, the footer navigation, and the side navigation.
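A sketch of what that looks like, with hypothetical links: an aria-label on each nav landmark gives it an accessible name, so a screen reader can announce "Main, navigation" instead of three indistinguishable "navigation" landmarks.

```html
<!-- Each nav landmark gets its own accessible name -->
<nav aria-label="Main">
  <a href="/">Home</a>
  <a href="/shop">Shop</a>
</nav>

<nav aria-label="Sidebar">
  <a href="/filters">Filters</a>
</nav>

<footer>
  <nav aria-label="Footer">
    <a href="/contact">Contact</a>
  </nav>
</footer>
```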
In the world of screen readers, semantic HTML and keyboard navigation rule it all. Learning to use a screen reader was the thing that really gave me much more confidence in making technical decisions to improve a user experience via code.
Another thing I noticed, was that change is potentially harder to cope with for some screen reader users.
When I changed to my split ergonomic keyboard, it took me a couple of months to get my typing speed back in English. But in Thai, it's hopeless. Thai managed to fit 44 consonants, 22 vowels and 4 tones into a standard keyboard layout. On this smaller ergonomic keyboard, the keys map quite differently, and because my keycaps don't have Thai characters printed on them, it's a real struggle to figure out the new location of each character.
I've also struggled to use my new keyboard with a screen reader, because the VoiceOver key mappings are slightly different too. It took me a while to relearn those on my new keyboard.
It made me realize that not everyone is flexible about changing the way they work. Updating or augmenting your setup causes anxiety and fear. I can see why many assistive tech users don't update their software at all: every update costs them time relearning or remapping their setup.
There was a period when I didn't have a chance to use a screen reader for a couple of months. As with anything, pausing for a while made me forget some of the shortcuts, and it took a bit of practice to relearn them.
These days I use screen readers for reading long blog posts, and to read my own content back to me to check if it makes sense at all. I find that integrating it into my workflow helps keep the skill fresh.
The one thing I've noticed people usually joke about when they say they're picking up a screen reader is that they'll have to close their eyes in order to learn it. I have very mixed feelings about that, because the whole point of assistive technology is to assist the abilities that you have. Pretending to be blind while using a screen reader feels slightly disrespectful.
But hey, if it is easier for you to empathize with blind users that way, then go ahead: close your eyes and use a screen reader.
But remember that the screen reader flow should also match the visible flow. This is one of the WCAG criteria, for good reason. It lets a blind person using a screen reader and a sighted person work together side by side and refer to the same thing with a common, shared understanding, because the flow is exactly the same.
So now I've got over the screen reader learning curve, but there are still a few assistive tools that I haven't learned yet: voice dictation, and any physical or motor assistance tools.
I've always thought that knowing how to navigate my devices with my voice would be mighty useful. So surely I'll get into it at some point.
I've occasionally tried to use speech-to-text to do some writing. In the past, I've been so frustrated with its low accuracy. I mean, I do have an accent, but it's hard to pinpoint where exactly the accent is from, which makes it a little harder for the AI to understand some of my pronunciations.
Though lately I've noticed that it has gotten a little better. It also helps that I'm consciously trying to learn to speak more clearly and slowly for podcasts and presentations. So I guess that is me trying to be more inclusive in the way I verbally communicate. A good experience for all, if you're looking at it that way.
As for things like sticky keys or head pointers, I'm uncertain if I'll ever go as far as learning those. But hey, never say never, cuz who knows: I might get into an accident and need to learn how to use them in order to operate my devices.
Anyone can become disabled at any time, and some sort of disability is inevitable as we age.
Using any technology is a skill, and people pick up skills for different purposes and at different rates. For anyone wanting to learn to use assistive technology, my advice is to see what comes with your machine. Be patient and curious, and explore it at your own pace.
With all of this assistive tech, I realized that it simply helps me consume things differently.
On a website that has simple UX, good contrast, and full keyboard accessibility, I can customize my experience without forcing other users of the site to comply with how I use the product. When I use different visual assistance tools, I can keep them on even when I'm sharing my screen with colleagues, so they don't even know I'm using assistive technologies to view the same content in a different way.
Sites that have complicated UX, low contrast, and removed keyboard support actually prevent me from using my assistive tools. I mean, luckily I can opt out, but what about those who can't?
Trying to incorporate some assistive technologies in my life actually helps with my focus and workflow. It also helps me internalize the potential pitfalls of certain designs.
As a coder, I'm now much more aware of UX decisions that could impact my ability to use certain tools.
As I gain more confidence with combinations of assistive tools, I'll probably get better at identifying UX patterns for most of those use cases, too.
Ah, if only I could incorporate some of these things into the web products I'm involved in eh?
Well actually, the story of that attempt is coming up in the next episode.
You can follow Web Access Club on Twitter, Facebook, and Instagram.
Show notes, resources and transcripts are available on webaccessclub.com.
If you like this episode, please tell a friend, leave a review, and subscribe wherever you get your podcast.