How technology can help people with dyslexia

Date of webinar: 
29 Sep 2020 - 13:00



This webinar took place on Tuesday 29 September 2020, at 1pm BST.

Dafydd Henke-Reed, Principal Accessibility and Usability Consultant with AbilityNet, shared his expert advice about dyslexia and technology.

In this webinar, Dafydd championed how much technology has enabled him with his dyslexia. Mixing personal stories and professional experiences, his talk went beyond spellcheckers and explored the benefits technology can have for people with dyslexia. It also examined digital barriers to avoid, alongside good practice for enabling dyslexic users online.

Assistive technologies, such as dictation software and voice assistants, have become mainstream. But digital communication has become less text oriented, with reactions, emojis, voice chat, and multimedia messaging being commonplace.  

If you're dyslexic and are looking for advice about how technology can help you, learn from Dafydd about how technology has revolutionised his experience of dyslexia. 

Who will benefit from this webinar?

This webinar is for anyone with dyslexia, or for those who support someone with dyslexia.

This is particularly relevant for web editors and developers working to make digital accessibility improvements.

The webinar included an opportunity for attendees to pose questions about the topic, which Dafydd has answered below.

Webinar recording, slides and transcript

All our webinars are recorded and a captioned recording of the session is now available below, alongside a transcript and slides used in the session.

For additional information read answers to frequently asked questions about AbilityNet webinars.

Find out more about our AbilityNet Live webinar series.

Useful resources mentioned in the webinar

Follow up questions and answers from the session

Dafydd has responded below to some of the questions that were posed by attendees in the Q&A panel during the webinar.

Q: Can dyslexia affect numbers as well, or is that known by a different name? Dyscalculia?

My personal experience mirrors this article, Neurodiversity and Co-occurring differences, from the British Dyslexia Association.

It notes that “co-occurrence [is] believed to be a consequence of risk factors that are shared between disorders, for example, working memory”.

One of the main manifestations of my dyslexia is a short-term working memory problem. As you can imagine, this has a significant impact on my ability to perform mental maths.

There are also some studies that have investigated the neurological overlap. For example, Dyscalculia and dyslexia: Different behavioral, yet similar brain activity profiles during arithmetic.

Dyslexia and dyscalculia are not interchangeable. Not everyone with dyslexia has dyscalculia, nor vice versa. However, there is a lot of noteworthy overlap, as explained above.

Q: I have trouble pronouncing words, and articulating myself - would this be also classed as dyslexia? Also processing two sets of questions straight after each – I have trouble with this but some can fluidly answer both - another dyslexia / dyspraxia factor?

I am not qualified to give any diagnostic advice.

If you’re in education, your school or university may have an assessment unit. As well, organisations such as the British Dyslexia Association have an assessment service.

Q: Would the speech to text function work with different accents? I am a Ugandan and in most cases computer systems miss out my accents and it can be very frustrating.

Many assistive technologies support many different languages. For example, NVDA supports over 80 languages, including dialects. VoiceOver supports English (US), English (UK), English (Australia), English (Ireland), and English (South Africa).

However, these are screen readers. They output speech. Software that can interpret human speech has traditionally struggled with accents and dialects. Voice recognition has evolved greatly over the last few years, particularly with Google Home, Siri, and Alexa. However, there is definitely still work to be done. What I would suggest is trying Siri and Google Assistant.

If Siri works for you, the chances are that Voice Control and Dictation (available on iOS and macOS) would work well for you. Likewise with Google Assistant and Voice Access on Android.

On Windows, you can try Windows Speech Recognition. It is free and built into Windows.

As with Dragon NaturallySpeaking (a paid solution on Windows), the more you use the software, the better it understands you. This is called “training” the software.

Q: Are there any tools to help my friend who has dyslexia read her emails please? Not Text to Speech. Does changing the background colour of the email help please?

I cannot comment on what will definitely help. As discussed, people can experience dyslexia in very different ways and find different things more or less helpful.

If your friend is in work, I would encourage them to have a workplace assessment. If they are a student, I would suggest that they liaise with the student support services.

In terms of what might help, if these emails are being sent by coworkers, teachers, family members, and so on, I would encourage them to review the dyslexia style guide.

Your friend may benefit from changing the background colour, fonts, and font sizes of messages. For example, see Outlook - Change the default font or text color for email messages.

As well, I personally find f.lux helps me read my emails. It adjusts the brightness of my monitor over the course of the day, which helps reduce eyestrain.

Q: From a WCAG 2.1 view: Where you create learning materials by multiple methods (i.e. different formats). If you had a very short video-only video (no audio) do you still need a transcript if you have text and/or a handout versions?

Strictly speaking, the relevant WCAG success criteria are out of scope “when the audio or video is a media alternative for text and is clearly labelled as such”.

For example, this can be seen on Audio-only and Video-only (Prerecorded), which is the success criterion that would be used to review your above example.

You would make it crystal clear that the video contains the same information, and ensure that users know they are not missing anything. For example, adding the caption “No audio” tells users that they are not missing out on anything.
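As a rough sketch of this idea (the heading, steps, and file name here are all hypothetical), a silent video presented as a media alternative for text might be marked up along these lines:

```html
<!-- Text version: the primary source of the information -->
<h2>Resetting your password</h2>
<ol>
  <li>Open Settings and choose Security.</li>
  <li>Select “Reset password” and follow the prompts.</li>
</ol>

<!-- Video version, clearly labelled as an alternative for the
     text above and flagged as having no audio -->
<p>The video below shows the same steps. No audio.</p>
<video controls src="reset-password.mp4"></video>
```

It is the clear labelling that brings the video under the “media alternative for text” exception in the success criterion.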

One small point is that you note that the video might be an alternative to a handout.

A choice between a physical handout and a video would benefit several user groups. So I would make sure that if you have a video, you also have the same information digitally available as text.

Q: “Use semantics” - how does this help? Does the text to speech (TTS) pick up on the embedded semantics, or is it that it’s easier to differentiate text when there are variances in the layout (assuming that this hasn’t been overridden with aggressive cascading style sheets (CSS))?

Strictly speaking, the technologies demonstrated do not interface with the accessibility tree. The accessibility tree is where assistive technologies and semantics coincide.

Software such as iOS Speech treats everything as plain text. However, other text-to-speech solutions do interface with semantics.

For example, iOS Speech will announce “Welcome to ACME”. However, VoiceOver, an iOS screen reader, will announce “Welcome to ACME, heading level one”.

Different TTS tools have different target audiences. The tools I demonstrated target sighted mouse users who struggle with language input and output.

Screen readers target users who want to access software, websites, and apps entirely through audio, which is particularly useful for blind and partially sighted users.

That is not to say that the usage of the tools always falls into these discrete camps. I have sighted dyslexic friends who use screen readers, particularly on iOS.

More generally, I encourage people to use semantics because structure is incredibly helpful. I can much more easily parse information when it uses headings, tables, lists, and so on.

Now, I would get the same benefit from an HTML heading as from a pure CSS heading. However, other assistive technologies do hook into these semantics.
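As a minimal illustration of the difference (the text and styling here are hypothetical): both lines below can look identical on screen, but only the first is exposed as a heading in the accessibility tree.

```html
<!-- Real semantics: a screen reader such as VoiceOver announces
     “Welcome to ACME, heading level one” -->
<h1>Welcome to ACME</h1>

<!-- Pure CSS “heading”: visually similar, but announced as plain text,
     and skipped when users navigate by headings -->
<div style="font-size: 2em; font-weight: bold">Welcome to ACME</div>
```

Screen reader users commonly jump between headings to skim a page, which only works when the headings are real HTML elements.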

So using semantics makes content more readable for me, and more usable for those using assistive technology. Basically, proper semantics allows you to kill two birds with one stone.