Breaking barriers: how Apple's new voice accessibility improvements address critical needs
Guest blog by Colin Hughes | 22 May 2024
Colin is a former BBC producer who campaigns for greater access and affordability of technology for disabled people. Colin is a regular contributor to Aestumanda.
Last August, I wrote an opinion piece for influential tech industry news site The Register, highlighting a significant issue affecting 250 million people worldwide: the lack of support for non-standard speech in Apple’s voice recognition technology.
This challenge is faced by individuals with acquired or progressive conditions such as cerebral palsy, motor neurone disease, those recovering from strokes, and my own condition, muscular dystrophy.
Despite the advancements in AI and voice recognition technologies, these individuals are often excluded, unable to benefit fully from innovations meant to make life easier.
Issues with voice recognition systems
In my Register article, I identified several key problems with existing voice recognition systems:
Lack of adaptability
Most voice recognition systems were designed with a narrow focus, failing to account for the diverse speech patterns of users with non-standard speech. This made the technology inaccessible to many who could have benefited from it the most.
Insufficient customisation
Voice control systems lacked the ability to incorporate custom vocabularies or handle complex words, a crucial feature for those whose speech patterns differ significantly from typical users. This limitation often rendered voice commands ineffective and frustrating.
Exclusion from innovation
Despite the promise of AI, individuals with speech impairments were often excluded from the benefits of technological advancements. The failure to consider their needs perpetuated a digital divide, leaving a significant portion of the population without access to transformative tools.
In response to these issues, I have been a vocal advocate for more inclusive voice technology, consistently calling for enhancements that would make voice recognition accessible to everyone, regardless of their speech patterns. This advocacy included direct appeals to tech giants like Apple, urging them to lead the way in creating more inclusive technology.
In March, in a blog on the AbilityNet website, I expressed great hope for artificial intelligence and how it could be leveraged to make voice technology more accessible on Apple devices.
Apple's latest Accessibility announcements
I can't claim to read the future, but last week my persistence paid off. Apple announced significant voice accessibility improvements that directly address the concerns I raised. Here's how:
Listen for Atypical Speech
Apple's new "Listen for Atypical Speech" feature uses on-device AI to recognise the unique speech patterns of users with acquired or progressive conditions. This groundbreaking technology ensures that people who have non-standard speech can now have their voices accurately understood and responded to by their devices. By leveraging on-device AI, Apple ensures privacy and efficiency, making this feature both secure and effective.
Enhanced Voice Control
Voice Control has also received a major upgrade with the ability to support custom vocabularies and complex words. This is a development I have been advocating for since Voice Control first launched five years ago. The ability to customise vocabularies ensures that users can communicate effectively with their devices, regardless of the specific words or phrases they use. This enhancement should be a game-changer for those whose speech patterns include unique or complex terminology.
A step forward for inclusion
These improvements mark a significant step forward in making technology more inclusive. Apple’s commitment to addressing the needs of users with non-standard speech is commendable and demonstrates the power of advocacy and the impact of listening to user feedback. As someone who has been at the forefront of highlighting these issues, I couldn’t be happier to see these changes come to fruition.
Other developments
Apple also announced a slew of other accessibility improvements coming to Apple devices later this year, including:
- Eye Tracking: This new feature will allow iPhone and iPad users with physical disabilities to control their devices just by looking at them, using a new iOS gaze system similar to the one on Vision Pro.
- Vocal Shortcuts: Users can assign custom action phrases to launch shortcuts and perform complicated multi-step actions when those phrases are spoken aloud.
Additionally, Apple announced a series of smaller accessibility enhancements, such as new voices for VoiceOver narration, a Hover Typing option that enlarges the current text field content during editing, access to the Magnifier app’s Detection Mode with the iPhone 15 action button, and improvements to Braille input.
The importance of persistent lobbying
The announcement of these new voice accessibility features is a testament to the importance of advocacy and the impact of persistent lobbying. Few have been more vocal about these issues than I have, and it is gratifying to see that my efforts have helped bring about meaningful change. Apple's new voice accessibility improvements are not just a win for those with non-standard speech; they are a win for inclusivity and innovation.
I’m one of millions of people who most need the benefits of voice control, but my atypical speech has sometimes made it hard to make myself understood by Siri and Voice Control.
The set of voice accessibility enhancements announced last week are the biggest breakthroughs I have seen to date, and I’m really grateful to Apple for listening.
These changes will literally change my life, and the lives of others who have faced the same challenges.
As we celebrate these advancements, it is crucial to continue pushing for further improvements and ensuring that technology serves everyone. For now, I’m celebrating, thrilled that Apple has listened and taken significant steps to make voice recognition technology accessible to all.
You won’t be able to try the new features just yet. They will first become available when iOS 18 public beta is released, likely at the beginning of July. They will be fully available to everyone in the autumn with the full release of Apple’s updated operating systems, including iOS 18, iPadOS, and macOS.
Apple hasn’t said whether some of the new features requiring AI will be restricted to newer Apple devices.
Read more about Apple's accessibility announcements here: Apple Newsroom.
This article was written by Colin Hughes. All views are his own freely expressed opinions.