How Artificial Intelligence can revolutionise the world of accessibility

Artificial Intelligence (AI) tools, such as ChatGPT and Bard, can think and act in ways that previously only humans could. These tools can analyse the world around them, absorb and learn from information, make decisions based on what they have learned, and then take appropriate action, often without the need for human intervention. This is why it's being referred to as the "Artificial Intelligence Revolution".  

In this blog article, we reflect on conversations from The AbilityNet Podcast with Joe Devon, Co-founder of Global Accessibility Awareness Day (GAAD), and Mike Buckley, CEO of Be My Eyes, who both discuss how AI could and will make the world more accessible. 

Free Webinar: How will Artificial Intelligence change accessibility testing? 

Guests from AbilityNet and Deque discussed how Artificial Intelligence (AI) can help improve accessibility testing, and much more!

Catch up on the recording 

How AI could improve the accessibility of TV and film 

A group of four people sitting on a sofa watching TV together.

Most people enjoy watching television because it allows them to relax and unwind after a long day at work. However, not everyone can enjoy their favourite TV series and films, because many programmes lack audio description or sign interpretation. 

However, AI can break down these barriers. 

Joe Devon suggests that AI might be able to transform content and reality into audio for blind people and visuals for deaf people. 

Joe also mentions that by using AI, it may be possible to pause a TV show or a film and ask, "Who are you?", "What episode were you in?" or "Replay me the last scene you featured in?". These are questions that would benefit people with cognitive disabilities as well as those with poor recall. 

“It's revolutionary...AI and accessibility are hand in hand. AI is accessibility. And what I mean by that is when you think about disability or impairments, and you think about what artificial intelligence is trying to do, is you've got sensory input and AI is trying to understand the sensory input.” - Joe Devon


Catch up on the full podcast episode featuring Joe Devon

Can AI help make code more accessible? 

AI tools may be able to assist developers in creating quality code that is accessible and compliant with the Web Content Accessibility Guidelines (WCAG). 

In Joe's experience, if you specifically ask tools such as ChatGPT to generate accessible code, the output will be more accurate than if you simply ask them to generate code. 

Joe also mentions GitHub Copilot, an AI-powered code completion and suggestion tool, commenting that "the day that that spits out code that's accessible by default, that [will be] a huge game changer". 
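To illustrate the kind of problem "accessible by default" code would avoid, here is a minimal sketch of an automated accessibility check, written for this article rather than taken from the podcast. It uses Python's standard html.parser module to flag img elements that have no alt attribute, one of the most common failures of WCAG's non-text content requirement and exactly the sort of issue accessibility testing tools look for.

```python
# Minimal sketch: flag <img> tags missing an alt attribute, a common
# WCAG 1.1.1 (non-text content) failure. Illustrative only, not a
# substitute for a full accessibility audit.
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag with no alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attr_dict = dict(attrs)
        # alt="" is valid for decorative images, so we only flag
        # images where the attribute is absent entirely.
        if tag == "img" and "alt" not in attr_dict:
            self.missing_alt.append(attr_dict.get("src", "<no src>"))


def find_images_missing_alt(html: str) -> list:
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt


if __name__ == "__main__":
    sample = (
        '<img src="logo.png" alt="Company logo">'
        '<img src="chart.png">'
    )
    # Only chart.png is flagged, because it has no alt attribute.
    print(find_images_missing_alt(sample))
```

A check like this only catches a narrow class of problems; the hope Joe describes is that AI tools will generate markup that passes such checks in the first place.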

How will AI benefit visually impaired or blind people? 

AI holds the promise of transforming the lives of visually impaired or blind individuals by offering innovative solutions that enhance accessibility.  

A close up of a flight departures board in an airport.

Mike Buckley, CEO of Be My Eyes, is working with OpenAI to develop the Be My Eyes Virtual Volunteer tool, which includes a dynamic new image-to-text generator. 

The new tool, which is currently in beta testing, will allow users to take a picture and receive a full description of the image within a matter of seconds. A user might, for example, photograph a departures board at an airport and receive the latest update or gate number for their flight in real time. 

"When you talk to the beta testers, they use phrases like life changing. One beta tester said, "Wow, I have a chance to get my independence back." Another beta tester got emotional, incredibly emotional when he said, "This is the first time in four years I can go on my Instagram feed and enjoy it with my family and friends, because I'm getting descriptions of these images." - Mike Buckley


Catch up on the full podcast episode featuring Mike Buckley


Not only does the Virtual Volunteer tool's visual recognition provide depth, granularity, and accuracy, but you can also converse with it. After you take your first picture, you can use VoiceOver to ask it questions: "Tell me more about this", "Where can I buy this?", "How much is it?" and so on, giving you the option to go back and forth within the image and discover additional information. 

If the AI is unable to produce an accurate or satisfactory answer, the Virtual Volunteer tool will call one of the Be My Eyes volunteers, offering a seamless rollover mechanism for when the AI is not performing well. 

Further resources