Whether we’re quite ready for it or not, artificial intelligence is already shaping how we live, work and learn. And for disabled people in particular, tools such as ChatGPT are proving genuinely transformative. They can simplify online tasks, explain complex ideas in plain language, and act as a kind of endlessly patient expert assistant - or even companion.
For people who may find digital content inaccessible - whether due to vision, motor, cognitive or learning differences - the impact can be remarkable. Tasks that once required specialist software, painstaking manual effort, or someone else’s assistance can now be completed simply by typing or speaking a few sentences.
Many of us have realised how massively helpful AI can be, and find ourselves using it on a daily basis both at home and at work. Even if you’ve provided employees with access to Microsoft Copilot (a secure but expensive option that keeps all chats and business data private), chances are your employees will also be using the likes of ChatGPT on the side.
Block it on work machines, and they’ll just use it on their phones. So while some organisations are still deciding how to “manage” the arrival of AI in the workplace, it’s probably fair to say that use of tools like ChatGPT is inevitable.
The real question, then, is not if people will use it, but how they can do so safely, confidently and to best effect.
Everyday and workplace transformations
ChatGPT is perhaps best known as a writing assistant - helping to draft letters, summarise reports or translate complex information into something more digestible. But its value for disabled users runs far deeper.
For example, someone with sight loss might ask it to describe a complex chart, graphic or spreadsheet. A person with dyslexia could ask for information to be reworded in shorter sentences or simpler language, or to check their written text for errors or flow. Someone with a motor impairment might use it to compose and edit text by voice, or to automate repetitive written tasks.
AI can also serve as a safe, non-judgemental companion for exploring sensitive subjects such as health, benefits or personal wellbeing – topics that can be stressful or inaccessible to research alone.

Avoiding the pitfalls
The most important thing to remember is that ChatGPT, clever as it is, doesn’t actually know anything in the way humans do. It predicts which words should come next, based on patterns learned from huge amounts of data. Most of the time this produces strikingly accurate, detailed results, but sometimes it gets things confidently and catastrophically wrong - what’s known as a “hallucination”.
How many of us routinely and conscientiously check the sources cited in ChatGPT’s responses to make sure everything it generates is correct and up to date? Not many. But there are some tricks you can use to maximise accuracy and minimise hallucinations.
Make sure it uses the best information when building its response. You probably know you can upload documents of your own, but you can also say things like, “Please only use authoritative, reputable sources when doing your research.” And don’t forget that you can ask it to undertake ‘Deep research’ (in the drop-down menu by the text box), where it takes considerably longer but comes back with a more thoroughly researched response.
You can also ask it to check its work (“Are you sure about that?” or “Please explain where you found this information”) or to show the reasoning behind its answer (“List the steps you used to arrive at that conclusion”).
Having said all that, it’s still important to check any facts yourself - particularly if you’re relying on them for health, financial or business-critical tasks.
Another common pitfall is neglecting due diligence on data privacy. It’s easy to forget that anything you type in could, in theory, end up being stored or analysed to improve future models. That’s why it’s important to know your way around ChatGPT’s settings – something you only get access to once you create an account.
The power of personalisation
Once you’ve created a free ChatGPT account, you can access the all-important Settings area. These settings are, of course, also available on the paid tiers, but many of us will still be using the free version informally - and, in the workplace, unofficially.
Here, under Data Controls, you can opt out of your conversations being used to train future models – a sensible precaution if you’re entering anything sensitive or work-related.
Equally important is the option to tell ChatGPT a bit about yourself and how you’d like it to respond. This can go a long way to giving you the output style and format you prefer (for example, “Always use UK spelling and avoid American turns of phrase”), but can also make a huge difference for accessibility. You might, for instance, add:
- “I am blind, so please describe any visual content in full detail.”
- “Please use simple, clear language as I have a learning difficulty.”
- “I use speech recognition, so please ask follow-up questions in a way that lets me answer ‘Yes’ or ‘No’, or choose a numbered option.”
Once saved, these preferences will automatically shape how ChatGPT responds in all future chats.
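As an aside for readers who go a step further and script their interactions with these models rather than using the ChatGPT website, the same idea applies: preferences like the ones above can be supplied as a “system” message on every request. The sketch below is purely illustrative - it assumes the OpenAI Python SDK, an example model name and example preference wording - and is separate from ChatGPT’s own saved settings:

```python
# Illustrative sketch only: passing accessibility preferences as a system
# message via the OpenAI Python SDK. The model name and preference wording
# are examples, not recommendations from this article.
from openai import OpenAI

client = OpenAI()  # reads your OPENAI_API_KEY from the environment

preferences = (
    "I am blind, so please describe any visual content in full detail. "
    "Use simple, clear UK English and keep sentences short."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": preferences},
        {"role": "user", "content": "Please explain what an AI 'hallucination' is in plain language."},
    ],
)

print(response.choices[0].message.content)
```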
Having an account also brings another major advantage - the ability to revisit your previous conversations. This means you can return to earlier work, refine it over time, or simply remind yourself how you arrived at a particular idea. For anyone juggling multiple tasks or coping with memory or attention difficulties, that alone can be a game-changer.
Taking charge of your AI assistant
It’s tempting to see ChatGPT as a kind of magic box that simply “does things” for you. But the real power lies in how you use it. The clearer your requests, the better it performs. Be conversational but specific, and don’t be afraid to refine your prompts. The best users treat ChatGPT like a knowledgeable but occasionally overconfident colleague - one whose ideas can be brilliant, but still need a quick fact-check and sanity check before anything goes out the door.
For disabled people, though, this technology represents something even more profound. It’s a powerful ‘leveller’ - a way of working, learning and expressing ourselves that reduces many of the barriers built into the digital world. Whether it’s helping you write a policy, understand a medical form, or simply explore a new topic without fear of judgement, AI has the potential to make life fairer and more inclusive.
The key, as ever, is awareness. Understanding its limits, protecting your data, and setting it up to work for you will ensure that tools like ChatGPT remain exactly what they should be – powerful assistants that amplify human capability rather than replace it.