Ethics, Machine Learning and Disabilities

TechShare Pro 2019 brought together people from all corners of the accessibility sphere: academics, designers, coders and teachers; technophiles and technophobes; idealists and sceptics, and everyone in between. It was an event where you could be an audience member one moment and, the next, deep in a tea-break discussion with someone you later discover is a giant of the accessibility scene.

Ethics, Machine Learning and Disabilities panel at TechShare Pro 2019

Many topics are generating interest within accessibility and disability, but the subject of Artificial Intelligence (AI) and Machine Learning (ML), and the potential for the emergence of (unintended) bias against disabled people, is clearly something many people are thinking about. TechShare Pro 2019 held a panel on this subject, "Ethics, Machine Learning and Disabilities", chaired by AbilityNet's Abi James and featuring Reema Patel, Head of Public Engagement at the Ada Lovelace Institute; Christopher Patnoe, Head of Accessibility Programs at Google; Sarah Herrlinger, Director of Global Accessibility Policy & Initiatives at Apple; and Anja Thieme, Senior Researcher in Human-Computer Interaction at Microsoft. The following is a summary of the session.

Ethics Washing and Ethics Shopping

Washing hanging on a rope between buildings

Reema Patel began the discussion by highlighting the role of the (relatively new) Ada Lovelace Institute and its mission "to ensure data and AI work for people and society". Reema was keen to stress that the term 'ethics' is itself contested, and that it is important to avoid what she referred to as 'ethics washing' or 'ethics shopping': arbitrarily picking ethical principles from a list. Instead, she proposed a focus on being responsive to diverse voices and perspectives.

Bias and discrimination

The panel highlighted that the groups currently developing AI and data-driven systems are not typically representative of the general population. Although this has attracted widespread criticism in terms of gender diversity, there is also little representation of disability, or of socio-economic diversity, in these development teams. It was suggested that there should perhaps be a goal of over-representation of marginalised individuals, whether by race, gender, religion or disability, in order to ensure everyone has a voice. For the people driving accessibility forward, it is about getting into the conversation and remaining present in it. The accessibility team at Apple, for example, engaged at the point at which Face ID was being discussed and was therefore able to help shape the algorithm to ensure all types of face were considered: people with prosthetics, people who may have their eyes closed all the time; a diverse set of users and use cases that may not have been considered without this input.

Public failures of AI, such as the tragic incident in which a self-driving car hit and killed a woman pushing a bike across a junction at night, shine a spotlight on the risks of machine decision-making in safety-critical situations, and on the additional concerns that arise when a diversity of voices is missing from the development process. Following the incident, one of the initial scenarios examined was that the car had not detected the woman as a person because the presence of the bike's wheels confused it. Although this theory was discounted, it served to highlight concerns about the safety of wheelchair users. That concern was quickly addressed, but it was, very rightly, a question that needed to be asked and one that required a satisfactorily reassuring answer.

The importance of engagement and consultation

Speaking from personal experience, Reema explained that, as someone with a hearing impairment, there have been moments when she felt involved and engaged in shaping her own support; the items that still lie unused on the shelf are the things other people told her she would need.

Microsoft's SeeingAI was highlighted as an instance of a project driven by the people who knew, first-hand, what it needed to achieve: programmers who are blind were able to bring their lived experience and contribute to resolving everyday barriers. The resulting product therefore has huge value and utility for blind users (and considerable additional value beyond).

So why not regulate?

When it comes to the regulation of AI, it was highlighted that AI development is an iterative process, whereas regulations tend to be written once and stick for a long time. It was suggested that although regulation may play a role in articulating a standard for people designing and developing technology, over time it becomes less valuable. AI may be in its infancy, but it is developing incredibly quickly; regulation may serve an initial function, but it is more important that, moving forwards, the companies developing AI take responsibility and uphold their own core values.

Do we trust companies to do the right thing?

Earlier this year the Ada Lovelace Institute carried out a survey of 4,109 adults concerning their attitudes towards facial recognition. The report, 'Beyond face value: public attitudes to facial recognition technology', highlighted a number of key areas of concern amongst the participants, with the majority expressing a lack of trust in companies to use the technology ethically.

Google's Project Euphonia, which also featured at TechShare Pro, uses Machine Learning to create speech models for people with non-typical speech patterns, improving the diversity of voices understood by speech assistants, translation services and so on.

The project invites open participation: you can submit your own voice profile to Google's Project Euphonia if you consider your voice 'difficult for strangers to understand'. The panel highlighted Project Euphonia as an example of the benefit of creating clarity and transparency around data use; it also helps to address the misconception that every company developing Machine Learning believes more data is always better.

So what if we are doing things for the right reasons?

An interesting question that surfaced during the discussion was whether Machine Learning tools should be used to identify individuals who are using Assistive Technology (AT), since this could enable an AT user to have additional technology or adjustments signposted to them. Anja Thieme described the ethical concern as "...functionality that is the trade off between desire for a person to have access to and the [privacy] cost associated with this". On the one hand, if you can determine that someone has a need for AT they were unaware of, it would be helpful to disclose this to them in real time; for example, someone who increases the font size on documents could be asked, "Did you know you could have a magnifier in your OS?". On the other hand, the same functionality creates the potential for people to be 'tracked' and identified, and for a person to be targeted because of their disability. The goal is therefore to ensure that ML can understand what a person needs, but in a private way that lets them learn about any additional technology available and gives them the opportunity to have a better experience.
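To make the idea concrete, here is a minimal, entirely hypothetical sketch of the kind of signposting described above: a device privately notices a pattern of accessibility-related adjustments and offers a suggestion locally, without sending anything off the device. The function name, event names and threshold are all invented for illustration; no real product or API is being described.

```python
# Hypothetical sketch: privately noticing repeated accessibility-related
# adjustments and offering a suggestion. All names here are invented.

def suggest_adjustment(events, threshold=3):
    """Return a suggestion if the user has repeatedly enlarged text or zoomed.

    `events` is a list of UI actions recorded locally on the device;
    nothing in this sketch is transmitted or stored centrally.
    """
    zoom_actions = [e for e in events if e in ("increase_font", "zoom_in")]
    if len(zoom_actions) >= threshold:
        return "Did you know you could have a magnifier in your OS?"
    return None

# Example: three enlargement actions trigger the suggestion;
# unrelated actions do not.
print(suggest_adjustment(["increase_font", "zoom_in", "increase_font"]))
print(suggest_adjustment(["click", "scroll"]))
```

The key design point from the panel discussion is that the pattern detection and the suggestion both happen on the user's own device; the 'tracking' risk only arises if such inferences are shared beyond it.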

There is also the concern that if a disabled person uses technology more frequently, or is more likely to use technology as a means of interfacing with public services, the fact that they do so may itself be valuable data to the provider of the service. This raises further ethical concerns, something explored in more depth by Virginia Eubanks in her book 'Automating Inequality'.

But I don't want my data sent anywhere!

Many of the big tech companies are addressing privacy concerns by moving the data processing required for machine learning on-device, so that your smartphone is where all the hard work takes place, rather than the data being sent to 'the cloud' for processing. In effect, this means that what your device 'knows' about you allows it to perform helpful functions without that information ever needing to be shared anywhere (or with anyone) else. However, this brings its own ethical conundrums: processing data on-device requires powerful smartphones, powerful smartphones carry larger price tags, and privacy therefore risks becoming accessible only to those who can afford to pay the premium.
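The on-device principle can be sketched in a few lines. This is a toy illustration, not any vendor's actual implementation: a stand-in 'model' processes raw input locally, and only the derived result, never the raw data, survives to be used or shared.

```python
# Toy illustration of on-device processing: the raw input is consumed
# locally and only the derived result leaves this function.

def run_on_device(raw_input, model):
    """Apply a local model to raw input; return only the derived result."""
    result = model(raw_input)
    # raw_input goes out of scope here; callers only ever see `result`
    return result

def toy_model(text):
    # Stand-in for a local ML model: derives a simple statistic
    # from the text without ever transmitting the text itself.
    return {"word_count": len(text.split())}

# Example: the caller learns a word count, not the private text.
summary = run_on_device("hello private world", toy_model)
print(summary)
```

The contrast with a cloud architecture is that here there is no network call at all; the privacy guarantee comes from the data never leaving the device, at the cost of needing local compute power, which is exactly the affordability trade-off the panel raised.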

What about other people?

Ethical considerations surrounding ML and data-driven systems often focus on the rights of the individual user of the technology. It was highlighted that things are not always this clear-cut, because we operate within social contracts: shared understandings with other people about what we consider acceptable, and where. A super-powered hearing aid, for example, able to block out certain sounds and tune in to others, would be of tremendous value to a person with a hearing impairment; but to be useful it would need to process the conversations of others, conversations with third parties that may be sensitive and/or private.

Google Glass was highlighted as a fairly public example of a product failing because of a mismatch with the social contract. Like the imagined hearing aid above, it did not impinge on the privacy of the primary user but on that of the people around them. Typically, people objected to the thought of being recorded by the tiny inbuilt camera, but other studies suggested the real mismatch was one of data availability: "My glasses are giving me information about you that you are unable to see." This highlights the need to give others the ability to opt in or opt out; in a world of loosely defined public and private spaces, the panel recognised that this will present significant challenges.

Apps that support people with visual impairments, such as Be My Eyes, SeeingAI and Aira, perform similar functions to Google Glass (recording images and providing information about the environment using humans or machine learning algorithms), yet they have not received the same negative backlash experienced by Glass owners. Christopher Patnoe said he hopes that accessible use cases might lead the charge in using sensors to provide contextual awareness and so improve algorithms generally.

How we react, and what we consider invasive or an erosion of privacy, changes over time. The panel highlighted that designing and implementing AI responsibly will mean that the sorts of things we find acceptable change accordingly: show yourselves to be trustworthy and to value our data, and we will be less resistant when you tell us what you'd like to develop next. It was also suggested that making privacy a core value at the forefront of any development forces developers to innovate.

So what about the future?

Concluding the discussion, the panellists were asked to give predictions as to where the next big benefit of AI was likely to be felt or where they consider AI to have the potential to remove accessibility barriers.

Sarah Herrlinger stated that she was not sure if there was likely to be a 'single-something' but rather an increased use of AI in a more general sense. 

Reema Patel highlighted some interesting clusters, one of which is language/translation/learning. As someone who is hard of hearing, she found learning languages very difficult and can therefore see a gap for a 'Duolingo equivalent'. She also predicts the development of truly personalised services; although she recognises the risks associated with this (as had been discussed), she also sees the potential and feels it is important to acknowledge it.

Christopher Patnoe considers computer vision to be the area for exciting development, specifically around contextual awareness. Currently, computer vision struggles to distinguish between a door and a refrigerator; it is contextual awareness that will improve this ability and make these systems more useful.

For Anja Thieme, it was the notion of collaboration and the use of AI and ML in a more fluid and dynamic way: if we have these resources, how can they be used in different situations in ways that augment our own capabilities? Christopher Patnoe added that it is important not to do this in a way that is creepy; these systems need to be good enough to be useful, without putting people off using them.


Debates surrounding AI are set to continue. We need to voice opinion, discuss, change minds and have our assumptions, biases and prejudices challenged. Technology companies need to be transparent and we need to hold them accountable, but we also need to be involved. It is an opportunity for companies to develop inclusively, to hire inclusively and ensure the teams working to develop ML and AI are representative of the world in which the AI will exist. We need to ensure that datasets are diverse and the voice of disabled people is heard; 'nothing about us, without us' has arguably never been so critical.

Further reading

TechShare Pro 2019 was hosted by Google and supported by some of the biggest technology names on the planet. The conference offered two days to connect, learn and share with people from all over the world who are building a more accessible and inclusive digital world.

Group shot of attendees at the end of TechShare Pro 2019, standing together on stage smiling at the camera

Panelists and workshop hosts included Apple, Google, the International Association of Accessibility Professionals, Barclays, RNIB, Uber, Disability Rights Advocate (USA), Disability Rights UK, European Disability Forum, Aira, BBC, Sony, Scope, Fraunhofer, Verizon Media, Amazon, Netflix and Channel 4.

Subscribe to The TechShare Procast - your audio guide to the conference, including highlights and interviews with some of the speakers at the event.

Request the livestream from TechShare Pro 2019 to watch recordings of the main conference sessions online.

Join our mailing list to be kept up-to-date with the latest disability and technology news, plus more from TechShare Pro 2019 post-event.