How Stephen Hawking communicates

Prof. Stephen Hawking is one of the most recognisable people on the planet, partly because of his synthesised speech. As well as featuring in the Opening Ceremony of the London 2012 Olympics, he's so famous that he's played himself in The Simpsons four times! But what is the technology behind that voice? AbilityNet's Head of Digital Inclusion Robin Christopherson looks at how Professor Hawking controls his computer, as well as some of the advances on the horizon that may help him continue to push the boundaries of scientific thinking.

Despite his global fame, many people do not realise that Prof. Hawking is the UK's longest-lived individual with motor neurone disease (MND), and he continues to amaze the medical profession with his longevity as much as he does the scientific world with his contributions to cosmology and theoretical physics. His ability to control his communication device, however, has deteriorated over the years - but fear not, so far technology has always kept pace with his needs.

This video shows Prof. Hawking explaining how his technology works and how he quickly builds up phrases to be spoken, whether for everyday conversation, delivering lectures or writing papers.

More recently you may have seen news about how he has had to update his method of interacting with his technology. The ‘Hawking talking with his blinks’ article explains how he has lost the ability to control his tech using a switch pressed with a finger, so a pair of adapted glasses now includes an infra-red sensor that is triggered when he blinks. A deliberate blink (rather than the involuntary ones we all do many times a minute) breaks the infra-red beam for long enough to register as a switch-press, and he’s back in business.
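The key idea in that blink-operated switch - telling a deliberate blink from an involuntary one by how long the beam stays broken - can be sketched in a few lines of code. This is purely an illustration, not Prof. Hawking's actual software, and the 0.4-second threshold is an assumption (involuntary blinks typically last a fraction of that):

```python
# Illustrative sketch: treat a sustained infra-red beam break as a
# deliberate "switch press", while ignoring brief involuntary blinks.

DELIBERATE_BLINK_SECONDS = 0.4  # assumed threshold, not a real spec

def detect_switch_presses(beam_samples, sample_interval=0.05):
    """beam_samples: sequence of booleans, True while the IR beam is broken.

    Returns the number of deliberate switch presses detected.
    """
    presses = 0
    broken_for = 0.0   # how long the beam has currently been broken
    counted = False    # have we already counted this break as a press?
    for broken in beam_samples:
        if broken:
            broken_for += sample_interval
            if broken_for >= DELIBERATE_BLINK_SECONDS and not counted:
                presses += 1   # long enough: register one switch press
                counted = True
        else:
            broken_for = 0.0   # beam restored: reset the timer
            counted = False
    return presses
```

A real system would read the sensor in real time rather than from a list of samples, but the debouncing logic - ignore short breaks, fire once on a long one - would be much the same.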

Wearable tech is everywhere these days and, whilst Prof. Hawking uses it for a very special and important purpose, these additional sensors will undoubtedly find their way into more mainstream wearable devices such as Google Glass. Already the numerous methods of interacting with your phone or wearable tech like Glass are providing convenience for users (issue a voice command if your hands are busy, or have your texts spoken out to you as you drive) and those same options are opening doors for disabled users who are permanently unable to touch or see the screen.

Adding sensors

But what if, like Prof. Hawking, you can't use your voice? Based on his current tech solution it's easy to see how further sensors, such as infra-red switches to take a photo or flick through menus without touching or talking to the device, may well be coming to a gadget near you soon. And we’re already seeing software that tracks eye movement (pausing video playback when you look away from the screen) and gesture control that lets you wave at your TV or tablet to change the channel or flip a page.
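That "pause when you look away" behaviour is, at heart, the same trick as the blink switch: act only on a sustained signal. Here is a hypothetical sketch, with an assumed one-second grace period so a quick glance elsewhere doesn't stop playback:

```python
# Hypothetical attention-aware player: pauses after a sustained look-away,
# resumes as soon as the viewer's gaze returns. Not any vendor's real API.

LOOK_AWAY_SECONDS = 1.0  # assumed grace period before pausing

class AttentionAwarePlayer:
    def __init__(self):
        self.playing = True
        self.away_for = 0.0  # how long the viewer has looked away

    def update(self, eyes_on_screen, dt):
        """Call once per frame with the eye-tracker's verdict and elapsed time."""
        if eyes_on_screen:
            self.away_for = 0.0
            self.playing = True       # resume immediately on return of gaze
        else:
            self.away_for += dt
            if self.away_for >= LOOK_AWAY_SECONDS:
                self.playing = False  # pause after a sustained look-away
        return self.playing
```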

Meanwhile Prof. Stephen Hawking is helping developers with a project that may soon access our thoughts directly. The iBrain interface is one of several commercial projects that promise to cut out the need for any external user interaction whatsoever. Obviously this will have a very significant impact on people with extreme disabilities but, as with all these advances, one could argue that they’re being embraced by mainstream manufacturers and accelerated as a result.

Will they reach the escape velocity required by a particle orbiting a black hole? Only the Prof knows the answer to that one but, thanks to advances such as these, he’ll never be without a voice to tell us.