How AI could become a new, accessible extension of your mind
Robin Christopherson | 13 Feb 2020

Virtual assistants like Alexa and Siri are now part of our everyday lives. It’s as natural to talk to a device and receive a spoken (or on-screen) nugget of info as it is to whip out our phone to check the weather, Google that thing or listen to music. A new project from the clever guys at MIT (Massachusetts Institute of Technology), however, cuts out the need for spoken phrases and responses altogether – that’s so last decade – and, for those who physically can’t speak, this evolution is truly a revolution.
Meet your AI alter ego
Students from MIT have created a prototype device, dubbed AlterEgo, that can recognise the words you silently say to yourself – and then take action based on what it thinks you’re saying. No sound and no lip movement are required. Imagine an interface to your computer, smartphone or virtual assistant that you can seamlessly use wherever you are and however noisy your surroundings – neat. Now imagine that you actually can’t speak (like the very funny and award-winning Lost Voice Guy) – this tech suddenly opens up all the options of natural language commands, text dictation and super-high productivity that are currently available to everyone else.
One of the people we need to thank for this magical new method of human-computer interaction is Arnav Kapur, a master’s student at the MIT Media Lab – the MIT division that focuses on the intersection of people and their tech – and author of the paper that outlined the team’s work to date.
Kapur stresses that the device doesn’t read thoughts or the random, stray words that just happen to pass through your mind. “You’re completely silent, but talking to yourself,” he says. “It’s neither thinking nor speaking. It’s a sweet spot in between, which is voluntary but also private. We capture that.”
Electrodes on the face and jaw pick up otherwise undetectable neuromuscular signals triggered by internal verbalisations. A bone-conduction speaker then plays the AI’s response directly into your head, leaving your ears free. Let’s digest that last bit for a second: not only can the device pick up unspoken phrases, but the response from your computer, phone or virtual assistant of choice can be silently piped right into your head without you even appearing to wear headphones or earbuds of any kind. This truly is the first ‘internal’ AI.
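To make the recognition side a little more concrete, here’s a minimal, heavily simplified sketch in Python: take short windows of multi-channel electrode signal and classify them into a tiny vocabulary of words. Everything in it – the channel count, window length, features and classifier – is an assumption for illustration only; the MIT team’s actual pipeline is described in their paper and is far more sophisticated.

```python
# Illustrative sketch only: classify windows of face/jaw electrode signal
# into a tiny word vocabulary. The channel count, window length, features
# and classifier are assumptions, not AlterEgo's published pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

CHANNELS = 7   # assumed number of electrodes
WINDOW = 250   # assumed samples per word-length window

def features(window: np.ndarray) -> np.ndarray:
    """Crude per-channel features (mean absolute value and variance),
    a common starting point for muscle-signal classification."""
    return np.concatenate([np.abs(window).mean(axis=1), window.var(axis=1)])

# Fake data standing in for recorded, labelled silent-speech windows.
rng = np.random.default_rng(0)
X = np.stack([features(rng.normal(size=(CHANNELS, WINDOW))) for _ in range(100)])
y = rng.integers(0, 4, size=100)  # word ids, e.g. "play", "pause", "up", "down"

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
new_window = rng.normal(size=(CHANNELS, WINDOW))
print("predicted word id:", clf.predict(features(new_window).reshape(1, -1))[0])
```

The shape of the problem is the point here – signal windows in, words out – rather than the particular model, which in a real device would be a far more capable learned one.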
AlterEgo on the TED stage
Enough talking, silent or otherwise – let’s hear directly from the man himself, speaking on the TED stage in June 2019.
Pretty amazing: a swift, silent user interface for all your computing needs.
My take on this transformative tech
The device isn’t yet on the market, and it’s unclear just how the MIT boffins (or, more likely, their venture-capital investors) will choose to position this tech. One could readily imagine that a few tweaks would enable it to present itself as a standard Bluetooth headset – it has the usual elements: a microphone and a speaker. Bluetooth speakers and headsets are two-a-penny, so this would be supremely simple to do if they chose to.
Instead of the typical physical buttons for play/pause, volume up/down, skip ahead/back and invoking Siri, say, it could readily employ silently verbalised commands. The device is already processing your silently spoken commands to send as text to your computer or phone, so why not set a few phrases aside for the common operations above and, instead of sending the text, send the equivalent standard Bluetooth device commands? Easy.
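As a rough illustration of that routing idea, here’s a short Python sketch. The reserved phrases, the command names and the two send_* helpers are all hypothetical – nothing below is a real Bluetooth API – but it shows the simple dispatch being described: reserved phrases become standard device commands, and everything else passes through as dictated text.

```python
# Hypothetical sketch of routing silently verbalised phrases either to
# reserved media-control commands or through as ordinary dictated text.
# Command names and send_* helpers are stand-ins, not a real Bluetooth API.

RESERVED_COMMANDS = {
    "play": "AVRCP_PLAY",
    "pause": "AVRCP_PAUSE",
    "volume up": "AVRCP_VOLUME_UP",
    "volume down": "AVRCP_VOLUME_DOWN",
    "next track": "AVRCP_FORWARD",
    "previous track": "AVRCP_BACKWARD",
    "hey siri": "HFP_VOICE_RECOGNITION",  # wake the phone's assistant
}

def send_bluetooth_command(command: str) -> None:
    print(f"[BT] {command}")    # stand-in for a real AVRCP/HFP stack

def send_text_to_device(text: str) -> None:
    print(f"[TEXT] {text}")     # stand-in for the dictation passthrough

def route_phrase(phrase: str) -> None:
    """Reserved phrases become Bluetooth control commands; everything
    else is forwarded to the phone or computer as dictated text."""
    command = RESERVED_COMMANDS.get(phrase.lower().strip())
    if command is not None:
        send_bluetooth_command(command)
    else:
        send_text_to_device(phrase)

route_phrase("volume up")        # -> [BT] AVRCP_VOLUME_UP
route_phrase("Meet me at noon")  # -> [TEXT] Meet me at noon
```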
Then you could use AlterEgo with Siri or Google Assistant on your phone (and control media and so on, of course) just as you currently can with any Bluetooth device. Amazon certainly wouldn’t want to be left out of the party, so Alexa integration would surely be close behind.
Whether the feedback from your phone in response to your unspoken commands is purely audio, or whether (as with Siri or Google Assistant) something appears on your screen too, is again an open question for the implementation.
There’s such a strong argument for a completely hands-free, eyes-free use case, however, that one would imagine that would be the primary, futuristic sci-fi application of AlterEgo we’d all want out of the box.
The power of voice-first technology
I’d like to wrap up this brief post with a few links to some of the more recent articles I’ve written on the power and potential of voice technologies and the coming age of ambient computing. There is huge potential to help those with disabilities, or with less support or confidence, to engage more fully with the digital world and be better connected with family and friends. Check them out – and if you want more, there’s plenty in my post feed:
- Amazon helping the blind with new Echo 'Show and tell' feature
- How the rise of voice is revolutionising our digital lives part 1
- Five ways to stay well with Alexa
- Finally, full hands-free mobile and landline calling from your Echo
- A new simple way to Alexify everything heralds an inclusive future for all
- Hear, hear! Here's to the woman behind making Alexa inclusive