Thought-controlled robotics for people who are paralysed

For people who are completely paralysed, a new non-invasive technology that allows them to control robots with thoughts alone provides new freedom of movement without the dangers of brain surgery.

Mind-control over matter

People who are paralysed or have neurodegenerative conditions, and who cannot move their limbs to perform everyday tasks, may soon have a far simpler way to influence their environment: thought-controlled robotic helpers.

Ground-breaking research at the University of Minnesota has produced a new generation of thought-controlled interface that allows people who are paralysed to control robotic limbs with their minds – as we see in the video below.

The key to unlocking locked-in syndrome

People experiencing the state commonly known as ‘locked-in syndrome’, who are unable to control any part of their bodies, can face huge challenges in communicating with the outside world.

Before the advent of MRI scans it was assumed that these individuals had no brain activity whatsoever. In fact, quite the opposite is the case: they are fully conscious and aware of their surroundings, but unable to move a muscle or make a sound.

For these individuals, the ability to control robotic limbs or a cursor on a screen would be life-changing indeed.

Research into thought controlled robotics

Research subjects at the University of Minnesota, fitted with a specialised non-invasive brain cap, were able to accurately move a robotic arm just by imagining moving their own arms. This intuitive and direct way of controlling the robot means that little training or effort is required, and the process feels natural.

“This is the first time in the world that people can operate a robotic arm to reach and grasp objects in a complex 3D environment using only their thoughts without a brain implant,” said Bin He, a University of Minnesota biomedical engineering professor and lead researcher on the study. “Just by imagining moving their arms, they were able to move the robotic arm.”

Further work at ‘translating’ direct thoughts in this way will result in effective and straightforward methods for people who are unable to control their bodies to communicate and interact with the world around them.

“Three years ago, we weren’t sure if moving a more complex robotic arm to grasp and move objects using this brain-computer interface technology could even be achieved,” Bin He says. “We’re happily surprised that it worked with a high success rate and in a group of people.”

Saying goodbye to brain surgery

For several years, technology such as the iBrain has existed that enables people who are completely paralysed to control robotic limbs – as we see in action in the video clip below.

However, the surgery required to implant the connections directly into the patient’s brain is expensive, not easily altered or upgraded, and not without risk. The time is approaching when such procedures will be superseded by safe, wearable devices that interface with the robots or computers on hand to help.

A future of thought-driven robots

While this may sound a somewhat scary prospect, for people with disabilities for whom using their limbs and walking unaided is currently impossible, it is a fantastic vision of a more mobile future.

We already have bionic exoskeleton suits that enable people such as Claire Lomas to walk the London marathon - as we see in this video.

For Claire Lomas, as well as for many others with even greater mobility challenges, the next generation of fully thought-controlled bionic suits or robotic helpers around the house holds the promise of ever more autonomy for people with paralysis.

Read the research

More information on this research can be found on the Nature Scientific Reports website.

 

Got a life changing tech idea? Nominations are open for the AbilityNet Tech4Good Awards 2017

Nominations are now open for the AbilityNet Tech4Good Awards 2017 – an annual showcase for the amazing people who use digital technology to make the world a better place. This year will also be the first time that the awards include an Africa Award, sponsored by Comic Relief.

Organised by AbilityNet and sponsored by BT, these are the only awards to highlight the wealth of charities, businesses and volunteers across the UK that use the power of technology to improve the lives of others.

The AbilityNet Tech4Good Awards 2017 categories are:

AbilityNet Accessibility Award

BT Connected Society

BT Young Pioneer

Comic Relief Tech4Good for Africa Award

Community Impact

Digital Health Award

Digital Skills Award

Tech Volunteer of the Year

Anna Easton, Director Sustainable Business at BT said at the launch this morning, held at the BT Tower, London: "Over the last six years we've been amazed at how innovative people can be, and the incredible power of communications to make a better world".

The new Africa award reflects the growing number of entries in past years from tech projects developed for Africa, including last year's BT Young Pioneer Award winners eWATERPay - a system which checks when water pumps installed in African villages need maintenance. 

Tech4Good diary dates

2017 entries are open until 5pm on 8 May and anyone can nominate themselves or someone else in any of the categories.

There will be a networking event for finalists on 13 June and the winners will be announced at a glittering ceremony hosted by BT on 11 July at BT Centre, London.

Judges include business people, charities, academics and journalists and others with specialist knowledge of how technology is used for social good.

Mark Walker, organiser of the awards and AbilityNet’s Head of Marketing and Communications, said: "These awards celebrate winners' success and share their stories to inspire others, while recognising entrants' hard work and creativity. Winners have included everything from a kit that helps people with asthma to understand, diagnose and medicate for the condition, to an app that allows sex workers to alert one another to dangerous perpetrators of crime."

To enter the awards visit: www.tech4goodawards.com

To see previous winners: www.tech4goodawards.com/winners


 

Lip-reading with Google’s DeepMind AI: what it means for disabled people, live subtitling and espionage!

Lip-reading is difficult. Many deaf people can do it, but there are situations when it is a struggle... but now, Artificial Intelligence like Google's DeepMind is getting its virtual teeth into the challenge - and doing an even better job than humans. So what does this mean for disabled people, TV subtitling and the shady world of cloak and dagger espionage...?

The biggest TV binge-fest in history

Researchers at Oxford University used Google's DeepMind to watch more than 5,000 hours of TV including shows such as Newsnight, BBC Breakfast and Question Time for the 'Lip Reading Sentences in the Wild' study. The AI analysed a total of 118,000 sentences – a much larger sample than in previous research such as the LipNet study, which contained only 51 unique words.

Man eating popcorn in front of TV looking surprised

The sample used in this DeepMind study comprised no fewer than 17,500 unique words, which made it a significantly harder challenge, but ultimately resulted in a much more accurate algorithm.

Tweaking the timing...

What added to the task was the fact that often the video and audio in the recordings were out of sync by up to a second.

To prepare the samples for machine learning, DeepMind first had to assume that the majority of clips were in sync, watch them all to learn a basic relationship between mouth shapes and sounds, and then, using that knowledge, rewatch every clip and correct the audio wherever the lips were out of sync with the speech.

It was only then that it was able to go through all 5,000 hours once more to do the deep analysis of learning exactly which words related to which mouth shapes and movements.
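This is not DeepMind's actual method, but the sync-correction idea can be illustrated with a toy sketch: treat the video and audio as two feature tracks (say, per-frame mouth openness and per-frame loudness), slide one against the other, and keep the lag with the strongest correlation. All names and data here are invented for illustration:

```javascript
// Toy illustration of audio/video sync correction: slide track b against
// track a and pick the lag where the overlapping samples agree most.
function correlationAtLag(a, b, lag) {
  // Average product of overlapping samples when b is shifted by `lag`.
  let sum = 0, n = 0;
  for (let i = 0; i < a.length; i++) {
    const j = i + lag;
    if (j >= 0 && j < b.length) { sum += a[i] * b[j]; n++; }
  }
  return n > 0 ? sum / n : -Infinity;
}

function estimateOffset(a, b, maxLag) {
  // Return the lag in [-maxLag, maxLag] that best aligns b to a.
  let best = 0, bestScore = -Infinity;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    const score = correlationAtLag(a, b, lag);
    if (score > bestScore) { bestScore = score; best = lag; }
  }
  return best;
}
```

Once the estimated offset is known, the audio track can simply be shifted by that amount before the main lip-reading pass begins.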

A deeply impressive result – not just lip-service

The result of this research and development was a system that can interpret human speech across a wide range of speakers found in a variety of lighting and filming environments.

The system successfully deciphered phrases such as “We know there will be hundreds of journalists here as well” and “According to the latest figures from the Office of National Statistics”.

Here is an example of a clip without subtitles: 

A close up image of a woman speaking clearly

And now the same clip with subtitles created by the DeepMind algorithm:

Close up of woman speaking with Google DeepMind live subtitling underneath

DeepMind significantly outperformed a professional lip-reader and all other automatic systems. Given 200 randomly selected clips from the data set to decipher, the professional translated just 12.4% of words without errors. The AI correctly translated 46.8% - and many of its mistakes were very small, such as missing an 's' off the end of a word.

So what do technological lip-reading advancements mean for disabled people?

Going mobile with DeepMind’s lip-reading smarts

For people with hearing loss the benefits of such tech are obvious. There's long been voice recognition, and this can aid with the real-time translation of speech into text – as we see in this video of someone using Google Glass to subtitle a conversation with a colleague.

This approach, however, relies on someone being able to speak clearly into a microphone (in this case a linked smartphone) but what about a noisy office or hallway? In such a situation, the ability to use the head-mounted camera (which is unaffected by noise or the proximity of the speaker) combined with lip-reading software would give a similar result without the restrictions.

Could DeepMind's lip reading skills also help blind people and those with sight loss?

As a blind person I’d also find such a set-up extremely useful as, for me, hearing people in a noisy environment is twice as hard as it is for someone who can see the speaker’s lips. Sighted, hearing people lip-read, however subconsciously that is. If you fit this description and don’t believe me, next time you are in that situation, try closing your eyes and see if you can still hear the person next to you.

The boon of Google Glass synthetic speech at a party

It’s assumed that a blind person’s hearing must be twice as sharp and, while we do certainly pay more attention to the sounds around us, this inability to hear people in a noisy place when everyone else can is an ironic twist to being blind. So Google Glass, feeding me a clearly-spoken synthetic speech interpretation through the bone-conducting speaker behind my ear, would be a boon at noisy parties where I know I’ve got several hours of hard listening ahead.

brightly coloured lips cartoon style

A new world of real-time subtitling on offer

While many programmes are currently subtitled, the broad range of live television doesn’t allow for pre-written subtitling. Instead, professional transcribers have to listen and watch and rapidly transcribe on the fly (using a combination of voice recognition and stenographer-style keyboards), which is costly and often means programmes are neglected. 

This new advance will make real-time subtitling much more efficient – helping deaf people but also aiding everyone watching TV in a noisy office, café or bar. And if you’re a spy, you too can of course benefit from this remote and clandestine ability to understand what people are discussing!

What does new lip-reading tech mean for YouTube videos?

This automated approach can also help to subtitle the thousands of hours of video uploaded to YouTube every day – and help sync those subtitles to the speech too. Let’s look forward to a time when every video’s spoken content is readable, and thus searchable, by everyone who would find either option helpful.

6 top tips for a dementia-friendly website

An estimated 800,000 people in the UK have dementia. Most are over 65. This age group is now increasingly likely to be online, for a whole host of reasons - including staying in contact with friends and family and keeping up-to-date with current affairs, which in turn helps reduce isolation. Online shopping, banking and filling in government forms are also important services for older people. It's therefore essential that websites are accessible for people experiencing memory, organisation and orientation issues.

Lilianna Williams, an accessibility and usability consultant for AbilityNet, works with big-name companies to ensure their sites meet legal inclusivity requirements and are accessible to bigger audiences. Here she offers some of her top tips on designing a website for people with dementia.


Some simple ways you can make your website more accessible to people with dementia:


1 Links and buttons

Make sure links and buttons clearly indicate their purpose – that is, they should make sense in their own right, not just in conjunction with surrounding text. For example, rather than a link or button saying 'click here for more information', it should say 'click here for more information about speaking to the bank' or 'speak to the bank here'.
 

2 Make essential navigation items obvious

Important parts of a page or site – e.g. the Home button, the search box and a site map – should be very easy to locate consistently across a website.


3 Don't split one piece of information over more than one page

Splitting forms and information across several pages can lead to disorientation. Put the whole form or text on one page so a visitor can easily scroll up and down to see what they've already filled in / read. 
 

4 Help orientation for people with dementia by using breadcrumb links

Use 'breadcrumb' links (the ones with the > arrows) in an obviously visible place on the page, so someone can be reminded of the route they've taken to get to a page and can see which section they're currently in – e.g. Current account > Outgoings > Today.
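As a minimal illustration of this tip (the function and labels map are invented, and a real site would mark the trail up as navigation rather than plain text), a breadcrumb trail can often be derived directly from the page URL:

```javascript
// Hypothetical sketch: build a visible "you are here" trail from a URL
// path, with an optional map of URL segments to human-readable labels.
function breadcrumbFromPath(path, labels = {}) {
  const segments = path.split("/").filter(Boolean);
  return segments
    .map(s => labels[s] || s.replace(/-/g, " "))  // fall back to de-dashed segment
    .join(" > ");
}
```

On a real site each crumb before the last would be a link back to that section, helping visitors retrace their steps.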
 

5 Fonts and aesthetics

Use a consistent font to minimise distractions and confusion, along with plain backgrounds and well-contrasted colours. Relevant photos on the page can be very useful for comprehension, allowing a user to understand content without disorientation.
 

6 Words and text

Use short sentences and avoid abbreviations and jargon.   

 

How movie-watching marathons will help machines improve lives of people with hearing loss

Siri and the Amazon Echo are already pretty good at understanding speech, but new machine-learning techniques mean our devices will be much smarter at recognising other everyday sounds. How will they do this? By binge-watching videos.

Machine learning advances match sounds to images

Applications for facial technology are entering the mainstream and offer huge benefits to disabled people

Thanks to the wealth of labelled data available online, computers are already pretty smart when it comes to recognising pictures. Both Google and Facebook can distinguish many hundreds of individual objects in images, including specific people, pets, cars and foods. They use this capability to ‘auto-tag’ your photos and videos.

As for sounds, the likes of Siri and the Amazon Echo might be great at understanding language and many thousands of spoken commands. But, when it comes to other everyday sounds such as a doorbell, police siren or dog’s bark, the tech isn’t nearly so advanced.

Using software to recognise laughter

That’s soon set to change. A team at the Massachusetts Institute of Technology (MIT) has been using advances in machine-learning to detect what’s happening in a video to match it with the associated sound. So, when someone laughs for example, facial recognition algorithms spot the expression and ‘learn’ that the accompanying sound is laughter.

Yusuf Aytar, who was part of the research team at MIT, told New Scientist: "We thought we can actually transfer this visual knowledge that’s been learned by machines to another domain where we don’t [currently] have any data, but [where] we do have this natural synchronisation between images and sounds.”

You might think that we should already be able to teach computers to recognise someone laughing. But, just as it took a huge number of pet pics uploaded to Facebook feeds to teach the software to distinguish a boxer from a bulldog, it takes hundreds, or even thousands, of samples of chuckles and guffaws brought together into a massive data set for it to become proficient at recognising laughter. The computer has to deal with many types of laughter across a wide variety of aural environments.

Separating speech from surrounding sounds

Our busy lives require us to check our phones in all sorts of environments

When the software can easily recognise sounds, it can also more effectively ignore them. This will really come in handy when virtual assistants like Siri or the Echo are trying to understand speech when there are a lot of other noises going on.

Like me, you might have tried to talk to your phone on a busy street and almost ended up sending someone a garbled text or setting a timer for four and a half minutes.

As a result of this machine-learning our virtual assistants should more effectively ‘tune out’ that noise - so we’ll see a rise in recognition and a fall in frustration.

Sound support for deaf people

For people who are deaf or have hearing loss, the ability to be alerted to everyday noises may be helpful or, in some cases, crucial. Being informed of a fire alarm by urgent vibrations on your smartphone, or of a warning car horn by a cascade of taps from the smartwatch on your wrist, could be a life-saver.

In the not-too-distant future, subtitles on television and online videos may well be generated ‘on the go’ by speech recognition. This will be assisted in no small part by the software’s ability to recognise, and then filter out, noises.

Moreover, the myriad sounds in each movie will be automatically recognisable and flagged on-screen as additional subtitles.

Safe and sound with added security

Soon security systems will be able to listen out for sounds such as breaking glass or splintering wood and automatically alert the owner or agency for a quick response.

Again there are obvious benefits for people with hearing loss. From baby monitors to security systems, smart sound-recognition will help provide vital information in a noisy world.

Making sounds searchable

Using Google, Bing or a virtual assistant such as Siri, we all search for phrases on the internet every day. You may also search for images or video - although what you’re really doing is searching for text labels that have almost invariably been supplied by humans. Soon, images and videos will be routinely scanned for objects, and videos and audio-recordings for sounds. When this happens, you’ll be able to do a search for a specific sound within any audio or video, or across the entire internet.

In the not-too-distant future it will feel completely normal to be able to search for ‘DeLorean lightning’, for example, and be taken to the exact point in Back to the Future where Marty McFly is struck by lightning as the DeLorean hits 88MPH. The search for ‘DeLorean’ matched to the image and ‘lightning’ matched to the sound.
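A hypothetical sketch of how such a search might work once auto-generated labels exist: each video carries time-stamped image and sound labels, and a multi-term query becomes an intersection of label tracks. The data, function names and time window below are invented for illustration, not drawn from any real search API:

```javascript
// Sketch: find the timestamps where every search term appears, in any
// label track (image or sound), within the same time window.
function findMoments(tracks, terms, windowSeconds) {
  // tracks: { image: [{ t, label }], sound: [{ t, label }] }
  const hits = terms.map(term =>
    [...tracks.image, ...tracks.sound]
      .filter(e => e.label === term)
      .map(e => e.t)
  );
  const results = [];
  // Anchor on occurrences of the first term; require all other terms
  // to occur within the window around it.
  for (const t of hits[0] || []) {
    if (hits.every(ts => ts.some(u => Math.abs(u - t) <= windowSeconds))) {
      results.push(t);
    }
  }
  return results;
}
```

So a query like 'DeLorean lightning' would match a moment where the vision labels contain the car and the audio labels contain the thunderclap, at nearly the same timestamp.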

It’s not just people with a hearing impairment who will find sound-recognition beneficial. As a blind person who prefers audio as opposed to video, the idea of a universal sound search seems pretty exciting – although smarter object recognition will undoubtedly help me no end in fathoming the very visual world that is today’s internet.

Regardless of your disability the future’s coming fast - and it sounds like it’s going to be good.

BBC Disability Works explores the world of work for disabled people

According to the World Bank, there are at least 1 billion people who have some sort of disability around the world - and having a disability means you are more likely to be unemployed than the general population. From 20-24 February BBC News will be exploring how disabled people fare in employment and as consumers all over the world. The week of programmes will highlight the work that is going on around the world to help people into work, and some of the ingenious solutions that people use to continue doing the job they love.

disability works BBC story homepage

You can catch up on all the BBC coverage by following the hashtag #disabilityworks. The main day for programmes will be Wednesday 22 February, when programming will run right across the BBC network, including radio, TV and online.

Seven ways AbilityNet can help disabled people in the workplace

AbilityNet can help you or your staff continue to do the job that they love. 

  • A good first step is to go to the Clear Talents at Work website and fill your profile in and answer some easy questions. This will help us get a clearer picture of your situation.
  • Or you can call our free Helpline. Our friendly, knowledgeable staff will discuss any kind of computer problem and do their best to come up with a solution. We’re open Monday to Friday from 9am to 5pm on 0800 269 545.
  • If you are in work your employers have a responsibility to make Reasonable Adjustments to help you do your job.
  • For more details on this have a look at www.abilitynet.org.uk/ctod and www.cleartalentsatwork.com.
  • Arrange a home visit from one of our friendly police-checked ITCanHelp volunteers, who can help with all sorts of technical issues with your home computer, tablet, iPad or smartphone. They can come to your home, or help you over the phone.
  • We have a range of factsheets which talk in detail about technology that might help you. These can be downloaded for free. You might find our factsheets talking about voice recognition and keyboard alternatives particularly useful.
  • My Computer My Way is our free interactive guide to all the accessibility features built into current desktops, laptops, tablets and smartphones.

All my health info accessible through Amazon's Alexa? I’d Echo that.

Communicating with the NHS can often be frustrating, slow and paperwork-heavy. Now, new NHS standards and guidelines on making our health information more accessible could usher in a new era of choice. The NHS is currently reviewing the new NHS Accessible Information Standard (AIS), which it introduced last summer, to assess its impact and ensure that the guidelines are ‘fit for purpose'.

If you prefer to receive your information by text, by email or by chatting with your favourite virtual assistant, come and discuss it at our event later this month. We want you to be part of the process.

Giving NHS patient communications a boost

Many disabled people find communicating with their doctor, dentist and other NHS professionals difficult, frustrating or at times impossible. People have a wide range of different needs when it comes to their preferred format for communication – for some it might be email, text, audio or even Braille. In the past there hasn’t always been the choice that people need.

AbilityNet is bringing together experts for a workshop in central London on Tuesday 28 February to consider the role that digital technology plays in the newly published NHS Accessible Information Standard, which will kick-start a transformation in NHS communications and help people receive information in a format that suits them.

Speaking out for the future of NHS communication

Do you feel that you’ve something to say on shaping the way that people will interact with medical professionals? If so, make sure you get your voice heard and register for this free event.

I’ll be there and speaking about my preferred way of communicating with my GP or dentist – one element of which will undoubtedly involve the option of being able to talk to Alexa - a favourite virtual assistant found inside the Amazon Echo.

Among the proliferation of new digital platforms and channels of communication, the Amazon Echo stands out as being of huge potential benefit to people who want an easy, natural and intuitive way of interacting with the digital world.

Amazon Echo

The helpful healthmate in your kitchen

It's fun and easy to use. It has much of the power of a computer and the internet but without the user actually needing to own a computer or know anything about the internet. To that end, it’s perfect for older or less digitally-confident users.

I’ve recorded many quick demonstrations of just how useful the artificially-intelligent Echo is and how you can use it to tap into a world of information, as well as play amazing games on the Echo, use the Echo to keep fit and get local travel info – to name just a few of the things you can do by voice without needing to touch a thing.

However, while it’s possible to keep on top of upcoming dentist or doctor’s appointments with the help of Alexa, wouldn’t it be infinitely more powerful if you weren’t first required to enter those appointments yourself?

Never miss an appointment again

In a more digitally-inclusive future, where all my health information is available to me in all my preferred places, my medical info would be read to me by Alexa, piped directly from my GP surgery. My upcoming appointments with the doctor, dentist or specialist would be entered into Alexa automatically, and I could always ask her to confirm when those appointments are if I needed reassuring. And of course she would pipe up and remind me, to make sure I never missed a single visit.

And what about adding in the expertise of an automated NHS Direct ‘111’ service into Alexa’s AI armoury? In the natural conversational style that Alexa has, you could discuss your health issues with her, without needing to tell a human those embarrassing details.

If the result of the exchange was a recommendation that you seek medical help, then your doctor can be instantly informed with a simple spoken agreement from you. OK – so this might be a little further off (although all the technology is here today), but for now let’s at least smarten up our communications with those human professionals who have our health at heart.

A thank you letter from a happy client: "Wonderful AbilityNet tech volunteer sorted my Windows update and streamlined my computer systems"

A client of AbilityNet's ITCanHelp service (for older and disabled people to get free support with their computers and technology systems) wrote this lovely letter to thank volunteer Colin Hill who's helped her with Windows update issues, back ups, de-junking and more. We thought we'd share the love. 

January 2017

AbilityNet volunteer Colin Hill

Dear AbilityNet / ITCanHelp

Please accept my most sincere gratitude for your kind support. Colin (pictured on right) has been marvellous. He came to my home and had a look at my computer, as I had issues with a Windows update and needed some help – he was able to sort out the problems for me.

He also installed an application to stop too much junk building up on my PC, so it performs faster – and he showed me how to use this. He also synchronised my technology so that my larger and smaller computers work together.

Colin is now working remotely, connected into my computer from his computer, to fix a few more things, and we are getting on top of it all.

He has been terrific. I don’t have words to describe all his expertise and how wonderful he is at assisting me. Everything is now backed up for the first time and he's shown me how to do backups myself.

I can highly recommend AbilityNet as a great organisation to all disabled people. They have been magnificent, right from the person who registers your phone call to the people like Mr Hill and previously Karthik, to whom I give my most heartfelt, deepest thanks.

I also wanted to take the opportunity to say Happy New Year.

Thank you, Maria
 

Do you need tech help like Maria? Click here for details of our ITCanHelp volunteer tech service.

Read the story: My 20 years of service as AbilityNet volunteer.

Free workshop to explore how digital tech can help make NHS more accessible for disabled people

Many of the estimated 12 million disabled people in the UK find communicating with large organisations like the NHS challenging at best, frustrating or even impossible. The NHS Accessible Information Standard (AIS) was introduced to make information more accessible, and from 1 August 2016 it became a legal requirement for all NHS and social care providers.

  • 11 am – 3 pm, Tuesday 28 February 2017
  • British Computer Society (BCS), 5 Southampton St, London WC2E 7H
  • Book your free place now

The Accessible Information Standard is a legal requirement for NHS health and social care providers. The AIS aims to ensure disabled people have a better experience when accessing information from the NHS and adult social care providers, and that they are provided with support to help them communicate effectively.

NHS England is currently reviewing the implementation of the AIS and AbilityNet is hosting a free workshop in association with the British Assistive Technology Association (BATA) to consider the role that digital technology plays in the AIS.

Free interactive event in London

This interactive half-day event takes place from 11am to 3pm on Tuesday 28 February at the British Computer Society’s London office. Sponsored by Panlogic Ltd, it will bring together disability patient groups, IT professionals, accessibility experts and NHS and social care providers to discuss:

  • Where digital accessibility fits within patient communications
  • The results of disabled user-testing on GP booking systems, and the lessons learned
  • Disabled people’s first-hand experiences, to help identify best practice for the design and delivery of digital communications that meet the AIS.

AbilityNet and BATA will produce a White Paper report with the output from the workshop to help inform NHS England’s AIS review.

Discuss the issues with the experts

The event will focus on discussion and information-sharing by the people attending. In the final session a panel of expert speakers will review the issues raised, such as how well the AIS has been implemented, how digital tech can help deliver its goals and the accessibility issues to be considered. Expert speakers scheduled to take part include:

  • Sarah Marsay, Public Engagement Account Manager, NHS England
  • Hugh Huddy, Policy & Campaigns Manager, RNIB
  • Andrew McCracken, Head of Communications, National Voices
  • Antony Ruck, Chair of BATA
  • Nigel Lewis, CEO, AbilityNet
  • William Makower, MD, Panlogic ltd

Antony Ruck, Chair of BATA, said:

“The AIS is critical for making sure NHS and social care providers can communicate effectively with disabled people, and vice versa. The review NHS England is currently undertaking of the AIS is vital to make sure it is fit for purpose, to understand how it works in practice, and to identify what the key challenges are.

“This workshop will not only help those who attend to get to grips with all the implications, but it will also help to inform NHS England’s review through the White Paper that will be produced from the output of the workshop.”

Numbers are limited

Please book your place now using our online booking form.

WEBINAR Designing Accessible Carousels, 23 March 2017

Carousels are often found on the home page of a website and are commonly referred to as “slide shows” or “sliders”. They display a series of images one at a time as a way of highlighting specific items within the site – for example, a series of news headlines. This webinar will identify the accessibility issues which need to be addressed when designing carousels, and how to deal with them.

As well as being image-based, many carousels use animations to move from slide to slide, which can be distracting for some users. They may also advance automatically so fast that their content is hard or impossible to grasp, which is why every carousel should have a function to pause the animation. Carousels can also create traps for keyboard users, who may be unable to operate the carousel, or may get stuck inside it with no way to leave and read the rest of the page.
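The pause requirement can be sketched as a tiny, framework-free state object (all names here are invented for illustration; real DOM wiring, focus management and announcements to assistive technology are only indicated in comments):

```javascript
// Minimal sketch of accessible carousel behaviour: auto-advance that any
// user can pause, modelled as plain state so the logic is testable
// without a browser.
function createCarousel(slideCount) {
  return {
    index: 0,
    paused: false,
    // Called by a timer, e.g. setInterval(() => carousel.tick(), 5000).
    tick() {
      if (!this.paused) this.index = (this.index + 1) % slideCount;
    },
    // Wire pause() to a visible "Pause" button AND to keyboard focus
    // entering the carousel, so keyboard users never chase a moving slide.
    pause() { this.paused = true; },
    resume() { this.paused = false; },
    // Manual controls remain usable while paused.
    next() { this.index = (this.index + 1) % slideCount; },
    prev() { this.index = (this.index - 1 + slideCount) % slideCount; },
  };
}
```

Keeping the behaviour in one small object like this also makes it straightforward to verify that pausing really stops the automatic movement.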

Although inaccessible carousels can be a major obstacle for many website visitors, a well-designed carousel can provide more effective access for many users, including:

  • People using keyboard navigation and voice input software.
  • People using screen readers.
  • People who are distracted by movement.
  • People who need more time to read.

This webinar will be delivered by one of AbilityNet's most experienced consultants and will highlight the accessibility considerations for carousels, including structure, functionality and user controls.

Register for the webinar now

A captioned recording of the webinar will be shared with everyone who registers.
