Context-Sensitive AI Improves Lives of People with Motor Disabilities

Scientists have used artificial intelligence to narrow the “communication gap” for nonverbal people with motor disabilities who depend on computers to converse with others.

Image Credit: University of Cambridge.

The research group from the University of Cambridge and the University of Dundee has designed a new context-aware technique that narrows this communication gap by reducing the number of keystrokes a person must type to converse by 50%–96%.

The system has been developed specifically for nonverbal people and draws on a range of context “clues”, such as the user’s location, the time of day, or the identity of the user’s speaking partner, to help suggest sentences that are most relevant to the user.

In general, nonverbal people with motor disabilities use a computer with speech output to converse with others. However, even for users without a physical disability that affects typing, such communication aids are error-prone and too slow for useful conversation: typical typing rates are 5 to 20 words per minute, whereas a typical speaking rate is in the range of 100 to 140 words per minute.

This difference in communication rates is referred to as the communication gap. The gap is typically between 80 and 135 words per minute and affects the quality of everyday interactions for people who rely on computers to communicate.

Per Ola Kristensson, Study Lead Author, Department of Engineering, University of Cambridge

The technique designed by Kristensson and his collaborators uses artificial intelligence to enable a user to quickly retrieve sentences they have typed in the past.

Previous studies have shown that people who depend on speech synthesis, just like everybody else, reuse many of the same phrases and sentences in daily communication.

However, retrieving such sentences and phrases is a slow process with current speech synthesis technologies, slowing the flow of conversation even further.

With the new system, as the person types, information retrieval algorithms automatically retrieve the most relevant previously typed sentences based on the typed text and the context of the conversation the person is engaged in.

The context includes information about the conversation such as the location, the time of day, and the identity of the speaking partner, which is determined automatically by a computer vision algorithm trained to recognize human faces from a front-mounted camera.
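To illustrate how such context tags might be produced, here is a minimal sketch. This is not the authors’ implementation: the function name, the tag vocabulary, and the time-of-day boundaries are all assumptions, and the speaking partner’s identity is passed in directly, standing in for the output of the face-recognition step.

```python
from datetime import datetime

def derive_context_tags(now=None, location=None, partner=None):
    """Derive coarse context tags for a conversation (illustrative
    scheme only, not the paper's). `partner` stands in for the result
    of a face-recognition step on the front-mounted camera feed."""
    now = now or datetime.now()
    # Map the clock time to a coarse time-of-day tag (assumed boundaries).
    if 5 <= now.hour < 12:
        tags = ["morning"]
    elif 12 <= now.hour < 18:
        tags = ["afternoon"]
    else:
        tags = ["evening"]
    if location:
        tags.append(location)
    if partner:
        tags.append(partner)
    return tags

print(derive_context_tags(datetime(2020, 4, 1, 9, 30), "cafe", "alice"))
# → ['morning', 'cafe', 'alice']
```

In a real system the location and partner identity would come from sensors and the camera; the point here is only that context reduces to a small set of discrete tags.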

The system was developed using design engineering methods normally applied to medical devices or jet engines. First, the researchers identified the system’s critical functions, such as the sentence retrieval function and the word auto-complete function.

Once these functions were identified, the researchers simulated a nonverbal person typing a large set of sentences drawn from a corpus representative of the text a nonverbal person would want to communicate.

This analysis allowed the researchers to identify the best method for retrieving sentences and to quantify how various parameters, such as the accuracy of word auto-complete and the use of multiple context tags, affect performance.

For instance, the analysis showed that only two reasonably accurate context tags are needed to provide most of the benefit. Word auto-complete makes a positive contribution but is not essential for achieving the majority of the gain.

Sentences are retrieved using information retrieval algorithms similar to those used for web search: context tags are appended to the words typed by the user to form a query.
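A minimal sketch of this querying idea follows. It uses a toy bag-of-words overlap score standing in for a real information retrieval engine (such as TF-IDF or BM25), and all names, sentences, and tags are assumptions: each stored sentence is indexed together with its context tags, and a query combines the typed prefix with the current tags.

```python
from collections import Counter

def tokenize(text):
    return text.lower().split()

class SentenceRetriever:
    """Toy context-aware sentence retrieval (illustrative only).
    Context tags are folded into the token stream with a 'ctx:' prefix
    so they participate in matching like ordinary query words."""

    def __init__(self):
        self.docs = []  # list of (original sentence, token Counter)

    def add(self, sentence, context_tags):
        tokens = tokenize(sentence) + [f"ctx:{t}" for t in context_tags]
        self.docs.append((sentence, Counter(tokens)))

    def query(self, typed_text, context_tags, k=3):
        # Build the query from the typed prefix plus current context tags.
        q = Counter(tokenize(typed_text) + [f"ctx:{t}" for t in context_tags])
        scored = []
        for sentence, d in self.docs:
            # Simple overlap count; a real system would weight terms.
            score = sum(min(q[w], d[w]) for w in q)
            scored.append((score, sentence))
        scored.sort(key=lambda s: (-s[0], s[1]))
        return [s for _, s in scored[:k]]

retriever = SentenceRetriever()
retriever.add("Could I have a coffee, please?", ["cafe", "morning"])
retriever.add("I would like to go home now.", ["cafe", "evening"])
retriever.add("Good morning, how are you?", ["home", "morning"])

print(retriever.query("could I", ["cafe", "morning"], k=1))
# → ['Could I have a coffee, please?']
```

Even in this toy version, the context tags break ties between stored sentences that match the typed prefix equally well, which is the mechanism that saves keystrokes.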

The research is the first to combine context-aware information retrieval with speech-generating devices for people with motor disabilities. It demonstrates how context-sensitive artificial intelligence can improve the lives of people with motor disabilities.

This method gives us hope for more innovative AI-infused systems to help people with motor disabilities to communicate in the future. We’ve shown it’s possible to reduce the opportunity cost of not doing innovative research with AI-infused user interfaces that challenge traditional user interface design mantra and processes.

Per Ola Kristensson, Study Lead Author, Department of Engineering, University of Cambridge

The study was financially supported by the Engineering and Physical Sciences Research Council.

Journal Reference:

Kristensson, P. O., et al. (2020) A Design Engineering Approach for Quantitatively Exploring Context-Aware Sentence Retrieval for Nonspeaking Individuals with Motor Disabilities. Proceedings of the 38th ACM Conference on Human Factors in Computing Systems (CHI 2020). doi.org/10.1145/3313831.3376525.
