
Nuance, DFKI Join Forces to Advance Cognitive and Conversational AI Innovation for Autonomous Systems

Today Nuance Communications, Inc. announced it has expanded its partnership with the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), the German Research Center for Artificial Intelligence and the world’s largest research center dedicated to the development of AI methods and applications.

Nuance opened an office at the DFKI campus in Saarbruecken to further advance cognitive and conversational AI innovation across several joint research initiatives, including the relationship between humans and in-car systems, as well as AI applied to healthcare systems and omni-channel customer care.

Nuance and DFKI have already begun work applying conversational and cognitive AI to autonomous systems such as self-driving cars, where automotive assistants must be intelligent and collaborative enough to effectively engage passengers when it is their turn to take over driving.

Autonomous cars will give people more hands-off time, which they will spend being entertained or productive: watching movies, catching up on the latest news, or making better use of long commutes through work productivity apps and services. However, control of the vehicle must be handed back to the driver quickly and seamlessly when hands-on engagement is needed. To better understand when and how this transfer of control should happen, Nuance and DFKI conducted a usability study with human participants to identify the most effective ways to capture a passenger's attention.

Study participants were placed in a simulated autonomous car and engaged in activities such as reading, listening to music, writing an email, and watching a movie. While they were occupied, the autonomous system alerted them through vibration (haptic), visual, and auditory cues to determine which sense prompted the fastest response to take the wheel. The scenarios covered inclement weather, system diagnostic warnings, sensor defects, traffic jams, and general rules of the road. Afterwards, participants rated each scenario for pleasantness, usability, trust in the information, and usefulness of the information.
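
As a rough illustration of how such trial data could be summarized, the Python sketch below groups reaction times by driver activity and alert modality and reports the mean for each pair; the activities, modalities, and timings are invented placeholders rather than results from the study.

```python
# A rough sketch (not the actual study tooling) of how reaction-time data from
# such a trial could be aggregated. Activities, modalities, and times below are
# invented placeholders, not results from the Nuance/DFKI study.
from collections import defaultdict
from statistics import mean

# Each record: (driver activity, alert modality, reaction time in seconds)
trials = [
    ("reading",         "auditory", 1.8),
    ("reading",         "haptic",   2.1),
    ("watching_movie",  "visual",   3.4),
    ("writing_email",   "auditory", 1.6),
    ("listening_music", "haptic",   1.2),
]

# Group reaction times by (activity, modality) and report the mean for each
# pair -- the comparison the study describes (which sense responds fastest).
grouped = defaultdict(list)
for activity, modality, reaction_time in trials:
    grouped[(activity, modality)].append(reaction_time)

for (activity, modality), times in sorted(grouped.items()):
    print(f"{activity:>15} / {modality:<8} mean reaction: {mean(times):.2f}s ({len(times)} trial(s))")
```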

Key findings of the study were:

  • The majority of participants preferred integrated, multimodal user interfaces combining voice, touch, and visual cues. Drivers were less responsive to, and less pleased with, notifications delivered in the same modality as their current activity, and preferred a combination of alerts adapted to that activity. For example, participants reading a book preferred to be alerted by a sound or vibration, while those engrossed in e-mail or other work preferred to be alerted by sound.
     
  • Systems need contextual data from the car and its sensors, including the driver's current activity, so they can select the alert cue that yields the fastest reaction and the best user experience (see the sketch after this list).
     
  • Independent of the driver's current activity, sound was considered more pleasant and effective than visual cues, and led to faster reactions than vibration or haptic alerts alone.
     
  • Drivers trust audible and haptic responses from the automotive assistant more than visual cues alone.
     
  • The data indicate that reaction times are lowest when the driver is engaged in a listening activity, such as an audiobook or music.
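
To make the context-aware alerting described in the findings concrete, the following Python sketch shows one possible selection policy: it assumes a simple mapping from driver activities to the sensory channel they occupy and picks a combination of cues that avoids that channel, favoring sound paired with a haptic cue. Both the mapping and the policy are illustrative assumptions, not Nuance's or DFKI's implementation.

```python
# A minimal sketch of a context-aware alert policy reflecting the findings
# above: avoid the sensory channel the driver's activity already occupies and
# favor sound paired with a haptic cue. The activity-to-channel mapping and the
# policy are illustrative assumptions, not Nuance's or DFKI's implementation.

# Which sensory channel each activity already occupies (assumed mapping).
ACTIVITY_CHANNEL = {
    "reading": "visual",
    "watching_movie": "visual",
    "writing_email": "visual",
    "listening_music": "auditory",
    "listening_audiobook": "auditory",
}

ALL_CUES = {"auditory", "haptic", "visual"}

def choose_alert_cues(activity: str) -> set:
    """Pick a combination of alert cues adapted to the driver's current activity."""
    occupied = ACTIVITY_CHANNEL.get(activity)
    free = ALL_CUES - {occupied}
    # Sound was rated most pleasant and effective, so use it whenever the
    # auditory channel is free, paired with haptic for trust and fast reaction.
    if "auditory" in free:
        return {"auditory", "haptic"}
    # Otherwise fall back to the cues on channels the activity does not occupy.
    return free

for activity in ("reading", "writing_email", "listening_music"):
    print(activity, "->", sorted(choose_alert_cues(activity)))
```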

The DFKI study complements a recent survey of 400 drivers that Nuance conducted in the US and the UK, looking at the types of activities drivers plan to do as passengers in an autonomous car.

If alone on a longer trip, respondents said their top five in-car activities would be listening to the radio (64%), relaxing (63%), talking on the phone (42%), browsing the Internet (42%), and messaging (36%) -- all representing a combination of visual, auditory and haptic tasks.

If traveling with others, people naturally engage more in activities with their co-passengers, such as having conversations (71%) or listening to the radio (58%), rather than talking on the phone (19%) or messaging (23%).

“Our partnership with DFKI is focused on advancing the state of the art of AI solutions for physicians, healthcare organizations, automakers, and enterprises,” said Vlad Sejnoha, Chief Technology Officer, Nuance Communications. “Nuance and DFKI share the vision that advanced personal assistants and cognitive technologies can effectively amplify human intelligence, and in the process, transform patient and customer care, and make our experience with smart devices and the connected car vastly more productive and rewarding.”

“Cognitive and conversational AI are the key technologies driving the second wave of digitalization, which is based on deep machine understanding of digital data. Partnering with Nuance allows us to put innovation into action – taking our research and directly applying it to the systems that people use every day, and addressing the real-world challenges and complexities of connected smart service platforms. With a Nuance office at the DFKI campus in Saarbruecken, where research groups on autonomous driving, deep learning, multimodal dialogue, and language understanding are located, we will continue to push the boundaries of the conversation between humans and smart environments, and ultimately bring to market the next generation of cars, bots, assistants and smart objects that simply make everyday life better and safer,” said Prof. Dr. Wolfgang Wahlster, CEO of DFKI.

The partnership between Nuance and DFKI is also advancing natural language understanding (NLU), the mapping of words to meaning, to further develop conversational and cognitive AI interfaces between people and virtual assistants across a number of vertical markets, including healthcare and enterprise omni-channel customer care. Nuance already supports NLU in more than 40 languages and is continuously expanding that coverage through “Deep NLU,” which goes beyond traditional approaches by extracting deeper linguistic information, a prerequisite for understanding the subtleties of human language. The DFKI team will help localize Deep NLU into German, leveraging its existing experience and technology components in this area.
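
As a rough sketch of what “mapping words to meaning” involves, the toy Python example below converts an English or German utterance into an intent plus extracted slots; the intents, patterns, and German phrasing are invented for illustration and do not reflect Nuance's Deep NLU internals.

```python
# A toy illustration of "mapping words to meaning": turning an utterance into a
# structured intent with slots. The intents, patterns, and German phrasing are
# invented for this example and are unrelated to Nuance's Deep NLU internals.
import re

PATTERNS = [
    # (language, pattern, intent)
    ("en", re.compile(r"set (?:the )?temperature to (?P<temperature>\d+)"), "set_temperature"),
    ("de", re.compile(r"stelle die temperatur auf (?P<temperature>\d+)"), "set_temperature"),
    ("en", re.compile(r"take the wheel|i(?:'| wi)ll drive"), "request_manual_control"),
]

def parse(utterance: str) -> dict:
    """Map an utterance to a language, an intent, and any extracted slot values."""
    text = utterance.lower()
    for language, pattern, intent in PATTERNS:
        match = pattern.search(text)
        if match:
            return {"language": language, "intent": intent, "slots": match.groupdict()}
    return {"language": None, "intent": "unknown", "slots": {}}

print(parse("Set the temperature to 21"))      # English utterance
print(parse("Stelle die Temperatur auf 21"))   # Same meaning, German surface form
print(parse("OK, I'll drive from here"))
```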
