Jun 3 2014
Robots may be poised to enter a new frontier in the workplace—but that doesn't mean the public is ready for it.
After taking root in factories in the 1960s, robots gradually moved into jobs requiring rudimentary thinking such as data processing and banking services. The next wave of the invasion, however, is going to include jobs that are highly social and emotional, with robots driving taxis and serving as real-estate agents, library assistants, sports referees and health-care assistants.
That could be a problem. Our research shows that while people have accepted "botsourcing" for traditionally thinking-oriented jobs, the thought of robots doing work that requires feeling, compassion and interpersonal awareness leaves many unnerved.
For social scientists, the challenge is figuring out what, if anything, can be done to mitigate those concerns.
We identified at least one thing that might help: Create robots that seem more like real people.
Put a Face On It
That finding may come as a surprise to those familiar with a long-held belief known as the "uncanny valley" hypothesis, which suggests people generally are repulsed by robots that seem almost humanlike.
We found, however, that the uncanny valley hypothesis is overstated and that when emotional jobs such as social work and preschool teaching must be "botsourced," people actually prefer robots that seem capable of conveying at least some degree of human emotion.
What does a warmer and fuzzier robot look like? The emerging science of human-robot interaction combines insights from robotics and psychology to suggest five crucial design features:
First, faces help. Not all faces convey emotion to the same degree, however. Robots such as Nexi from MIT's Media Lab that have more of a "baby face" (round head, small chin, wide eyes) appear more capable of feeling than robots with longer chins, which appear more professorial. Nexi can also change expressions to show emotion.
We found that people prefer baby-faced robots for emotional jobs such as therapist, while other research indicates people also are more willing to take medical advice from baby-faced robots than from long-chinned ones. Additionally, child-faced robots are less likely to threaten the autonomy of elderly individuals, who will probably be the primary users of robotic health-care assistants in the future.
A robot that users can "take care of" is more likely to engender positive responses than one that seems bossy. One therapeutic robot used successfully in nursing homes is Paro, a small robotic seal that users care for like a pet.
Second, voice is key. In one study, people trusted and enjoyed self-driving cars—the taxi drivers of the future—far more when the car had a voice than when it drove intelligently but silently. Diabetic adults respond much more favorably to a computerized health-care assistant that inquires aloud about their blood-glucose levels rather than via text.
And like faces, the kind of voice matters. People prefer airline-reservation robots with humanlike speech patterns rather than synthetic speech patterns, and feel emotionally closer to robots whose voices match their own gender.
Third, just as people prefer other people who mimic their behavior—leaning in when we lean in—they prefer robots that nod when they nod and blink when they blink. Some robotic gestures—such as when a robot touches its face or folds its arms—can engender mistrust, but mimicking gestures build rapport. Rather than feeling annoyed or disturbed by a robot that subtly tracks and copies facial movements, people feel a sense of empathy.
Fourth, the type of empathy that mimicry provides is critical and can be conveyed even by robots that don't have a physical presence. Online "robots"—such as recommendation agents that have taken over the jobs of salespeople (Amazon.com) and travel agents (Kayak.com)—can be programmed to appear more empathic, and hence more appealing. One study showed that people rated online travel booking and dating services more positively when the service communicated clearly that it was working for the consumer (e.g., "We are now searching 100 sites for you") than when they simply provided search results. Surprisingly, having to wait 30 seconds for results but also receiving this communication of effort slightly increased users' satisfaction, compared with receiving results instantaneously. Being made aware of the website's willingness to work on their behalf made people feel that the service was sympathetic to their needs.
Make It Unpredictable
Finally, the most counterintuitive way to enhance robot acceptance is to make robots unpredictable to some degree. A critical advantage of robot workers is their relentless consistency, so how could adding inconsistency help? Because real people are up and down—they have good and bad days. In a five-month study of toddlers' responses to a robotic child-care assistant at an early-education center, children interacted most positively with the robot when it behaved with some variability. When the robot behaved predictably, their interactions deteriorated. Why? Just as unpredictable people capture our attention (why is he so nice some days and so mean on others?), unpredictability makes us want to better understand the robot. For the rote tasks of the 20th century, people might prefer unwavering robots; for the emotional jobs of the 21st century, people are likely to prefer some unpredictability.
Faces, voices, mimicry, empathy and unpredictability. These five design features offer an early glimpse into the likely shape of the coming robot invasion, and how it can be made less scary for those who aren't quite ready for it.
- Adam Waytz is an assistant professor of management and organizations at Northwestern University. Michael Norton is a professor of business at Harvard University.