Can We Learn Empathy from Robots?

Many people familiar with science fiction harbor an ingrained fear of, and repulsion toward, robots they see as cold and unfeeling.

The idea of widespread artificial intelligence brings to mind terrifying visions from films such as The Terminator or The Matrix, both of which present an apocalyptic future where artificial intelligence turns on mankind with disastrous results. The basic concern seems to be that robots lack any sense of empathy towards their human creators.

However, many humans already struggle with empathy, and the problem is especially acute in the field of medicine. Unfortunately, many patients struggle to communicate their pain effectively to doctors, the very people who are able to treat it. Granted, pain is a difficult thing to communicate, but there is some evidence that doctors are even worse at recognizing it than the general population.

This may be borne out of necessity, as medical professionals are required to distance themselves emotionally from patients in order to conduct treatment in a scientific and objective fashion. That emotional distance, however, creates problems when trying to understand and diagnose pain conditions.

Dr. Laurel Riek, a professor of computer science at the University of California, San Diego, set out to test whether doctors could properly recognize emotional expressions in their patients. When medical experts and laypeople were shown digitally simulated facial expressions, the clinicians proved much less accurate at recognizing pain.

While the study analyzed various emotions, including anger and disgust, pain recognition showed the starkest disparity between the groups: only 54 percent of medical professionals successfully identified pain, compared with an 83 percent success rate for laypeople.

The experiment simulated facial expressions not from images of actual humans but from computer-generated imagery and an actual robot. The robot's expressions were produced by analyzing a vast video archive of human expressions and using face-tracking software to graft those expressions onto its uncannily realistic rubber face. The robot is named Philip K. Dick.
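The article does not detail the software behind the robot, but the general idea of driving an animatronic face from tracked human expressions can be sketched briefly. Below is a minimal, hypothetical illustration using the open-source MediaPipe face tracker; the landmark indices and the set_servo interface are assumptions made for illustration, not details of Dr. Riek's actual system.

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh


def set_servo(channel: int, position: float) -> None:
    """Hypothetical stand-in for the robot's facial-actuator interface."""
    print(f"servo {channel} -> {position:.2f}")


def mouth_openness(landmarks) -> float:
    # In MediaPipe's 468-point face mesh, indices 13 and 14 sit on the
    # inner upper and lower lip; their vertical gap is a crude proxy for
    # how open the mouth is. (The index choice is an assumption here.)
    return abs(landmarks[13].y - landmarks[14].y)


cap = cv2.VideoCapture(0)  # a webcam stands in for the archival video feed
with mp_face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            # Map the tracked expression parameter to a servo position in [0, 1].
            set_servo(channel=0, position=min(mouth_openness(lm) * 10.0, 1.0))
cap.release()
```

A real system would track many more expression parameters (brows, eyelids, lip corners) and smooth them over time before actuating the face, but the pipeline shape is the same: detect landmarks, reduce them to expression parameters, and map those onto the robot's motors.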

Now, Dr. Riek is trying to use robots like Philip K. Dick to teach doctors to better understand emotion. There is some precedent for this: clinicians already use robotic patient simulators as practice dummies in medical training.

But she has pointed out a major flaw in the use of these robotic training tools: “These robots can bleed, breathe, and react to medication… They are incredible, but there is a major design flaw – their face.” She explains that facial expressions are critical in communicating pain to doctors, not just in interacting with the patient but also in quickly diagnosing strokes or adverse reactions to medication.

This entire enterprise may strike many readers as highly ironic, given the cold, calculating image science fiction has given artificial intelligence. Even the robot's namesake was a prolific writer who dealt with the problem of robots' lack of empathy. But Dr. Riek's work demonstrates how many varied applications such a powerful technology can have in helping us better understand emotions and facial expressions.

For more research on empathy and facial recognition, check out our past blogs here and here.
