Body Language Speaks Louder than Facial Expressions? A Critical Look at Aviezer’s Study

Credit: H. Aviezer et al., Science

Several people have asked us to comment on a recently published and publicized study entitled “Body Cues, Not Facial Expressions, Discriminate Between Intense Positive and Negative Emotions” by Hillel Aviezer et al.

The Paradigm

The paradigm used in Aviezer’s research involved judgments of emotion made while viewing stimuli that combined different emotional reactions in different parts of the body and face. The goal was to examine whether one part of the body influences emotion judgments more than another.


As described in a Wired Magazine article, Aviezer and his colleagues showed photos of professional tennis players to 45 Princeton University students, randomly divided into three groups of 15. Each tennis player had just won or lost an important match, and the participants rated the players’ contorted facial expressions from negative to positive on a scale of 1 to 9, with 5 marking the neutral midpoint. One group of participants looked at head-to-toe photos of the players, the second group looked at only the players’ bodies, and the third group looked at only their heads. Only the final group had trouble making the correct identification.

Interpretation of Data

Aviezer and his colleagues interpreted the data to mean that facial expressions alone didn’t tell viewers whether the players were joyous or in despair. They also suggested that people read intense emotion more effectively by looking at a person’s body language than by watching their facial expressions.

Taking a Critical Look

Humintell Director Dr. David Matsumoto is an expert in the field of culture, emotion, and nonverbal behavior. He has published over 400 works on these topics. We asked him to comment on the study and share some of his thoughts.

While Dr. Matsumoto states it is an interesting paradigm that Aviezer uses, he also sees two major problems with the research.

1.    The research involved the use of Photoshop and modern technology to alter photographs of individuals. Research participants were shown three different types of photographs: head-to-toe photos of athletes, photos of only the athletes’ bodies, and photos of only the athletes’ heads. The researchers also cut and pasted different heads onto different bodies, displayed the bodies and heads separately, and asked participants to decide what emotion was being displayed.

This type of research doesn’t consider whether those stimuli mean anything in real life: in what real-life situation would you encounter just a human head creating an expression, or just a human body gesturing without its face visible?

This raises a valid question: if the items being shown do not occur in real life, what exactly is being studied? What data is being generated? Dr. Matsumoto believes that we need to address questions about the technology before jumping to conclusions about what the data mean.

2.    During this experiment, the researchers showed participants faces of athletes who had just won (or lost) a point in an achievement or competitive context. Unfortunately, the researchers base their study on the assumption that the expression people make at the moment they score or lose a point is a universal expression of happiness or anger, when in fact there is no data to support this assumption.

Recent research has shown that the faces of athletes at the moment of a win or loss in a competitive context, like those used in this study, may or may not be representative of one of the seven basic emotions. In fact, some research suggests that the emotion felt at the exact moment of winning in a competitive context may best be called triumph, not happiness or joy.

Dr. Matsumoto states that if the image shown during the study is not representative of one of the basic emotions, then showing that particular face says nothing about basic emotions. Moreover, the results of the study cannot be used to argue against the universality of the seven basic emotions: anger, contempt, disgust, fear, happiness, sadness, and surprise.


It is important to note that there is a major misconception that the seven universal facial expressions are the only expressions that occur on the face, which is simply not true.

As mentioned in Dr. Matsumoto’s book Nonverbal Communication: Science and Applications,

The face is arguably the most prominent nonverbal channel, and for good reason. Of all the channels of nonverbal behavior, the face is the most intricate. It is the most complex signaling system in our body and it is the channel of nonverbal behavior most studied by scientists.

However, this does not mean that the face is the only channel of expression for all emotions.



One response to “Body Language Speaks Louder than Facial Expressions? A Critical Look at Aviezer’s Study”

  1. Sinay says:

    I agree that facial expressions are often overrated in their significance for displaying emotional state, mostly because people are much more aware of their faces and can control their expressions to some degree. E.g., I can fake a full smile even when I’m completely mad.
    Controlling body posture and gestures, on the other hand, is way harder, because we’re not used to thinking about it, especially about the lower part of our body.

    I do believe, however, that facial expressions can be a reliable tool in identifying common and universal emotions like happiness, anger, fear, sadness, surprise and disgust.
    The Aviezer research talks about ambiguity only in peak-state emotions like defeat and victory, emotions that are quite hard to demonstrate with our face. Can you make a victory face? What’s a victory face anyway? A wide smile?
    But if we talk about a happy face, it’s easy to describe and imitate, and therefore it is a more reliable indicator of emotion.

    Thank you for this critical review; it does help to highlight some ideas that the research prefers to ignore.


Copyright © Humintell 2009-2018