Computers Reading Your Emotions
Stephen Shankland, a technology journalist, wrote an August 2010 article for CNet.com regarding the development of emotion recognition technology, which would enable computers to detect what the user is feeling in real time.
The World Wide Web Consortium (the organization which standardizes many Web technologies) is attempting to standardize and formalize emotional states using a vocabulary that computers can handle. This vocabulary is called Emotion Markup Language (EmotionML) and is “designed to provide a more sophisticated alternative to smiley faces and other emoticons…to improve communications between people and computers.”
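To make the idea concrete, here is a rough sketch of what an EmotionML annotation might look like, built with Python's standard xml.etree library. The namespace is the one from the W3C's EmotionML 1.0 draft; the category name and intensity value here are invented for illustration, not taken from the article.

```python
import xml.etree.ElementTree as ET

# EmotionML 1.0 namespace from the W3C draft (assumed here).
NS = "http://www.w3.org/2009/10/emotionml"
ET.register_namespace("", NS)  # serialize with a default namespace

# A minimal document: one <emotion> holding one <category> label.
root = ET.Element(f"{{{NS}}}emotionml")
emotion = ET.SubElement(root, f"{{{NS}}}emotion")
# "happiness" with an illustrative 0-to-1 intensity value.
ET.SubElement(emotion, f"{{{NS}}}category",
              {"name": "happiness", "value": "0.8"})

print(ET.tostring(root, encoding="unicode"))
```

The point of the sketch is simply that the emotion becomes machine-readable data rather than a smiley face: a program on either end of the conversation can inspect the `name` and `value` attributes directly.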
The engineers behind EmotionML argue that the technology they are developing would reduce ambiguity in the ways people interact online. Shankland quotes Marc Schroeder, editor of the EmotionML standard, who offers several examples of situations where this technology would be useful: “avatar faces could depict what the person behind them is feeling, the play intensity of computer games could be adjusted based on the player’s reactions, and customer service representatives would be able to tell when their customer is upset”.
However, even Schroeder admits that there are shortcomings in emotion recognition technology, including erroneous readings, which could do more harm than good. This technology is designed to “improve communications,” yet there seems to be the possibility of a large margin of error. Even humans have a difficult time deciphering one another’s emotions, so how can someone program a computer to recognize them accurately? This sort of technology could perhaps be developed to read human emotions accurately much of the time, but when a machine is designed to do something even humans struggle with, it is hard not to question whether the margin of error is worth having the technology at all. Do you think the general public would benefit from using EmotionML?
Furthermore, the engineers building this technology into their systems are working from very general categories such as happy, angry, and sad. Research has shown that there are seven basic universal emotions: anger, disgust, contempt, fear, happiness, sadness, and surprise. These emotions are expressed by people all around the world in the same way, so a computer could plausibly be programmed to detect them. However, other emotions are not expressed the same way across the board: guilt, pride, love, shame, and so on. Because of this, the engineers behind EmotionML cannot even agree on one vocabulary to use for representing emotions. Therefore, EmotionML is being designed to provide a set of ‘recommended vocabularies,’ and the user must state which vocabulary set they would like to use.
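This vocabulary-selection problem can be sketched in a few lines of Python. The `category-set` URI below is the “big six” vocabulary identifier from the W3C’s emotion vocabularies note, used here as an assumption for illustration; the two emotion entries are made up. Note how “sadness” belongs to the declared vocabulary while “pride”, one of the culturally variable emotions mentioned above, does not.

```python
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/2009/10/emotionml"
# Assumed vocabulary URI and its member names (illustrative, not normative).
BIG6 = "http://www.w3.org/TR/emotion-voc/xml#big6"
BIG6_NAMES = {"anger", "disgust", "fear", "happiness", "sadness", "surprise"}

snippet = f"""
<emotionml xmlns="{NS}" category-set="{BIG6}">
  <emotion><category name="sadness"/></emotion>
  <emotion><category name="pride"/></emotion>
</emotionml>
"""

root = ET.fromstring(snippet)
for emotion in root.findall(f"{{{NS}}}emotion"):
    name = emotion.find(f"{{{NS}}}category").get("name")
    # Check each label against the vocabulary the document declared.
    print(name, "in declared vocabulary:", name in BIG6_NAMES)
```

A document that declared a different `category-set` could legitimately use “pride” as a label, which is exactly why the sender and receiver have to agree on a vocabulary up front.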
However, if this technology could be implemented effectively, it could be helpful in some situations. It would be much easier to convey emotions while instant messaging, for example, since it can be difficult at times to differentiate a joke from a serious statement. Many companies offer live customer service online, and it would be far easier for a customer to convey just how upset they are. The world of video games would change drastically if games could be programmed to adjust based on the player’s reactions. Furthermore, this technology could make law enforcement interviews more effective when officers are questioning a suspect.
Is it really a good idea for the person behind the screen to know every single emotion you are feeling? What about online advertisements? Could a company read your every emotion and tailor its advertising to your mood? It would feel a bit like “Big Brother” watching over each computer user, and most people seem to prefer more privacy during online interactions. Do you think the benefits outweigh these negatives?