Robots today simply act and react according to the perceptions they obtain of their surroundings. We can currently interact with them by speech, by gestures, and through peripherals such as a keyboard and mouse. Yet, is a robot able to detect people’s emotions accurately? This form of interaction is intrinsic and intuitive for people, but that is not yet the case in the world of robotics. Thanks to algorithms for detecting characteristic facial feature points (Cootes and Taylor, 2004), we can train statistical models (Sohail and Bhattacharya, 2006) that, based on the distances between those points, are able to classify the emotional state of a person from a picture of his or her face.
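The pipeline described above can be sketched in a few lines. The following is a simplified illustration, not the statistical models of the cited works: the landmark coordinates, the toy "smile"/"neutral" examples, and the nearest-centroid rule are all assumptions chosen to keep the example self-contained; a real system would obtain the feature points from a landmark detector and use a properly trained classifier.

```python
import math

def pairwise_distances(landmarks):
    """Turn a list of (x, y) facial feature points into the vector of
    all pairwise distances between them (the features used to classify)."""
    dists = []
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
            dists.append(math.hypot(x2 - x1, y2 - y1))
    return dists

def train_centroids(labelled_samples):
    """Average the distance vectors of each class into one centroid
    (a minimal stand-in for a trained statistical model)."""
    centroids = {}
    for label, samples in labelled_samples.items():
        vectors = [pairwise_distances(lm) for lm in samples]
        centroids[label] = [sum(c) / len(vectors) for c in zip(*vectors)]
    return centroids

def classify(landmarks, centroids):
    """Assign the emotion whose centroid is closest to the face's
    distance-feature vector (nearest-centroid classification)."""
    feats = pairwise_distances(landmarks)
    return min(centroids, key=lambda lbl: math.dist(feats, centroids[lbl]))

# Toy data: three points (left mouth corner, right mouth corner, upper lip).
# A "smile" here is simply a wider mouth -- an assumed, illustrative cue.
training = {
    "neutral": [[(-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]],
    "smile":   [[(-2.0, 0.0), (2.0, 0.0), (0.0, 1.0)]],
}
centroids = train_centroids(training)
print(classify([(-1.9, 0.0), (1.9, 0.0), (0.0, 1.0)], centroids))  # smile
```

The key design point is that the classifier never sees pixel values, only geometric relations between feature points, which is what makes the cited distance-based approach robust to changes in lighting and image scale.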
The result of this perception is qualitative information that can be used in many fields. One of them might be a situation in which the robot needs to make decisions depending on the emotional state the person transmits. An example would be a robot caring for the elderly, which could learn from its surroundings whether its actions are perceived positively or negatively by the person it is attending. Another situation in which this new form of interaction might be applied would be a shop where cameras detect customers’ degree of satisfaction with the products they view, or register how they react when they visit the establishment.
These applications, and possibly other ideas that have not yet been proposed, open up a range of options for interaction with robots, or even for monitoring activities within the “quantified self” movement.
In sum, we can say that this new way of perceiving and interacting with robots will “humanise” them as they learn to interpret unambiguously such simple non-verbal messages as a wink or a smile.