Robotics has advanced significantly over the years, with applications in industry, domestic services, the medical field, and beyond. However, have you ever wondered why humans are afraid of some robots? Is humanity prepared for robotics driven by artificial intelligence?
Take a look at the following images. How do you feel about them? Maybe neutral? Maybe uncomfortable?
And what about these? Perhaps they evoke more pleasant sensations than the previous ones.
The strange but fascinating phenomenon of the Uncanny Valley
The sensations people feel when looking at those images, ranging from discomfort to calm, are caused by a strange phenomenon called the Uncanny Valley. The concept was first introduced in the 1970s by Masahiro Mori, who attempted to describe his observations of and reactions to robots that looked and acted almost like humans. He hypothesized that as robots appear more humanlike, they become more appealing and people feel more empathy toward them, but only up to a certain point [1]. Past that point (the uncanny valley), a person’s response shifts abruptly from empathy to revulsion, producing a tendency to be scared of the robot, or feelings of strangeness and unease [2]. Beyond robots, the phenomenon can also appear in virtually created characters, such as those found in the metaverse.
For more information about the metaverse, you can read this article.
Figure 1 shows the uncanny valley graph. The horizontal axis represents the robot’s (or virtual character’s) human likeness, and the vertical axis represents our affinity towards it. The relationship between the two is not a simple upward trend: affinity grows with human likeness only up to a point, then drops sharply, and the effect becomes even more pronounced when the robot moves.
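The shape of Mori’s curve can be captured in a toy function. This is a purely illustrative sketch: the upward trend, the position of the dip, and the constants below are assumptions chosen to reproduce the qualitative shape of the graph, not values from Mori’s essay.

```python
import math

def affinity(likeness: float) -> float:
    """Toy affinity score for a robot with human likeness in [0, 1]."""
    rise = likeness  # general upward trend: more humanlike, more appealing
    # A Gaussian dip centered near (but before) full human likeness
    # models the sudden collapse of affinity in the uncanny valley.
    valley = 1.5 * math.exp(-((likeness - 0.85) ** 2) / 0.005)
    return rise - valley

# Affinity grows at first...
assert affinity(0.5) > affinity(0.1)
# ...collapses inside the valley...
assert affinity(0.85) < affinity(0.5)
# ...and recovers as the robot approaches full human likeness.
assert affinity(1.0) > affinity(0.85)
```

Plotting this function over [0, 1] reproduces the characteristic rise, sharp dip, and recovery of Figure 1.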
Figure 1. Uncanny Valley graph
Why is it important to avoid the uncanny valley?
Nowadays, the use of robotics in our daily lives is booming [3]; however, seamless interaction with machines has not yet been achieved because of the low affinity and empathy between humans and robots. Robots’ limitations in expressing emotions lead to an almost inevitable rejection during interaction. It is therefore a great challenge for the industry to make robots a natural part of our environment without falling into the uncanny valley.
One of the most visionary applications of robotics is in education and rehabilitation. These applications require robots capable of simulating human emotions, which strengthens the human-machine relationship. By accomplishing this, the education or rehabilitation of disabled people becomes possible, since the patient can form an empathic bond with the robotic therapist, and the robot or virtual character can carry out its work more effectively [4].
Can AI help us to avoid the uncanny valley?
Yes, the short answer is yes. You may think the answer is obvious, but it is not. Many companies in this field keep building highly sophisticated robots with numerous sensors, complex computer vision algorithms, advanced joints, and so on, while forgetting one crucial but straightforward phenomenon: the uncanny valley, into which many robots fall, even the most advanced [5].
The problem is the assumption that more is always better. That holds for the artificial intelligence, but not for the design of the robot. When a robot with a complex design tries to emulate emotions through many facial expressions or large joint movements, no matter how sophisticated its artificial intelligence models are, the result is the famous but unwanted uncanny valley.
A telling example is Ameca, a robot released in 2022 that uses Automatic Speech Recognition to recognize people’s voices and Computer Vision to recognize faces and objects, among other capabilities [6]. However, in my opinion, it is a robot that will fail in its interaction with human beings because of the way it expresses emotions.
On the other hand, imagine a simple robot that can express emotions with just a few joints or simple sounds: empathy towards it will increase considerably. But that alone is not enough.
If we want to reach maximum familiarity with the robot, we need to use artificial intelligence. How? For example, with Automatic Speech Emotion Recognition to listen to and understand what people are saying, Facial Emotion Recognition, or Computer Vision.
At the same time, robots could use those detected emotions to accompany what they are saying, offering personalized treatment depending on the emotion.
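The idea above can be sketched in a few lines: pair what the robot says with an expressive behavior keyed on the emotion a recognizer detects in the user. The emotion labels, sounds, and motions here are hypothetical placeholders; a real system would obtain the label from a speech- or facial-emotion model.

```python
# Hypothetical emotion -> behavior table; a real robot would map
# recognizer output to its own sound and motion primitives.
RESPONSES = {
    "happy": {"sound": "cheerful_beep", "motion": "bounce"},
    "sad":   {"sound": "soft_hum",      "motion": "lean_in"},
    "angry": {"sound": "calm_tone",     "motion": "back_off"},
}
DEFAULT = {"sound": "neutral_beep", "motion": "idle"}

def respond(detected_emotion: str, text: str) -> dict:
    """Attach an expressive behavior to a spoken reply."""
    behavior = RESPONSES.get(detected_emotion, DEFAULT)
    return {"say": text, **behavior}

reply = respond("sad", "I'm here with you.")
# The robot pairs its words with a comforting sound and motion.
assert reply == {"say": "I'm here with you.",
                 "sound": "soft_hum", "motion": "lean_in"}
```

The point of the sketch is that even a tiny behavior vocabulary, driven by recognized emotion, can personalize the interaction without any complex humanlike face.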
Good examples are robots that emulate the emotions of living things (animals or humans) with minimalist details, or even without any human or animal trait at all. One of the best-known examples is WALL-E, an animated robot that looks like a robot and emulates emotions with just its movements and sounds. Another good example is Vector, a little AI-powered robot capable of emulating emotions without any facial joints, using just an LCD screen.
What’s next?
As mentioned before, robotics has grown in recent years; although industrial robots remain its largest everyday use, the other branches of robotics have also grown significantly.
What’s the problem? Before creating any robot, we need to understand its purpose well. If the tasks a robot has to perform involve human interaction but its design and technology are not focused on that, humans will simply feel repulsion towards the robot, and its mission will fail.
To conclude, if you think that future robots will look like those in Terminator, I, Robot, Transformers, Surrogates, or any other that might cause discontent, the answer, in my opinion, is no. Instead, they will look like the robots of Star Wars or WALL-E, which show emotions simply, without causing us terror.
REFERENCES
- [1] Caballar, R. D. (2019). What Is the Uncanny Valley? IEEE Spectrum. https://spectrum.ieee.org/what-is-the-uncanny-valley
- [2] Mori, M. (2012). The Uncanny Valley: The Original Essay by Masahiro Mori. IEEE Spectrum. https://spectrum.ieee.org/the-uncanny-valley
- [3] Patiño, P., Moreno, I., Muñoz, L., Serracín, J. R., Quintero, J., & Quiel, J. (2012). La robótica educativa, una herramienta para la enseñanza-aprendizaje de las ciencias y las tecnologías [Educational robotics, a tool for teaching and learning science and technology]. Education in the Knowledge Society (EKS). https://www.redalyc.org/pdf/2010/201024390005.pdf
- [4] Kaspar’s Journey. (2017). Kaspar the Social Robot. University of Hertfordshire. https://www.herts.ac.uk/kaspar/meet-kaspar/kaspars-journey
- [5] Rankings: Creepiest Robots. ROBOTS: Your Guide to the World of Robotics. IEEE. https://robots.ieee.org/robots/?t=rankings-creepiest-robots
- [6] Engineered Arts Ltd. (2021). AI vs. Human Intelligence. Engineered Arts. https://www.engineeredarts.co.uk/software/artificial-intelligence-vs-human-intelligence/
Juan Casas – Data Scientist
“There’s a big difference between impossible and hard to imagine. The first is about it; the second is about you.”
Marvin Minsky, professor and pioneer of Artificial Intelligence