Empathy’s Uncanny Valley: Ubiquitous Computing


05 Jan, 2018

Humanity’s affinity with the human-like follows a particular growth curve: affinity grows steadily as the observed entity becomes more human-like, but there is a pronounced dip, a rejection of affinity, when that entity comes close enough to be mistaken for human only some of the time, intermixed with distinctly non-human-like aspects. This is what philosophers and researchers call the uncanny valley: inspired largely by Freud’s concept of the Unheimlich, it is a particularly disconcerting feeling, a feeling of misplacement, an alien otherness that mimics something intimately known. Freud’s preferred example is walking down an unknown street that closely resembles a very familiar place. Our psyche tries and fails to map known patterns onto an unknown entity, resulting in a jarring disconnect.

In today’s world, computer-generated graphics and robotics are uniquely good at provoking the uncanny. Humanoid robots that look and act very human, right up until they don’t, make us profoundly uncomfortable; realistic video game graphics yank us out of suspended disbelief the moment we gaze into the glassy, dead eyes of a CGI character. Just as nausea in a moving vehicle is our brain’s way of correcting for contradictory sensory input, the unheimlich is our instinctive reaction to jarringly conflicting patterns of familiarity.

Ubiquitous computing tends to insert human-like interactions and behavioural patterns into physical and situational realities that we expect to be neutral, incapable of knowing. Fridges that recommend food purchases, phones that impose fitness regimes, even cars that brake for us: all run the risk of rocketing us down the uncanny valley’s slopes and into discomfort. Yet ubiquitous computing has firmly positioned itself as the third computational revolution, after mainframes and personal computers: companies are investing heavily in it, and advances in machine learning provide its behavioural pattern-matching engine. The problem lies in the incentives: this is the first computational revolution to cross boundaries of affinity in actual physical space, and it is being driven by financial gain without a complementary critique and analysis of its behavioural effects on human interaction with spaces perceived as behaviourally neutral.

Entities that try to pass for human struggle to cross the extremely high threshold of humans’ ability to recognize other humans (down to subconsciously noticing inadvertent micro-twitches of facial muscles). In ubiquitous computing, the struggle lies with empathy: humans are asked to accept that non-human entities and objects can achieve and enact anthropo-equivalent behaviours. For that to work, we need to believe we can form an empathetic bond with these objects: trust that they will act, emotionally and rationally, as we would expect other humans to act. Now, we humans have no trouble anthropomorphising objects and natural events. However, we are not used to those objects and events actively interacting back: anthropomorphisation “works” as passive observation and post-facto assignment of intention to uncontrollable events. With ubiquitous computing, we are not only asked to suspend our atavistic disbelief in the self-agency of inanimate objects; we are also, at least in some minimal part, aware that these objects have been coded to behave in certain ways by other human beings: that they are nothing more than automata.


Photo by Samuel Zeller on Unsplash