On Mimicry and Aliens - Figuring the Human in AI

10 Nov 2017

Suchman touches upon a crucial aspect of what it means for an AI to be human: how is being human figured, according to which perspective, and how does transposing this figuration onto machines influence our understanding of the human in artificial intelligence? Her opening paragraph touches upon mimicry, which implies that a human-like AI must necessarily replicate human-like behavior, such as autonomous rational intelligence or awareness of self. I am incredibly fascinated by this dichotomy of purpose when analysing the "I" in AI: what does "being intelligent" mean? As humans, we have a very limited understanding of intelligence: our own. We even measure intelligence in animals with the same yardstick we use to measure ourselves. This puts us in an uncomfortably restrictive space when we try to understand what intelligence would mean for an entity that shares none of the biological, neurological, cultural, or affective underpinnings that form our understanding of intelligence. That is why figuring the human in AI means transposing a certain set of values onto a radically alien entity, simply because it is the only set of values we can understand.

In the sci-fi novel "Blindsight," Peter Watts tackles intelligence from the perspective of non-humans: in his world, self-awareness is a hindrance rather than a benefit to higher evolution, at least if we consider evolution from the perspective of survival. It is not impossible that self-awareness is an evolutionary dead end, a fluke that worked this once, on planet Earth, to make us apex predators. We have no proof, and no way of knowing, whether the same evolutionary process produced self-aware entities on other planets or planes of existence.

The advent of AI is the first time we are confronted with the potential of a non-human evolutionary process, and we are somewhat at a loss as to how that evolution will play out. In "Superintelligence," Nick Bostrom paints a number of potential scenarios with one common thread: humanity will be superseded faster than we think. Although Bostrom's analysis falls short in many respects, and his STEM maleness leaks through whenever he decides to tackle the cultural or social aspects of intelligence, it is hard to poke holes in his narrative logic. Especially when we look at superintelligence, that is, the achievement of a state of intelligence higher than ours, two factors are arguably inescapable:

  1. it will happen at a speed we won’t be able to comprehend, and
  2. we won’t be able to comprehend it anyway because we don’t have the intelligence required.

Embodiment, Emotion, Sociality

Of the three "necessary perspectives" that Suchman chooses to figure the human in AI, two are intrinsically connected (embodiment and sociality), while the third (emotion) is best looked at individually. Embodiment and sociality are two sides of the same coin: just as the bodied self acts as a generator and receptor of external stimuli, the interconnections between bodied entities act along the same principle. The parallel of embodiment between human and AI is, in my opinion, both radically true and fundamentally misleading. The radical truth lies in the effect that embodiment has on embodied entities: both uncontrollable and controlled, unknown and self-created. Just as humans are multilayered embodied entities, threading "read-only" emotional states with controlled cognitive responses where neither can escape the other, so machines have an imposed infrastructure that limits what they can and cannot do: software code and algorithmic systems are externally encoded, and hardware has its own systems of control and override. The misleading aspect is that, while we know the limits of humans' control over their embodied selves, we can only theorise that the same will apply to machines. In fact, we cannot know whether machines will develop systems of complete control over and manipulation of their own source code, especially if we take the superintelligent turn into account.

The vision of AI and robots as "emotional entities" is intrinsically linked to mimicry: what scientists and futurologists describe when talking about machines as "thoughtful, observant collaborators" is nothing more than coded behavior that tries to respond to external stimuli in a way humans perceive as appropriate. Two things follow. First, we keep imposing our concepts of emotion onto alien entities whose emotional complexity will probably be fundamentally different from anything we can understand. Second, we have seen multiple examples in which our best efforts at creating emotional machines work 90% of the time, then fail in utterly alien ways. Perhaps we can push that number much higher; even so, we are still imposing a processing structure onto machines that is evaluated only by its output. We don't have any control over how those outputs are created within the machines (check out my thesis, or the wonderful aiweirdness.com). The problem with creating machines that interact emotionally with human beings is that emotional attachment implies trust, and trust is hard to create and extremely easy to break. Even if a machine "behaves" perfectly 99% of the time, we will still kick it to the curb as soon as it acts in a way so uncannily alien and different that we can't bear to invest the emotional energy to rebuild the trust relationship.

Even the cited examples of socialised, emotionally driven robots, Cog and Kismet, ultimately enact mimicry of human emotional states. While their development is driven by an interest in understanding whether robots can learn like children, through enacting progressively more complex behaviors and creating their own identities, the issue at hand is that we are still imposing a human-like growth process onto a fundamentally alien entity. We need to find ways of fostering AI and robotics so that we can understand what intelligence, self-awareness, and agency mean from their perspective, and that will probably mean not being human at all.


Thoughts on “Figuring the Human in AI and Robotics,” by Lucy Suchman

Photo by Franck Veschi on Unsplash