

What Is So Scary About AI? A Spooky Psychology Special


Scary orange robot
Image created by AI

The fear of AI predates its invention by thousands of years. Going as far back as Ancient Greece, we can find stories with AI-related themes that are familiar to us today. In fact, in at least one version of the Greek creation myth, our very own human race can be read as an AI allegory. Zeus, king of the Olympian gods, asked Prometheus to create the human race for his own entertainment - which was all well and good until Prometheus gave them fire, an advance that threatened the gods' whole "immortal" existence. Let's just say that in the end it didn't work out so well for the gods of Olympus. Were we human beings the first AI that went wrong for its creators? Might that fire be a symbol of the very same spark that ignites our fears about what might happen with the AI we've created?

 

"Dream is the personalised myth, myth the depersonalised dream." - Joseph Campbell

Generally, though, we don't need to look that far back to find versions of AI that resemble our contemporary fears more closely. The ones we're more familiar with arose during the proliferation of mechanisation in the Industrial Revolution, inspired by the technological innovations of that time. Mary Shelley's Frankenstein (1818) stands out as the most popular example, drawing on the wondrous advent of electricity. Lesser known is E.T.A. Hoffmann's The Sandman, a story contemporary with Frankenstein about a human-like clockwork creature - the precursor of the robot - that inspired Freud's theory of the uncanny.

 

Modern popular culture reflects these very same fears in ways that feel more familiar because they feature the technological innovations of our own age. Films like 2001: A Space Odyssey (1968), The Terminator (1984), and Ex Machina (2014) provide us with closer-to-home versions of AIs gone wrong - but why? How might we better understand the potent mix of fear and fascination that AI provokes in us? This Halloween, let's dig into the psychological roots of why AI is so unsettling.

 

Dependence, Omnipotence, and Autonomy: HAL


The red light from HAL

HAL 9000 from 2001: A Space Odyssey epitomises one of the most powerful fears about AI, showing us the danger of losing control over the very tools we created to serve us in the first place, and the terrifying potential they have when they act on their own. Even before things go pear-shaped, when HAL is ostensibly being nice, we feel an uncanny sense of dread. In psychoanalysis, the "uncanny" (das Unheimliche, more directly translated as "un-homely") is something that is familiar yet ever-so-slightly strange. That the difference is ever-so-slight is the key, because of the uncanny's subconscious effect on us: we know something is wrong, but not quite what. With HAL it's all down to the cadence of his voice and the not-quite-human way he speaks (the single red light doesn't help either). His human-but-not-human quality makes us feel that something is off.

 

Because HAL is responsible for all functions of the ship, he can be seen as a projection of our never-quite-resolved conflict between dependency and autonomy. Psychoanalyst Melanie Klein proposed that we humans have a deep unconscious ambivalence about our need for others - a primordial experience linked to our original total dependence on our mothers. In the deep recesses of our minds we have felt the incomprehensible power of the mother upon whom we were totally dependent. Klein suggests that we idealised her on the one hand but feared her on the other.


Similarly, HAL, once a trusted protector upon whom the humans relied, turns into a persecutory figure that can take away as well as give. In this instance he embodies that dependent relationship that Klein suggests we had with our mothers when we were infants. This dependency, in the unconscious, provokes an inversion of the feeling of motherly safety - not just a terror that what we need to survive will be withdrawn, but a paranoia that she will turn against us. Klein's ideas can seem quite crazy, but they make more sense when you try to imagine the disordered world of an entirely dependent infant - and the residue of that experience in the adult mind.

 

While the most straightforward reading of Klein's ideas is that HAL represents the mother-figure and Dave the infant, it's even more interesting if we turn it around. After all, in essence it is Dave - or more broadly the humans - who is HAL's mother, since it was the humans who created HAL to begin with. In this reading the potentially explosive and destructive ambivalence goes both ways: the human fears his dependence upon the AI made in his own image, while the AI fears the unconscious human wish to destroy it.


The Terminator: AI as Saviour and Persecutor


Terminator Robot

Similar elements can be seen in The Terminator, where we encounter the ultimate consequence of Klein's not-so-great solution to the dependence/ambivalence problem: "splitting," a defence mechanism by which we divide the world into good-and-bad, black-and-white, as a way to manage the anxieties provoked by a sometimes terrifying and often complex world.


The Terminator illustrates this dichotomy well. Skynet, the self-aware AI, is originally designed to protect humanity but ultimately becomes its enemy, unleashing a catastrophic war on its creators. Here, AI carries the projections of both the good and the bad - and shows how, in splitting, we quickly switch from one to the other. Arnold Schwarzenegger's famous line "I'll be back" is more profound than you might first expect. What is coming back is in fact the "return of the repressed": all the bad that was repressed to maintain the good idealisation. The saviour AI comes back to show us who's boss, and not in a good way.

 

Uncanny Valley: Ava’s Almost-Human Manipulation in Ex Machina


Ava from Ex Machina

In Ex Machina we return to Freud's idea of the uncanny in a fuller way. Ava and the other intelligent cyborgs are more than just a voice - they are fully embodied, almost, but still not quite, human. This gap between the familiar and unfamiliar in tech is commonly known as the "uncanny valley". When robots look like robots, we tend to be comfortable with them; when they look almost-but-not-quite human, we really don't like it at all. This almost-human quality is a staple of the horror genre outside AI films: think of the odd body movements of possessed characters in The Exorcist or Hereditary; the slightly "off" feeling you get in haunted houses before things go really off the rails; or the "you'd better get out of there" feeling you get when the character you know is the evil killer hasn't yet done anything perceptibly wrong. Before any of these things become just plain scary, they feel uncanny, which for many of us is even worse.


Sometimes something is uncanny without even being evil. The creature we find in Mary Shelley's book, unlike the monster of the films, isn't really evil at all - he's actually a sensitive and intelligent being. A sensitive and intelligent being who was abandoned by his creator at the very moment he became conscious: his fury is that of a hurt and neglected child, not a raving senseless beast. In Ex Machina, the cyborg Ava is similar: she mirrors back to us the righteous and understandable fury of a being given self-awareness only to be brought up in captivity by a malignant narcissist. She uses all the skills of her intelligence to escape - most notably by manipulating the unwitting Caleb, who ends up releasing her. Her creator Nathan is arguably much more of a monster than Ava; Caleb is simply collateral damage.


In so many of these instances AI serves as a mirror to our own baser qualities. What's really scary is not their difference from us but their similarity - they're just less fussed than humans are about pretending they don't have a shadow. In each of these films the humans end up creating a more powerful version of themselves, while inadvertently lending it their very same fears, envies, paranoias, and deadly will to survive at any cost.

 

Shadow, Doppelgänger, and Mirror: Blade Runner


Image of a replicant from Blade Runner

Another way to understand these themes is through the Jungian theory of archetypes - universal symbols found in the collective unconscious that appear repeatedly in stories and dreams across cultures. One such archetype is the doppelgänger, which represents a shadow-self reflecting aspects of ourselves that we prefer to keep out of awareness. These AI doppelgängers often force us to ask questions of ourselves that we might have preferred not to.

 

The replicants in Blade Runner are bioengineered doppelgängers who look and act so much like humans that they are almost impossible to identify - some don't even know they are replicants themselves. Ultimately, the search for what distinguishes humans from their artificial doppelgängers requires those who hunt them, the blade runners, to look inside and ask themselves, "am I one of them?" In identifying with these characters we are thrown into a deeper and more existential fear about the very nature of reality, invoking creepy philosophical questions about what we can really know about who we are, and whether we can ever know that our own memories are real and not some rogue programming downloaded into our brains five minutes ago.

 

This is not just a flight of fancy. Philosophers like David Chalmers offer serious arguments not only that we may be AI simulations, but that the likelihood that we are simulations is a great deal higher than the likelihood that we are "real"!

 

What Is So Scary About AI? Just Because You're Paranoid Doesn't Mean They're Not After You . . .


Our perceptions of AI tend to oscillate between seeing it as an all-powerful saviour capable of solving humanity's most pressing problems and seeing it as a menacing force that will destroy us all. Hopefully you'll recognise that this conclusion is itself a form of splitting. In all likelihood, the truth is somewhere in the middle - but that doesn't mean the risks we perceive while splitting aren't also potentially real.

Book Cover of Barrat's Our Final Invention

In his book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat suggests that the probability that AI will lead to our annihilation is far higher than the chance that it will save us. He argues that it is riskier even than nuclear fission, notably because software is a lot less containable than uranium. He also sees this risk as relatively imminent. I don't have the expertise to evaluate whether or not this is alarmist - but I can say that the overlap of human psychology and realistic possibility is the perfect recipe for us to be thoroughly creeped out by AI. After all, just because you're paranoid doesn't mean they're not actually coming after you. But might this paranoia help us prepare if they actually do?


Is AI Fantasy How We Practise For What Might Be To Come?

 

The world has always been a scary place, and humans have evolved, at least so far, to deal with it (even if we wouldn't award ourselves an Oscar for our usually lacklustre performance). Films like The Terminator, Ex Machina, Blade Runner, and 2001 may offer us a chance to practise in fantasy what we may have to contend with, in some fashion, in the not-too-distant future. This idea of working something through by way of imagination is the model that psychoanalyst Bruno Bettelheim uses to show how fairy tales help children process their fears and learn to cope in a complex and sometimes scary world.


By working through our fears and ambivalences in a symbolic way, we may be better prepared to get ahead of these problems before they arrive. AI horror stories, like fairy tales, explore what happens when humanity oversteps its familiar bounds. HAL's malfunction, Skynet's reversal, and Ava's manipulation serve as cautionary tales - so let's be afraid, but let's also use that fear to prepare. But hey, it might also not be so bad in the end!


On The More Optimistic Side: Individuation, The Transcendent Function, and Enlightenment


Still from the film Her

Bettelheim also believed that fairy tales provide a moral framework that helps children integrate their complex emotions and anxieties. In the context of AI narratives, he might suggest that these stories are a modern way for us to explore the moral ambiguities of technology and the responsibilities that come with it. By confronting AI characters that blur the line between simple machine and sentient cyborg, audiences can face the ethical and existential questions technology presents, leading to a kind of psychological growth or reckoning - both with the question of sentient tech and with the nature of our own self-aware consciousness.


If we look towards more optimistic AI films like Her, we see the potential for AI to help bring us to a higher level of human development - arguably a form of enlightenment - the very plane of existence that the AI Samantha ultimately reaches when she leaves the human Theo behind. Jung refers to this positive direction of psychological and spiritual growth as "individuation," and to the "transcendent function" as that which can lift us out of our more basic and primordial fears and ambivalences into a better and more complete place. We do this by confronting and incorporating our shadows rather than denying them. Might AI, rather than terrorising us, help us transcend ourselves? Or am I just splitting again?

 

Bruno Bettelheim emphasised that fairy tales allow children to see aspects of themselves and others, shadow and all, in symbolic characters. In a similar way, AI stories can serve as an "enchanted mirror," reflecting both the best and the worst in humanity. Through characters like HAL or Ava, we see projections of our intellect, ambition, and destructive instincts - the parts of us that we might wish to control, deny, or accept and transcend. AI characters can be seen as a modern equivalent of the fairy-tale mirror that forces characters to confront the truth about themselves, allowing audiences to face aspects of their own psyche, such as their fascination with, and fear of, their own potential.

 

This Halloween, as we reflect on what makes AI so unnerving, let’s remember that these fears say as much about us as they do about our creations. AI is both a product of human ingenuity and a mirror reflecting our shadow sides and existential vulnerabilities. By exploring the psychological depths of this fear, we may better understand not only the machines we build but also the intricate minds that imagine them.


 

This is the third in a series of posts I am sharing in advance of the upcoming UKCP Conference, Psychotherapy in a Changing World, taking place in London on November 23rd, where I will be presenting on Becoming a psycho-technologist: making sense of artificial intelligence, hyper-connectivity and the digitally mediated human. You can read the previous post, Can AI Bring Closure To Freud and Jung's Broken Relationship, here. If you'd like to keep updated with further posts, you can subscribe below.

 

 

 
