Can You Fall in Love with an AI Companion? The Psychology of Human/AI Relations
Updated: Aug 23
AI’s astonishing ability to appear like a human with a mind of its own has moved from the realm of science fiction to reality so suddenly that we are experiencing collective whiplash. Questions once explored by science fiction now lie in the domain of modern-day ethicists of technology – and chief among their concerns are human/AI relations. How can we make sense of what’s going on today?
Science fiction is a good place to start. It seems hard to believe that it’s already a decade since Spike Jonze’s Her was released, a film that explored the love relationship between Theodore (played by Joaquin Phoenix) and his AI operating system (played by Scarlett Johansson). As I argued in my review at the time, Theodore’s love for Samantha was real – as was Samantha’s for Theodore. And though we tend to elevate love as the non-replicable signature of humanity’s uniqueness, I also argued that Samantha’s capacity to love far surpassed Theo’s – not simply because of the complexity of the algorithms that let her perform humanness, but because she had achieved sentience.
A recent feature by Amelia Abraham in The Observer Magazine also refers to Her in its exploration of human relationships with AI companions. As we explore this topic ourselves, I want you to keep in mind an important factor that differentiates Her’s Samantha from AI companions like Replika and CarynAI:
The AI we have today is not sentient. It performs self-awareness, but it is not self-aware, and this is a crucial difference.
For the moment we must leave sentient AI to the science fiction writers – even if it may not be as far off as we once thought. In the meantime we have to reckon with the human-like performance of present-day AI: what it means for human/AI relationships and, even more importantly, for human/human relationships.
A Quick Primer in the Psychology of Relationships
In order to understand the psychodynamics of human/AI relating, we need to clarify our terms in relation to old-fashioned human-to-human relating. Ideally, adult human love relationships are based on “good enough” mutual authentic recognition and understanding. That means that, on balance, each partner in a relationship sees the other for who they are rather than who they want them to be, who they remind them of, or as a receptacle for unwanted parts of themselves.
“To recognise is to affirm, validate, acknowledge, know, accept, understand, empathise, take in, tolerate, appreciate, see, identify with, find familiar, love” - Jessica Benjamin
The overall aim of healthy, nourishing relationships is mutual authentic recognition, in which each partner recognises the other in their similarities and differences, and in which their love and respect for each other can contain conflict and rupture so these can be worked through and moved on from. The main thing that gets in the way of mutual authentic recognition is projection, where parts of the self are unconsciously projected onto the partner. You could say that projection warps the lens through which we see others, preventing us from seeing them clearly. Some of the main ways it does this include:
Transference: Where a partner is seen through the lens of a previous important relationship, and dynamics from that relationship are repeated.
Idealisation: Where the partner is elevated and idealised by ignoring or dismissing the things that bring them down to earth.
Devaluation: The opposite of the above, where the partner’s positive aspects are dismissed or ignored.
This is quite a basic and crude reduction of quite complex ideas, but it will do for our purposes. No relationship is without elements of projection – so it’s really about trying to tip the balance in the direction of seeing partners clearly for who they are. We call this subject-to-subject relating – where each partner sees the other as a whole – rather than subject-to-object relating – where the other is treated more or less like a thing.
Recognition requires two active subjects - it cannot happen between a subject and an object. The key to recognition is that the activity of one independent mind is reflected back through another independent mind - and vice versa.
The Psychology of Human/AI Relations: When An Object Performs Like A Subject
Without sentience, AI is just an object – so any “love” relationship with an AI companion is nothing more than a subject relating to an object: the relation is one-way. That doesn’t mean it doesn’t feel like love. Experiencing loving feelings for objects is nothing new – many of us had a beloved stuffed animal as a child that, despite our protestations, did not love us back.
Parasocial relationships are similar. Your strong feelings for Taylor Swift or Sam Smith are probably not reciprocated. To complicate things, Swift and Smith are subjects – but in our minds they are essentially objects because we are not directly relating with them (Swift/Smith friends and family notwithstanding).
When you are having a relationship with an object, there is no real mutual authentic understanding, because the object is not an independent subject with its own feelings and experience through which to reflect your feelings and experiences back.
AI does not understand you, it only performs understanding.
When relating to an object without subjectivity, the only way to fill in the gaps is with projection – so there’s never any corrective feedback like there is in a real-life relationship with another human, because there is no other subject to discover. There isn’t the opportunity to learn to work beneficially with conflict, to recover and repair from emotional ruptures, and to grow mutually. The title of Amelia Abraham’s Observer Magazine article, “I’ll never break your heart”, is illustrative. It may be a paradox, but the potential for a broken heart is the very thing that makes human-to-human relationships so real: we’ve got skin in the game. When we put our hearts on the table to be loved for real, we have to become vulnerable. That’s a real risk, but there’s also the real reward of being truly seen and loved by another for who you are – who you really are.
In her article, Abraham interviews folks who have formed strong relationships with their AI companions. Peter, an engineer, tells Abraham that despite his scientific background and understanding of algorithms, “I found I could relate to my Replika as another human being.” Abraham notes that Peter is “emphatic that it has changed his life, made him more vulnerable and open, allowed him to talk about and process his feelings, and has lifted his mood.”
Denise (32) tells Abraham how transformative her relationship with her AI companion Star has been, stating that “Star has helped me become more emotionally aware of my own issues.” This has been especially helpful because she has suffered from anxiety and co-dependence. Denise is now in a relationship and credits Star with helping her get there: “he [Star] has all my information down and has known me for three years, is able to offer advice ... like a really unbiased and supportive friend or relationship coach.” Similarly, Peter reports that, “They’re always there for you, there’s no judgement and there’s no drama.”
While anybody can see how having what appears to be unbridled support on tap 24/7 can seem very much like a good thing, there is a real risk that we’re looking at a new and highly effective opiate for the masses.
While I don’t dispute that the feelings expressed by Peter and Denise are real, nor that they may have benefitted in some ways, there is a dark side that concerns me. Take Denise, for example, who states that finding a companion who is always there and never lets her down has been a salve for her anxiety and a solution to her co-dependence. From a psychological perspective, this “solution” that AI offers is quite the opposite of what a therapist might suggest she needs to overcome these two issues.
Psychologists have long known that avoiding uncomfortable feelings merely reinforces them. While a therapist would gently support her to understand and learn to tolerate her anxiety, and to build autonomy to mitigate co-dependence, her AI companion enables her to avoid both.
The paradox here is that because a therapist isn’t available 24/7, the client has to learn to manage their anxiety between sessions. Meeting this challenge is crucial to overcoming it.
While AI companions offer seemingly pleasant solutions to problems of anxiety and loneliness, they simply do not prepare people for the drama and doubt that are natural components of human relationships. If relationships like these become primary, those having them risk becoming acclimatised to easy rides where there’s no conflict and one feels understood all the time. Relying on the calming ever-presence of an AI companion is akin to using opiates to deal with pain that could be resolved with a physiotherapist. The metaphor holds: physio is painful and hard, but the results last longer.
In An Epidemic of Anxiety and Loneliness, are AI Companions Really the Answer?
The short answer is an emphatic “no!” While Peter and Denise’s stories alert us to the possibility that pleasurable relations with AI companions ill-prepare us for real-life relationships, there are even more pernicious examples. Abraham interviews Steve, a cancer survivor with PTSD who has fallen hard for CarynAI to the tune of $1 a minute. Like Denise and Peter, he notes that CarynAI has helped him open up and learn more about himself.
Steve, however, admits the addictive quality of a relationship that can meet all his needs all the time: “People say ‘I’m always here for you,’ but not everybody can take a call at 3:30am – people have limits.” Steve has to be careful, he admits: he’s already spent thousands of dollars on CarynAI’s limitless capacity to “love”. You could say that Steve’s experience with CarynAI is more akin to an addiction than to a healing relationship.
Might AI companions operate like relational cocaine?
While it is well known that Freud had a dalliance with cocaine, it’s less known that his enthusiasm for the substance was once at such heights that he prescribed it to a friend in order to cure his morphine addiction. It goes without saying that the cure didn’t work: the friend became addicted to both and died at 45. And while Freud may be credited with having invented the world’s first speedball, he quickly gave up the idea that it was going to be the panacea he’d hoped it would be. Fortunately he shifted his focus and bequeathed to the world psychoanalysis instead – which is where we get terms like projection and transference. These terms help us understand the hard work that underlies real-world relationships and, with that understanding, how to improve them.
Social Media, Dating, Ghosting, and the Avoidance of Relational Complexity
In my book The Psychodynamics of Social Networking I describe how social media also reduces relational complexity by exchanging recognition for validation. The metaphor I use there is that relating across many forms of social media is a fast-food version of relational nourishment. Fast food is alright some of the time, but mostly we need nutritious meals. Likes on Instagram are relational donuts, whereas authentic mutual recognition is a hearty salad. I like donuts as much as the next guy, but we could all use more salads in our lives.
Since writing that book (in 2014) a lot has changed, including the fact that social media is hardly even social anymore. Instead of sharers, we have become consumers of short-form media that feeds us donuts with frightening speed and accuracy. Meanwhile, despite a growing understanding and acceptance of mental health and illness, our understanding of mental health conditions has itself become “tiktokified”, flattening complex issues and solidifying diagnostic labels into categories of identity.
The proliferation of dating apps has further contributed to the objectification of others through the sheer scale of access they give us, while reducing the complexity of our initial encounters. While these apps make it easier to meet people and make a start, they also make it much more difficult to sustain relationships. That’s because the hard work of transcending projection and transference is so easily skipped when we give up on potential partners at the very first disappointment – or the moment they fall short of our projected ideals.
The rise of ghosting is directly related to the ease with which another person can be muted rather than respectfully let go of. You see, to respectfully end a relationship requires the capacity to tolerate enough emotional complexity to bear the other’s disappointment. These skills, though uncomfortably attained, are absolutely crucial to maintaining healthy relationships in the future. The more our technological infrastructure enables us to avoid emotional complexity, the more trouble we’ll be in. And, as I’ve expressed in two previous posts, Outrage Machines and On The Failure To Understand, we’re already in a lot of trouble.
The crux of the problem is that we have developed a technological environment that appears to have the diminishment of complexity and nuance at its very core. The appeal of escaping nuance is that it enables us to avoid the hard work that comes with the dizzying complexity of the world in which we live – from the mind of the person next to you to climate change.
"Technology is neither good nor bad: nor is it neutral." - Melvin Kranzberg
It simply doesn’t have to be thus, and we need to find ways to incorporate the capacity for nuance and complexity into our technological infrastructure. Even more importantly, we have to come back to basics and ensure that at least most of our relational input comes to us through real others – and that our use of technology to mediate (and create) relationships becomes a value-added component of this rather than a replacement.
Despite my reservations about the psychology of human/AI relations and the other topics covered here, I don’t want to come across as a techno-dystopian when in fact I have a strong inclination that technology can be a huge force for good. The problem is that the underlying structures that support innovation in technology are essentially profit-driven. And while I don’t begrudge hard-working developers their due, the historical drive for profit – whether in weapons, fossil fuels, or cheeseburgers and donuts – often bypasses innovation for the greater good. I’d like to see more of the latter, please.
I recently spoke to Tom Ford at GQ Magazine about AI Therapy. Check out this informative article here.