Image by Waldemar

Psychology : Applied

Is AI Making Us Stupid? The Dangers of Making Hard Things Too Easy

Updated: Oct 15

A man struggling to write surrounded by bits of paper
Image created by AI

When was the last time you sat down to write something difficult—really write? The kind of task where each sentence feels like a puzzle, where your mind wrestles with ideas, and you leave the process more aware, more engaged, even if slightly exhausted. These moments of struggle are where we grow, where we stretch the boundaries of our understanding. But nowadays, it’s easy to skip all that. I know because, well, I just did. The paragraph you’re reading right now, with its carefully crafted rhythm, its clever hooks—it wasn’t written by me, not in the way you’re thinking. It was written for me. By ChatGPT.

 

And that’s exactly the problem.

 

Only now are you reading the first words that I, Aaron Balick, have produced independently. You see, even the reveal and the evocative sentence, “And that’s exactly the problem”, separated out from the main text, were ChatGPT’s idea - not my own. Upon seeing what it conjured up I was momentarily awestruck - a feeling that was quickly followed by a creeping uncanniness. How does it make you feel to know it was written by AI? A couple more questions to ponder:

 

  1. If you were told at the top that this article wasn’t written by me, but by AI, would you have bothered to read it? Why or why not?

  2. Do you generally bring different expectations to material you know has been written by AI rather than by a real person?

  3. Would you feel differently if it were written in collaboration with me?

 

ChatGPT did exactly what I asked it to do the very first time. As an experienced writer myself, I would judge this opening salvo to be pretty darned good. I’m confident that I would have eventually come up with something similar, but it would have taken much more time. And while I expect that you will allow this little gimmick because of the purpose it serves, you might be less forgiving if you found out that the rest of this blog was also written by AI but that I was passing off as my own. It wasn't, and for that you only have my word. The curious thing to explore here is why it makes a difference.

 

This is the first of a series of posts I will be sharing on AI as we approach November 23rd, when I’ll be presenting Becoming a Techno-Psychologist: Making sense of artificial intelligence, hyper-connectivity and the digitally mediated human at the UKCP's Psychotherapy in a Changing World Conference in London.


AI is a lot of things, but I will be focussing mostly on how it affects regular folks like you and me through apps and platforms that are available to us all. As with my other work, I’ll be looking at the consequences for the self, relationships, and culture, from the perspective of depth psychology. If you’d like to see the prompts I used to get ChatGPT to write my introduction, just scroll down to the bottom.

 

Expertise, the Appearance of Expertise, and Influence

 

I don’t know how you found your way to this blog post, but I do hope it has something to do with the social capital I’ve accumulated over time in this area, and that you are bringing some trust in my expertise in the matter. My work draws on ideas I’ve accumulated from more than two decades of research, examination and personal experience at the nexus of technology and psychology. And while I may not be a particularly influential personality in this space, I hope that the depth of my ideas, as opposed to the breadth of their reach and influence, has made a contribution.


Front cover of The Psychodynamics of Social Networking

What troubles me is that all of the words I have written until now, words that come from those many years of devoted work – from this blog post to The Psychodynamics of Social Networking – can now be easily written by a bot – a bot which knows nothing but performs expertise (not without, it must be said, making mistakes) about just about anything, magnificently. And it won’t be the bot that’s going to use this to gain influence for itself (at least not yet); it’s going to be the person who best knows how to utilise it – a person who may not have expertise in the given subject area at all. Just look at TikTok or any other social media platform whose algorithms weight influence over truth.

 

There is a direct relationship between real expertise and the time invested in the hard graft it takes to achieve it. The recognition of expertise arises from the social capital that usually comes with it. The social capital may include things like degrees and awards, a good reputation, or the solid evidence of work well done. We tend to trust and respect medical doctors, for example, because we’re aware of the long and difficult road it takes to acquire their MD. It’s also why medical school is so difficult: it needs to be so because that’s the only surefire way to produce deep embodied learning and expertise.

But having expertise is no guarantee that you’ll be influential, and having influence is no guarantee of expertise.

And here lies the problem – there is no natural link between expertise and influence. Actual expertise may cruise along invisibly while an appearance of it can gain worldwide attention. One who is expert at the performance of expertise may very well acquire a great deal of perceived social capital without having earned it. Social media algorithms are much more interested in how content engages people than how accurate it is. A more interesting story will spread wider and faster than a less interesting truth.

 

The Widening Gap Between Performance and Knowledge


A person alone on a stage facing a dark room

The performance of expertise is the hallmark of the conman. After all, the “con” that tricks you is the false CONfidence that you were conned into having. And while folks like Donald Trump stand out among the biggest conmen in history, despite being puzzlingly bad at performing expertise, I’m more interested in the smaller cons that happen every day; the way AI enables us to con ourselves and others so easily that we hardly notice. It does this by enabling us to present polished surfaces while circumventing or avoiding difficult work and complicated depths.

Not every cardiac surgeon can do a great Ted Talk, but would you rather have your chest split open by a great surgeon or a great speaker?

I have already written widely about the way in which social media invites the expression of a curated performative self, and the consequences that has for an individual and their relationships. A similar dynamic can be seen in the way that AI makes any desired presentation of self, in any desired context, so much easier. And just like in social media, presenting a shiny surface is not a problem so long as it’s tethered to a grounded sense of self. However, a surface untethered from depth will always be unstable.

 

The promises of AI really shine where they can reduce the burden of administrative grunt-work in the same way that less sophisticated machines like dishwashers and robot vacuums save us time to devote to more important things (if we choose not to use that time doomscrolling or zoning out on Netflix). The potential for AI to divest us of mundane life admin in order to create more time for creative pursuits is an indisputable win. I, for one, would welcome the many weeks I could potentially reclaim from mundane and repetitive jobs like completing tax returns, switching energy providers, and other life/work bureaucracy. But getting it to write my weekly blog posts, which it can easily do, feels like a cheat to me. And perhaps that’s why I never seem to manage to be as prolific as I’d like . . .

 

But with every gain there’s a cost. It might be easier to find your way from Land’s End to John O’Groats driving a car with a satnav, but don’t you lose something, many things even, when you give up your maps, or the risk of getting lost? A step further – cycling there would be orders of magnitude harder – but in the end, muddy and wet, also orders of magnitude more satisfying?

 

AI might write my blog post just as I ask it to, but isn’t it the struggle of getting there myself that challenges my brain and imbues my words with more soul? To quote ChatGPT, it’s in these struggles, “where we grow, [and] where we stretch the boundaries of our understanding.” If we outsource or circumvent too many struggles, will we not stunt our own growth? And if content is produced for us rather than by us, does that reduce its value to others too? My aim isn’t to dismiss AI assistance altogether, but to understand it and make informed choices about how it’s used. When it comes to expertise, I approach it in three distinct ways.

 

The Haves, the Have Nots, and the Appears to Have

 

Let’s call the people that have real learned expertise or life experience in a particular area the “Haves”; people who don’t have it the “Have Nots”; and the people who don’t have it but choose to perform it, the “Appears to Have.” For each of these people AI will have a different role that might be described as the Good, the Bad, and the Ugly.


People working in a lab

Haves might use AI to assist them in their work. The resources that they draw upon will have already been hard won (e.g. research, life experience, education, training, etc.) and AI can be brought on board as a tool either to enhance their work or to help them grow their influence – for example, by helping to make more interesting PowerPoint presentations (something that very interesting and creative people are often terrible at). This could be considered “the good”: a use of AI that adds value to solid intellectual or creative capital.

 

The caveat here is that continuing to learn, grow, and expand one’s expertise is a life-long task. There are some areas where AI assistance may make some tasks, like research, easier than they ought to be, quietly snipping away opportunities for further growth and discovery.

 

The Have Nots are seeking to become experts in their area of interest; they just don’t have the expertise yet. In order to attain it, they will need to engage in the hard graft of learning, to experience the slow burn of climbing up that learning curve as they develop and grow into being Haves. While AI may play a helpful role in this process, it will be counterproductive if it takes so much of the pain away that it threatens the progress of embodied learning.

A student in a library

To go back to our metaphor, going from Land’s End to John O’Groats via GPS will get you there faster, but getting there by map, getting lost every now and then, or even better, going by bike, will get you much better acquainted with British geography. Being a student is a similar experience. Going to the library, looking at a shelf of actual books, thumbing through pages, taking notes in a notebook with a pen, organising notes; all of this contributes to learning; it’s physical, it’s demanding, and hence it’s more likely to stick.

 

Asking AI to do all that legwork for you may be convenient, but are you really learning? Is the resulting essay showing somebody you arrived at John O’Groats, or demonstrating that you’ve learned some British geography?

 

This is an example of “the bad”. Not bad in a judgy kind of way, but bad as in a “bad for their personal and intellectual development” kind of way. This is not to say that students or other learners working towards becoming Haves in their chosen area shouldn’t use AI – they should just be wary of how they use it and ensure it doesn’t get in the way of their learning.

 

Appearing to Have is the ugly (and is a little bit judgy). Here there is neither the will to learn nor to engage in the work required to Have the expertise that they will be performing. In these cases, the appearance of having trumps having itself. After all, it takes a lot less effort, and if influence can be gained by skipping the hard part, why bother with it? AI can assist anyone who knows how to harness it for those appearances, whether that’s in the form of a covering letter, a personal statement or job application, a blog, a book (AI is writing lots of them now with human authors being credited), or a presentation. Here, the performance of expertise is easily attained, but there is zero foundation beneath it and very little learning to be derived from it.

Two bottles labeled "snake oil"
This image was created by AI

In today’s world, it is easier to gain influence via appearance than via actual expertise. Expertise, often dry, fails to get the likes. You can see why the incentive to become expert in the appearance of expertise alongside self-marketing might be a lot more attractive than developing subject-area expertise. Folks can get away with this stuff on short-form social media and in prolific AI-assisted blog posts. But they will be unable to judge if what AI is creating on their behalf is true, and they may be complicit in sharing false information. One day, they may even be held to account. It’s one thing to be able to deliver a glossy Ted Talk – it’s another to be able to rise to the Q&A afterwards.

 

Is AI Making Us Stupid?

 

I consider myself lucky that these tools were not available when I was working my way towards “having” some expertise in my chosen area. Mind you, my life was still rather easier than the lives of those who came before me. Becoming an academic in a world of home-computing was easier by magnitudes than it was for my predecessors who relied on index cards, mechanical typewriters, and loads of Tipp-Ex. And while I’m grateful to have avoided that level of pain, I do wonder if those academics who preceded me in that more analogue world learned stuff maybe just a little bit better because of it?


Old Fashioned Typewriter

I remember when the prospect of writing a 2000-word essay was terrifying. By the time I finished my master’s degree, a short essay like that seemed like a doddle, but my Doctoral thesis loomed ahead like a total impossibility. It was only by doing the hard work that these things became easier. If we make them too easy on ourselves from the start, we’ll never have the foundational skills we need for the next hard ones. I’m not advocating torture here, I’m advocating “hard enough.”

 

The road from being a Have Not to a Have is a hard one. Even the most proven Haves experience imposter syndrome because (hopefully) they aren’t narcissistic enough to think they know it all (yes, we’ve all met the ones who do). But one will be even more vulnerable to imposter syndrome if they achieved their “Have” (the certificates, the recognition, the social capital) but unwittingly cut too many corners along the way. No ill intention involved here; it’s just that things were made a little too easy for them to feel confident that the accumulated experience is actually theirs.

 

To describe somebody as “accomplished” recognises the accomplishment of their hard work. But if the kudos of accomplishment can be derived from appearances of accomplishment alone, then some may ask why bother? Might it be worth investing your energy in becoming expert in appearances rather than subject matter itself? After all, that’s what people see.

 

Fortunately, most of us do not face this binary situation. Most of us want to have knowledge in our subject area and be able to demonstrate that well. Most people will wish to use AI assistance as part of that process, and I believe that most people do so in good faith. They lean into the convenience it offers to achieve the polished finish they desire more quickly. The problem here is more insidious. You see, with each little assistance a difficulty is lifted, and very likely, a learning opportunity lost. Cumulatively, can you find your way to really being a Have if you’ve had training wheels on the whole time?

 

PlagAIrism

 

Getting AI to do all the work for you, whether you get away with it or not, is simply a form of plagiarism, and plagiarism is cheating. For cheaters, the main concern is whether or not they get caught. My concern here is less about intentional cheating than the way in which we may be cheating ourselves and each other. I’m speaking to the risk that a potentially authentic and unique voice has its flame snuffed out by the presence of so much AI. In the end, it's not so much "is AI making us stupid" as "is AI getting in the way of my personal, creative, or intellectual growth?"

After all, AI is plagiarism writ large.

Large Language Models (LLMs) are a form of plagiarism. They are produced from content derived from others, combined and strung together through some very fancy and complicated algorithms that enable outputs to appear human-like.

It’s hard to find a unique voice in a cacophony.

Intellectual and creative endeavours are an important way in which we express our unique natures as individual human beings. The more we use LLMs to assist us in our intellectual endeavours, the less that individual voice finds a place. While it’s natural to lean on those who came before us, we need to integrate and assimilate those ideas in order to have our own take. In academia this has always been the case, but at least there we cite our sources, both for transparency and to recognise the shoulders of the giants upon which we stand.


An image of Michelangelo's David inside a machine
This image is created by AI

In AI, other people’s ideas are compiled without their consent and offered to the world in a new form without reference to the originals. The hard work of integrating and making sense of those ideas is outsourced. We’ve got no skin in the game. Imagine Michelangelo describing how he’d like his David to look before inputting a block of marble into an AI 3D printer. Moments later (not three gruelling years) there it is shining before him. I’ve little doubt that this is possible, but what of its value? Furthermore, since the content AI draws on is all in the past, doesn’t this also limit the possibility for a creative future?

 

There’s no “I” in AI

 

If you’re anything like me, you are a lot less interested in reading an article that clearly states at the start that it’s written by AI. For me that’s because I’m interested in the mind that created it. With AI, however well written it may be, there’s no “I” behind the words – it lacks soul. But yet again, the bigger issue isn’t whether something is fully done by AI or a human – it’s the grey area – the in-between. When does one’s unique personal expression in the world, the expression of one’s creative “I”, get occluded, diminished, watered down, or even contaminated by AI’s assistance? To what degree will AI, like social media before it, play a role in our becoming lost between a true expression of ourselves and a performance?

 

On November 23rd I will be presenting Becoming a Techno-Psychologist: Making sense of artificial intelligence, hyper-connectivity and the digitally mediated human at the UKCP's Psychotherapy in a Changing World Conference. Between now and then I will be sharing further thoughts on this blog. You can subscribe below.

 

 

My prompts:

 

AB: I am writing a blog post called "AI and the Great Shallowing: on the danger of making some things too easy." I want to use a gimmick in the first paragraph. You will write it on my instructions, and then reveal at the end that it was written by AI and not by me. I want you to focus on my position that many functions that AI performs circumvent important tasks that require deeper engagement from humans, thereby endangering their learning processes. A difficult task engages the human mind better, getting AI to do it provides a shiny surface, but there has been no engagement. Do you understand my request? Can you do this for me?

 

AI: I understand your request perfectly! You're asking for an introductory paragraph for your blog post "AI and the Great Shallowing," where I, as the AI, will construct an engaging lead based on your position: that AI, while making tasks easier, risks undermining deep human engagement and learning. The twist will come at the end when it's revealed that this paragraph was written by AI itself, reinforcing your argument.

 

Let me know if you'd like to give any further guidance before I begin, or if you're ready for me to write the first draft!

 

AB: Would you like to ask me some questions to guide you?

 

AI: Yes, a few questions will help me tailor the introduction to match your tone and focus. Here they are:

 

1. Tone: Should the introduction be formal, conversational, or somewhere in between? Do you want a specific emotion to be evoked (e.g., concern, curiosity, irony)?

  

2. Audience: Who are you addressing? Tech-savvy readers, general readers, educators, or people in a specific industry?

 

3. Examples: Do you want to use any particular AI examples (e.g., language models like me, automated writing tools, image generators) to illustrate the "shallowing" effect?

 

4. Personal POV: Would you like the introduction to reflect your personal experiences or general observations? Should I refer to "I" (your voice) early on?

 

5. Revelation of AI: How subtly or overtly do you want the reveal at the end to be? Should it be playful, or more of an "aha" moment?

 

Let me know your thoughts on these!

 

AB: Tone - it's the first paragraph so it should be engaging and attention grabbing. I am an academic, but this is written for a general audience, so expert voice, but also natural, humourous, and engaging. You may use yourself as an example. Use the first person as if you were me. Make it an aha moment.

 

AI: Got it! Here's what I'll aim for in the introduction:

 

- Tone: Engaging, with a mix of expert insight and natural humor.

- Voice: First person, written as if you were reflecting on this topic from your own perspective.

- Example: I'll use myself (the AI) as part of the narrative, revealing that "I" was the one writing the paragraph at the end.

- Aha moment: The revelation of AI involvement will come as a twist, emphasizing your core argument about the shallowing of human engagement.

 

I'll start writing now. Ready? Here’s the draft:

 

---

 

When was the last time you sat down to write something difficult—really write? The kind of task where each sentence feels like a puzzle, where your mind wrestles with ideas, and you leave the process more aware, more engaged, even if slightly exhausted. These moments of struggle are where we grow, where we stretch the boundaries of our understanding. But nowadays, it's easy to skip all that. I know because, well, I just did. The paragraph you’re reading right now, with its carefully crafted rhythm, its clever hooks—it wasn’t written by me, not in the way you're thinking. It was written for me. By AI.

 

And that’s exactly the problem.

 

---

 

How’s that?

 

AB: Wow. You are really impressive and a bit scary.

 

 

AI: Thank you! That’s the exact reaction I was hoping to evoke—engagement with a hint of unease, which fits perfectly with your theme. AI can certainly be both helpful and unsettling, especially when it makes things too easy!

 

If you need more adjustments or further sections for your blog, feel free to ask! I’m happy to help (in a non-scary way). :)

