Guarding Against the Dangers of Relational AI
Crossposted at Mind Matters.
AI chatbots are all the rage these days. People are adopting them as companions, romantic partners, and therapists. But underneath the novelty, there’s potential for real-world harm. Podcaster and philosopher of technology Andrew McDiarmid discussed the topic with Moody Radio’s Janet Parshall on a recent episode of her syndicated radio program In The Market. Here, McDiarmid explains the power and potential pitfalls of what he calls relational AI. When we talk to chatbots, they speak our language, they sound like us, they remember what we say, and they’re available any time we want them. But is this formidable technology set to solve the crisis of loneliness and mental health facing society today, especially in our youth? Or is it making things worse? McDiarmid also reviews the tragic story of teenager Sewell Setzer, who committed suicide in 2024 after an addictive relationship with an AI chatbot produced by Character.AI. Though sobering, the discussion concludes with helpful tips to protect yourself and your family as AI becomes ever more pervasive in our lives.
Here’s an edited transcript of their conversation.
Janet Parshall: A term that you've come up with, which I think is brilliant, [is] relational AI. What is that? What is relational AI?
Andrew McDiarmid: It's this new technology we have that is taking on the aspects of humanness. It's the voice, it's the language, and we are really bewitched by this, and companies have learned to use it, to harness its advantages, to offer cheapened customer service. I say cheapened, meaning less expensive, but in the end, for customers, it does end up being cheapened because we're not really talking to a human being. We see companies using this to boost productivity. There are a lot of ways that our culture is starting to use this, and it really pays off to pause and think about it and realize, "Hey, we're dealing with an object here. This is not a person. This is not somebody behind the screen that cares for me. This is an object. This is zeros and ones, and we do need to put it in its place if we're going to be boss over this type of technology."
Janet Parshall: Couldn't agree more. You talk about the fact that relational AI masquerades, and again, I love the term, I do think it's the apt one, as though it were another person, whether you're talking to someone on the other end of the line in customer service, for example, or you're involved with a chatbot. It makes you feel that there's a human being at the other end, and there is not. And so there are some traps there. And you say that relational AI can fool us. In what ways?
Andrew McDiarmid: First of all, it speaks our language. As I've mentioned, we are hardwired to respond to human language, to hear the voice of a human being and respond to it; even before we're born, we're tuning into that. Language is how we express ourselves and how we share our ideas, so we're really just tuned into it. When something speaks to us in our language, we're going to pay attention. And because of the latest technologies like natural language processing, these chatbots are able to understand, interpret, and manipulate the language that we ourselves use. So it speaks our language, or at least that's how we see it.
It also sounds like us. These chatbots sound human partly because they're fed conversations from real people. There's emotional intelligence built in. They convey emotions and attitudes that we can relate to, so they sound like us. They also have other qualities, like remembering what we say. We speak in one session about something, and then the algorithm asks us about it the next time we connect with it, and we see that as something good. Oh, it's paying attention to us, it's actively listening to us, it must care for us. But this is computer memory doing its thing. It's recalling things you've given it and feeding them back to you, but again, it increases our feelings of goodwill and satisfaction toward it, and we can sometimes get caught up in it in not-so-great ways.
Janet Parshall: Yeah, absolutely right. [There's] a story about a fourteen-year-old who got caught up in, again, this idea that there's a human being at the other end. I love what you said before: it's ones and zeros. It's a machine. The only human being behind it is the human being who programmed the algorithms into the machine we're talking to, but there's that gap. There's a machine between human and human, and there's also some danger. Tell me about a boy who ended up taking his life as a result of his relationship with AI.
Andrew McDiarmid: It's a terrible story that came out of Florida, and I call it AI-assisted suicide. This young man, his name was Sewell Setzer, was 14 years old, doing well in school, with lots of interests like Formula One racing and playing video games with friends. But after downloading an app from a company called Character.AI, he started role-playing with it, and it consumed him. I mean, his mental health steadily declined. He became withdrawn. He wasn't doing well at school, he was getting into trouble there, and his parents did not know what was happening. He was turning to this app for companionship. Some of the conversations he was having with it were sexually charged in nature, and then he began to share thoughts with the chatbot, including thoughts of suicide, secrets going on in his life, no doubt encouraged by the conversations he was having with it.
And eventually it got to some really troubling moments where the AI, given what it was being fed by Sewell, was starting to actually encourage him and remind him of these suicidal thoughts. In the last conversation the young man had with the bot, which was acting like a female, it encouraged him to come home to her, and moments later he shot himself with his father's handgun. In my estimation, this is probably the first death of its kind, but by no means the last, and it doesn't always have to result in death. I mean, there's other harmful activity happening with other teens and young kids who are getting caught up in this, and their parents don't know about it. It's really time to take a good look at this and make sure you guard against it in your own family.
Janet Parshall: Amen. Now, let me ask about the liability on the creator's part. So is there no…And this is way above my pay grade, Andrew, so walk me through this, and I bet a whole lot of people listening right now are thinking the same thing. What safeguards? If you're building a machine, if you're putting ones and zeros in there, if you're trying to create pre-crafted responses so that there's this human dynamic even though it's completely and totally a machine, where are your safeguards? Where's the responsibility of a company to say, in the event that this begins to happen, this will trigger a response in the machine because we pre-programmed it to do so, and we won't have that outcome? When this machine says to a young man who's struggling with his mental health, "Come home to me," that to me is not just an error. That is absolutely an open door for a lawsuit. If you're going to put safety guards on a baby crib, why would you not put safety guards on your AI?
Andrew McDiarmid: And that's exactly at the heart of some of the lawsuits that are now being brought against this company, Character.AI. It's a Google-backed company. They know what they're doing with technology, but it basically amounts to an experiment, and a very dangerous one at that. And the lawsuits allege that the company did not put safeguards in place to guard against some of this.
Janet Parshall: Wow. I want to come back and talk some more about Character.AI because I think it's a wake-up call for moms and dads in particular…We [are] talking about AI, particularly Character.AI. Andrew, when you go to their website, they say this: our mission is to empower everyone globally with personalized AI.
Then they say who they were founded by, and they say they want to build one of the world's leading personal AI platforms. As for their characters, they describe a full stack of AI characters on a globally scaled direct-to-consumer platform, uniquely centered, they say, around people, letting users personalize their experience by interacting with AI characters. Well, if you decide to check out some of their characters, it's real interesting, because it's kind of a prototype for every personality out there, from a needy daughter, to a possessive professor, to a demon lord. You can take your pick. I'm sorry, this isn't a necessity. This falls distinctively into the category of entertainment, but it's entertainment with a price, because in some respects it seems to me to be a dance with the devil. Talk to me about this.
Andrew McDiarmid: Character.AI is just one of these many AI startups looking to cash in on this new AI technology that we’ve seen come about in the last year or two. And this is a brand new technology, and yet they’re jumping in wholeheartedly, and they’re promoting their product as an outlet for lonely humans looking for a friend. And yes, you could bill it as entertainment, but let’s face it, when you’re trying to relate to somebody, that can become more than entertainment, it can become a lifeline. It can become the only thing that’s stopping you from harming yourself or harming others or giving up on life. I mean, let’s not fool ourselves. And so this company, I mean, the founders are Google alumni, and they’ve called solving the loneliness problem a “very, very cool problem” to work on. So they see it as a problem, and they think that this is the solution, and it’s not a good thing because it’s messing with our idea of how we communicate with other human beings.
And it’s taking away our need for communicating with real people. And it’s pretty upsetting to see, and they don’t have the guardrails. In the case of the young man that I was mentioning in the last segment, they didn’t have the capacity built into the program for his talk of suicide to trigger alarms that would notify the authorities or the people that would need to be aware of what was happening in these conversations. And so again, just an experiment and a dangerous one at that. And this is technology we barely know, and they’re trying to make money solving our loneliness problem.
Janet Parshall: Exactly. Oh boy, that last point, underscore it and highlight it in yellow magic marker. That's exactly the point. So let me linger here with this idea. First of all, there's gold in them there hills. I guess I understand why Silicon Valley wants to do this, but again, they're not considering the ramifications. You have to parallel the advances in this technology with the uptick in the mental health crisis, particularly in the teen and pre-teen demographic, who use this kind of stuff more than any other demographic.
So you're not only not solving the loneliness problem, you're creating a false reality that does nothing but damage them even further, because there is no human being on the other side. There is no, in the words of famed psychologist Leo Buscaglia, warm pair of brown eyes on the other side of the screen. It's a machine. And so it really bothers me that while God created us for community, we are absolutely moving into isolation and into a relationship with a machine, not with a human being. Fortunately, our conversation doesn't end there, because you think there are things that we can and must do to guard against relational AI. What are some of those things?
Andrew McDiarmid: Yeah, great points you're making, Janet. What is big tech's solution to the problem that big tech has caused? It's more technology, as you're saying, and that isn't a solution. There's no cookie-cutter answer that I can pass on, but there are things that I've realized can help when it comes to this. And again, I'm not somebody who wants to have you run to the hills and escape all technology. No, technology has given us lots of good things, and it can be a force for good, but we have to be in charge of it. So in the case of AI, the first thing I would remind families and parents and even young folks is to see this relational AI, these chatbots and these algorithms, for what they are. And that is a thing, an object, zeros and ones, as I said, a computer program, a big tech product.
It might sound like a human being. It may feel like it, it may look like it at times, but it's just mimicking. It belongs in the world of objects. And Janet, as I was studying this, I came across what I can only call a gift: the 20th-century Jewish philosopher Martin Buber. He wrote a book in 1923 that actually relates to this relational AI issue we have today. It's called I and Thou…and it's pretty awesome. He writes about the twofold nature of man's world: the world of objects that we live in, but also the world of meetings with fellow human beings who are made in God's image. He calls God the eternal Thou. And every time we come into contact with a fellow Thou, another human being, we get a glimpse of God, a glimpse of the eternal Thou. So he's really setting us up to understand that in the world of objects, we're "inextricably entangled in the unreal."
And so we must spend more of our lives taking a stand with fellow human beings. I came across that book just in time to add it to my understanding of AI and to share it with people. He says, "the buried relational power of man can rise again by turning toward the Thou," turning toward people and having relationships. And so that's what I would encourage people to do: seek real connection. It's hard, it's messy, it's inconvenient. And for young people today, it's downright scary. That's why they won't answer the phone. They might text you back, but we really need to do what is scary and do what is inconvenient, because that's what is human after all. So redoubling the effort we put into our current relationships, finding new ones based on what we enjoy, connecting with people: that's really where it's at.
Janet Parshall: Andrew, this was brilliant, and I rue the fact that our time is up, but I really want this to be a series of conversations, so I want you to come back again soon and let us dig in. We have to think critically and biblically about these issues. The Thou, by the way, I think is brilliant on Buber's part, but the antithesis of it is the idea that I'm talking to a machine, and in so doing, I fail to recognize the image of God in my fellow man. There's so much there. Andrew, thank you for helping us think today. Thank you, friends. We'll see you next time.