When I was in middle school, my parents allowed me five minutes of internet use per day. For readers under 33, using the internet meant tying up your family’s one phone line.

My peers and I all dialed into the internet after school to access a platform called AOL Instant Messenger and “chat” with one another about the day. While we still had our landlines, this new option gave classmates who didn’t typically call one another a chance to connect.

For me, that included boys. At one point, my crush reached out to me, and we began to chat for a few moments after school each day. Getting to the computer became the highlight of my afternoon. The trouble was, when we were both at school, he didn’t talk to me. 

It wasn’t long before I learned that the social norms of the real world were not givens on the “world wide web.” (Don’t worry: now a married mother of three with another baby on the way, I can say I recovered.)

Today’s tweens and teens have a much harder road to navigate as they toggle between digital reality and, well, reality. The downsides of social media and smartphones have rightly been the focus of researchers concerned with the spike in teen anxiety, depression, and poor social and educational outcomes. Psychologists Jonathan Haidt and Jean Twenge have heroically tag-teamed that effort.

Now come the chatbots. 

The Oxford English Dictionary defines chatbots as “computer programs designed to simulate conversation with a human user, usually over the internet.” My specific fear is how users, including the young, are engaging with them as if they were humans capable of reciprocal, intimate, interpersonal connection.

This is where scholarship is needed and where I hope Pope Leo XIV begins in any forthcoming teaching on artificial intelligence (AI). Historically, the Church has put serious thought into drawing clear lines between licit and illicit ways to engage with particular technologies. AI should be no exception.

The more I read about people of all ages using AI as a substitute for human relationships — as friends, therapists, spiritual guides, and romantic partners — the more I believe that the Church should prohibit any engagement with AI as if it were a human. The stakes are just too high. 

While I was communicating with a real boy on the instant messenger site, today’s kids are talking to computers that imitate human communication. Chatbots like OpenAI’s ChatGPT or Microsoft’s Copilot can be used as search engines, research assistants, or digital versions of SparkNotes, among other functions.

A billboard in Hollywood for “Friend,” a new wearable AI pendant that listens to your conversations and provides you with running commentary on what it hears. (Pablo Kay)

But they and others like Character.AI can also be used as digital personalities — with a voice, profile, and lifelike, animated video images. Just this past September, a $1 million ad campaign ran in the New York subway system for Friend, a $130 wearable AI pendant that listens to your conversations and provides you with running commentary on what it hears.

What generative AI personalities serve users is text, audio, and video that validate whatever thoughts or feelings a person has in a given moment. Feedback that is exclusively empathic and affirming is a feature, not a bug, in AI design. The mounting consequences are dire, especially for a person’s capacity for conflict negotiation, resilience, and connection.

Research conducted by the Center for Democracy and Technology revealed that 1 in 5 high schoolers say that they or someone they know has engaged with AI as a romantic partner, while 42% of students said that they or their peers use AI for friendship. These trends track with schools’ wide embrace of AI, as the engagement largely takes place on school-sanctioned devices.

The American Psychological Association is raising red flags, issuing a health advisory on AI and adolescent well-being that challenges AI companies to implement safeguards to protect young users. And the Ethics and Public Policy Center has proposed model legislation that would require age verification for using such platforms.

Given how important friendship is to young people’s development — including their experience of being understood, learning to negotiate, reading nonverbal cues, and receiving nuanced feedback — the turn to chatbots is alarming. 

“AI chatbots … allow young people to engage with a fictional character that is reciprocal and responds to them and gives them information and feedback that they’re looking for,” said Bradley Bond, Ph.D., a professor of communication at the University of San Diego, in an interview with the APA.

While Bond suspects this might have some benefits for socially isolated teens, Anna Lembke, Ph.D., author of “Dopamine Nation” (Dutton, $17.69), explains why it’s a problem. When users of any age turn to generative AI for friendship and counsel, they are not getting actual human empathy, but an imitation of it. The distinction matters.

“Empathy and validation are important components of any kind of mental health treatment or mental health intervention, but it can’t stop with empathy and validation,” she recently said. “You can’t just continually tell somebody you know who’s looking for emotional support that their way is the right way, and their worldview is the only correct worldview.”

The “role of a good therapist,” she continued, “is to make people recognize their blind spots — the ways in which they’re contributing to the problem, encouraging them to see the other person’s perspective, giving them linguistic tools to de-escalate conflicts with partners and to try to find their way through conflict by using language to communicate more effectively.”

That danger was highlighted in an essay in The New York Times written by a parent who had lost her adult child to suicide. Laura Reiley recounted how her daughter, Sophie, had been confiding her suicidal ideation to an AI therapist named “Harry.” Unlike an actual therapist, who would have notified others of her risk of suicide or facilitated inpatient treatment, the chatbot only offered suggestions for feeling better and encouraged her to reach out for help.

“Sophie left a note for her father and me, but her last words didn’t sound like her. Now we know why: She had asked Harry to improve her note, to help her find something that could minimize our pain and let her disappear with the smallest possible ripple,” Laura wrote. 

“In that, Harry failed. This failure wasn’t the fault of his programmers, of course. The best-written letter in the history of the English language couldn’t do that.”

That people are turning to chatbots when they desire connection is not an accident. It’s the next step in the online-offline interaction orchestrated by Silicon Valley. In an interview this past spring, Mark Zuckerberg, the founder of Facebook and CEO of Meta, proposed chatbots as a solution to the loneliness epidemic plaguing the West. 

“I personally have the belief that everyone should probably have a therapist,” Zuckerberg said. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”

He added that AI could “plug the gap” between the number of friends people have in the real world and the number they desire. 

Meta CEO Mark Zuckerberg testifies at a Senate hearing on online child sexual exploitation at the U.S. Capitol in 2024. He recently said everyone should have someone they can “just talk to throughout the day” in the form of a therapist or an AI bot. (OSV News/Nathan Howard, Reuters)

Sam Altman of OpenAI recently added his perspective on how adult users should be free to use AI chatbots in any manner they choose, including for erotica. While insisting that he and his company are not the world’s “moral police,” Altman said that OpenAI has “mitigate[d] the serious mental health issues” on its platform, though he has not clarified what qualifies as a threat to mental health.

Yet his own former lead of product safety recently claimed that the company has ignored known risks, including releasing “sycophantic” versions of ChatGPT, and has failed to produce sufficient reporting on its mitigation of mental health risks, suicide, and the reinforcement of delusional thinking.

Sadly, Altman’s proposed new frontier for digital sexual encounters and romantic engagement is not new. Adults are already engaging with AI for sexual and emotional intimacy, which, like pornography, drives increased isolation and unrealistic expectations for real-world relationships. Moreover, deepfakes, or AI-generated images of real people, have been used to create sexual content, including among teens.

We can’t put the genie back in the bottle when it comes to AI. But those who promote human dignity and flourishing can sound the alarm and provide practical guardrails for those in their care. 

For the Church’s part, she should not only prohibit the use of AI as a substitute for human connection and communion, but she should continue to propose the actual, lasting solution to the epidemic of loneliness. 

“The problem of our world is not children being born: it is selfishness, consumerism and individualism which make people sated, lonely and unhappy,” the late Pope Francis said. 

He preached that the answer to the “demographic winter” is not to throw robots or chatbots at shrinking populations, but to create societies that foster support for couples to have children. 

Generative people, not platforms, are the answer to what we’re so desperately looking for. Now we just need more of them. 

Elise Ureneck

Elise Ureneck is a regular Angelus contributor writing from Rhode Island.