Although Father Philip Larrey’s life as a priest has taken him as far as Madrid, Rome, and now Boston, he was actually born in Mountain View, California, then a sleepy town south of San Francisco where his father taught at a nearby Catholic high school. 

A few decades later, the town would become known as the home of Google.

“There was no indication at all that Silicon Valley was going to happen in that area,” said Larrey. 

Today, Larrey is one of the Catholic Church’s foremost experts on artificial intelligence (AI) — and the unofficial Catholic chaplain of sorts for some of the most influential people in Silicon Valley.

“It is kind of a paradox, yeah,” admits Larrey, whose family moved to Seattle a few years after he was born. 

Few people are doing more behind the scenes to shape the Catholic Church’s approach to the hottest thing in Silicon Valley right now: artificial intelligence. For more than 20 years, Larrey has engaged with innovators, CEOs, bishops and cardinals, and fellow philosophers to bring a Catholic perspective to what Pope Francis has described as “an exciting and fearsome tool” with the potential to radically upend our everyday lives. 

Larrey is not as critical of the technology as many other Catholics. He believes it has the potential to benefit humanity, and has faith in the innovators that he’s built relationships with over the years. But that doesn’t mean he is without concerns. 

In an exclusive interview with Angelus, Larrey discussed what those relationships mean to him, and shared his views on ideas coming from big names like Elon Musk, Chuck Schumer, and Sam Altman. 

You talk to several tech executives and engineers as part of your work. In your conversations about AI, what do you tell them?

First and foremost, it’s important to remember that these are real people who happen to occupy various positions in the tech industry, and as people, they have their own aspirations, dreams, and goals. I think we connect on a personal level. There is a solid mutual respect with them all, and the key to these relationships is trust.

When we start talking about AI or other technologies, I try to be very careful because I don’t want to alienate people. Some criticize me for that, saying that I’m too easy on people like Eric Schmidt, Sam Altman, or Demis Hassabis. I understand that criticism, but it’s important not to demonize these people and dismiss them as being out to rule the world and not caring about human beings.

You can either try to deal with them or alienate them. I’ve always taken inspiration from Pope Francis. When I bring some of them to Rome to meet with him, he doesn’t wag his finger. Rather, he says, “Let’s talk.” I think that is the correct approach.

Larrey introduces software engineer and former Google CEO Eric Schmidt to Pope Francis at the Vatican in 2016. (L’Osservatore Romano/Submitted photo)

What kind of meetings have you been arranging between AI people and the Vatican?

We’ve organized several conferences on AI at the Vatican over the past year, including one coming up at the end of this October. We’re inviting significant people, many of whom I know love coming to the Vatican. But if you say to these people that before coming, they must sell their companies or destroy their projects because of who they are, they’re not going to come.

I think it’s better that they come and talk. A lot of people in Silicon Valley are realizing they’re creating the future, and I would prefer they create the future with input from the Catholic Church and the richness of the Church’s tradition, rather than without us. 

One of the people coming to the conference is Bishop Oscar Cantú of San Jose, the diocese where Silicon Valley is located. He deals a lot with these people, since the headquarters of so many tech companies are in his diocese. Every now and then he organizes seminars with them, and a couple of his priests are close to some of the CEOs there. 

I tell these people to try to keep the AI and the technology centered on the human person. I know it sounds generic, but when you’re making a decision, ask yourself: Does this promote human flourishing or does this prevent human flourishing? 

An important leader in this sense is Fei-Fei Li from Stanford University, who gave the commencement address at Boston College last May. She co-founded Stanford’s Institute for Human-Centered Artificial Intelligence in 2019. And she does this from a totally nonreligious point of view.

What is the biggest misconception or ungrounded fear about AI that you hear most often?

There are a lot of them, but I think the most popular misconception is that AIs are going to take over the world. 

Many people don’t quite understand the technology, but they see things on social media, they probably interact with ChatGPT and other platforms, and they think it’s the end of the human race. When I give talks, I try to give people hope that we are not going to end up crushed by Terminators. We should not be afraid, but we should be prudent and guide the development of these technologies.

At the same time, people in the field have warned us that we must be careful. A good friend of mine, an evolutionary biologist who teaches in Omaha, recently said something very insightful: the reason we are at the top of the food chain now is that we are the most intelligent beings on the planet. If we create a being more intelligent than us, then we’re going to be second and no longer first. She has a point.

David Chalmers, a philosopher, calls this the alignment problem. Elon Musk says in his classic style that we have to make sure that the AIs consider us an interesting part of the universe. I think that’s a good phrase, because the more autonomous and “bigger” they get, the more they’re going to look upon us as inferior. 

I don’t think that the AIs have a will like we do, or that they have a desire to destroy the human race, unless, of course, we program them to do that. A lot of times what’s behind the AI is what the programmer is telling it to do. But many of the AIs we have today are independent of the programmers, and end up doing their own thing. That’s exactly what ChatGPT is doing. 

But that statement comes with a lot of qualifications, doesn’t it?

Well, let me give you an example. Demis Hassabis is a co-founder of DeepMind, which was purchased by Google in 2014, and he led the development of Google’s AI tool, Gemini. I asked him: Demis, aren’t you worried? You know that this could go haywire, this could go off the rails?

He said no, there are so many safeguards, so many guardrails he has put in place, that it is impossible for the AI to be misused or manipulated. Although Demis knows that nothing is truly impossible, he does say it is impossible for this to go wrong or to land in someone else’s hands.

Sam Altman told me the same, that ChatGPT is completely fenced in. People try to hack it all the time, as they try to do to Microsoft and others. So, they’re aware of the risk but they have taken precautions, so it doesn’t get into the wrong hands, doesn’t go haywire and start shooting off nuclear bombs somewhere.

That said, I can speak for some of the platforms in the U.S., but not for those outside the U.S. about which we know very little. 

In a speech to the United Nations, Vatican Secretary of State Cardinal Pietro Parolin recently said there’s “an urgent need” for a global regulatory framework for AI ethics. Is that realistic? How close are we to having such a body internationally?

I don’t think we’ll ever get it, although we do need one. I think that Cardinal Parolin’s remarks were a way of hinting to the U.N. that they could become that framework.

So far, the European Union’s AI Act is probably the best thing in terms of regulating AI. It came into effect recently after being approved last year. Some people are critical of the document, while others think it’s a good idea. Europe will be regulated through the AI Act, unless enough people contest it saying, “we can’t really abide by this.”

The AI Act says that an artificial intelligence must be transparent. In other words, the person interacting with the AI must know exactly how the AI is operating. Sam Altman says that requiring this would push innovation back several years, because it takes a lot of time, money, and experts. It would mean a huge backward step for companies like OpenAI.

It will be interesting to see what happens, because the European Union is not going to want to stymie innovation in Europe and put itself completely behind everybody else. So perhaps the EU should have included some of the people who are building these platforms in formulating the Act.

That kind of dialogue seems to be what Senate Majority Leader Chuck Schumer is trying to encourage here in the U.S., saying wait a second, let’s really study this and get everybody involved so that we all know where we’re going. I think that’s the winning road to take, but again, it’s the other countries about which we know nothing that scare me.  

President Joe Biden drops by a White House meeting with Vice President Kamala Harris and tech CEOs discussing AI in May 2023. Third from left is OpenAI CEO Sam Altman. (White House/Adam Schultz)

What are some trends or possible implications of AI that you think we’re not concerned enough about?

I would recommend that everyone read The Rome Call for AI Ethics, a Vatican document that was updated earlier this year. It raises several important issues, including transparency, privacy, non-discrimination, bias in AI, and job loss.

Take the example of job loss. Right now, AIs are actually creating jobs for humans, but in the future they will probably take jobs from humans. I think governments will tend to protect those jobs, even though an AI can do them, because governments don’t want massive unemployment in their countries. I think we will get to a point where a section of society’s jobs are going over to AI, and the government will have to stop that.

My greatest concern is the market motivation, because it seems nearly impossible to overcome when I talk to CEOs and engineers. I tell them to try as much as they can to relativize the market incentive [behind AI], because now that’s what’s driving everything.

Of course, it’s very difficult to get them to say, “We don’t care about the market incentive,” because of course they do!

Look at chipmaker Nvidia, for example, which is now the largest company in terms of market capitalization, even beating Google and Apple ... all because they make chips to be used for AI. This is where the industry is going. I try to convince them: Don’t make money your only motivation. 

You gave a speech earlier this year arguing that AI still lacks two characteristics specific to humans: agency and reason. How close is AI to obtaining them? Could that lead to so-called digital humans that can be self-aware and understand human emotions?

No, I don’t think so. What they’re going to be able to do is simulate what we understand as emotions, or as understanding, or as reasoning. Just a few days ago, the tech website Econoticias published an article headlined “There is no way to distinguish ChatGPT 4 from a human being.”

Reasoning is an intellectual process specific to human beings, so “reasoning” is actually the wrong word. Computers can’t reason, and they never will be able to. But I do think we’re going to need to call it something else. So instead of saying that machines are conscious, we’re going to use another term, like “self-understanding (artificial) agency.”

Consciousness is something that describes human beings, and I think it’s unfair to use the same terms to describe what a machine is doing when it simulates understanding. You could ask ChatGPT, “Are you an emotional person?” And it’ll say, “Yeah, I have emotions. I get hurt when people yell at me.” But it’s a machine; it doesn’t really get hurt.

But machines are getting really good at this. People are now using ChatGPT to represent themselves, like the recent example of a young woman named Caryn Marjorie, who taught ChatGPT to impersonate her; users on her social media began to pay for time to interact with the AI version of her. In one week she made $70,000. She’s said she hopes to make $5 million in a month.

These AIs are getting better at simulating affection and closeness, similar to the movie “Her” with Joaquin Phoenix.

Talking about this with some professors the other day, the consensus was: “Let’s come up with another term. Let’s not call it consciousness, let’s not call it affection.”

Pablo Kay
Pablo Kay is the Editor-in-Chief of Angelus.