
Recently on our podcast, Charlotte Clay, our director of Marketing and Communications, and Dr. Jeff Kloha, our chief curatorial officer, interviewed David Brenner, the board chair of AI and Faith, an organization that seeks to engage the world in moral and ethical issues around artificial intelligence.

The following interview has been edited for clarity and space. To hear the whole interview, listen to it on our podcast or watch the episode on our YouTube channel.

Charlotte Clay: David, thank you so much for being here. I'd love to hear how you got into such an interesting topic—AI and religion.

David Brenner: It’s a pleasure to be here with you both. I came yesterday to the museum and saw the new exhibit on science and the Bible and thought, What a marvelous interpretation of these kinds of questions. And it was really the questions at the beginning of your video and of that exhibit that got me into [AI and religion]. Who are we? What is the imago Dei? Do we have real agency? What is truth, and how do we preserve it? Justice, fairness, the meaning of life, the meaning of work . . . those are the big questions that are in the AI debate.

Jeff Kloha: Why are we hearing about AI all of a sudden? It seems like last year it was floating around as an esoteric kind of thing, and now it comes up every day.

David: Well, it's the coalescence of a lot of things that have been coming together for 25 years. AI was basically DOA in the 1980s, the AI winter. And then in the ’90s, a fresh group of people took a new tack, which really began bearing fruit in the early teens with machine learning and deep neural networks—the idea of mimicking the way we think, but constantly processing data through different layers over and over again to build robust data analytics. Then you put that together with social media, which took off at the same time; with interfaces like the graphical user interface, now some 20 years old, that gave us the ability to easily interact with the computer; and then the cell phone of 2008 that everybody now has, which made it all mobile. That all came together basically last fall [2022] with GPT. So we have this moment now where all of that's coalesced around something that seems to think like us, emote like us, relate to us, take our work, and be creative just like we are. So that's what's rocked everybody.

Jeff: People are asking AI questions of faith. How are people using this in relation to faith, and what are some of the things we should be watching for or thinking about?

David: People who are looking for answers and are attracted to technology and are hearing about this sort of mystical (almost) answering box like the Oracle of Delphi, they're beginning to ask it questions, partly as a parlor trick, right? What would it say if I asked it this—“What's the origin of the universe?” “What's right, creation or evolution?” But the hook is that what it says is compelling, and on many occasions, it's confounding. It just makes things up and lies to your face. Or it can also sort of [be] like a dog rolling over. It can just say, “Well, I don't know, I'm not sure. I'm not really prepared to answer that question.”

Charlotte: You found the limit.

David: Yeah, it's such a mystery because it's not actually programmed, and the whole game for Microsoft, Google, and other major responsible companies is how to get those boundaries in place. Because otherwise, if it just continues to confabulate, to lie—well—not “lie,” it's just not putting out good words.

Jeff: It’s just putting out inaccurate information, right? Can a machine learning thing lie? Or is it just bad data?

Charlotte: It's only outputting what it's been given.

David: I think that goes to the heart of the whole question for people of faith—not confusing what this is with who we are. Let's say you have a smart refrigerator, and it's supposed to be able to tell you when you're low on cottage cheese. And maybe it gets that wrong. We don't say it's lying. The whole idea that GPT-3 can lie is an anthropomorphization. It's back to Blake Lemoine in June of last year [2022], whose mistake was thinking that functionality is the equivalent of consciousness. And we know it's not; it's just functionality.

Charlotte: And right now, I suppose, there are all these people at Google and Microsoft who are being really cautious, but is there a chance we're going to lose control and these systems won't be able to be restrained anymore?

David: There was a survey that got a lot of attention about two months ago of a lot of high-end tech professionals who estimated there was, I think, a 10% chance of existential harm to the human race. And so the question was, would you get on an airplane if you had a 10% chance of crashing? But it shows you the power of the opportunity, both commercially and also for research. I often think of the brain–computer interface, where you have these remarkable programs happening at universities and research centers around the country, where they're plugging in a computer in place of the physiological break. So now, the deaf hear. Soon the blind will see. They'll literally bypass the problem at the back of the eye and go right to the visual cortex at the back of the brain. The lame are already walking with exoskeletons and more. So that whole transhumanism thing, which is problematic in many ways theologically, can also be . . .

Charlotte: Wonderful.

David: Yeah, just like Jesus at Capernaum.

Jeff: But see, that's kind of like restorative, right? But this has the capability of kind of transcending humanism, right? Is that the concern?

David: Well, I think the concern is misuse. Again, as a tool, is it going to be used, just like in that scandal over college admissions, for upscaling your children if you're privileged? So there's that inequality, and the search for perfection is just like eugenics. You have a good piece on scientific racism down in the new exhibit [Scripture and Science]. So will we do that with this technology, or will we help the lame walk, the blind see, the deaf hear, and do the work of the kingdom that Jesus did when he came to this earth as a part of his ministry?

Jeff: For Christians and for Jews, the Bible is normative; people rely on it for how they live their lives and what they believe. But is ChatGPT, or AI, like an alternative normative source? How do we navigate that, or how do people of faith navigate that, and is the Bible simply another set of data that's incorporated into all this?

David: I think those are essential questions. I think we're heading into a period where we're going to have GPT as basically a new way of interfacing with texts, with knowledge of all sorts. It especially works with text because it's word based. So how do you do that? Basically, what we have here [is a] big box of words, and then we have a training set on top of it, and then we have queries. And already you're seeing guidance on how to query this thing, because what you put into your query will help determine what you get out of it. The training set that sits on the big box of words is a subset of knowledge, so you can put the Bible in there and then, like a study Bible, all the helps. Or you can put in the whole world of reference sources and just keep expanding out. The game will be, as Google and Microsoft and everybody else try to put boundaries in place and keep this thing from going crazy, for us to stay current with that and use the best of it to focus the tool on the knowledge we believe is orthodox, right? Whatever your faith.

And the most useful approach with this tool is when you're an expert already, or at least you have enough knowledge about the field you're asking about that you can sort out what's true and what's not. And so the Bible and all of the reference sources that already exist can help you do that. I kind of think of it as sending a bot to seminary. How do we send this GPT to a seminary? The challenge is keeping it on track, and we have different views of what's orthodox. We have different faith traditions, of course, but even within Christianity there are many differences.

Jeff: Well, this is fantastic, David. I think we're going to have you back in about three months and get a quarterly update on what the heck is going on.

David: I’m delighted to be a part of this conversation and to have our organization, AI and Faith, connect with you, because I think what you're doing here at Museum of the Bible is a wonderful translation of the great theology of the Bible’s big story for Jews, Christians, and all who are interested, both at really sophisticated levels and at levels anyone can engage with. So that's what we'd like to do, too.

Charlotte: Thank you so much. We'll look forward to seeing you again very soon.

David: Thank you. Real pleasure.
