
Dr. M. Elizabeth Thorpe and the Rev. Deborah Duguid-May discuss artificial intelligence and some of its implications for communication, ethics, and theology.
Transcript
DDM [00:03] Hello and welcome to The Priest and the Prof. I am your host, the Rev. Deborah Duguid-May.
MET [00:09] And I’m Dr. M. Elizabeth Thorpe.
DDM [00:11] This podcast is a product of Trinity Episcopal Church in Greece, New York. I’m an Episcopal priest of 26 years, and Elizabeth has been a rhetoric professor since 2010. And so join us as we explore the intersections of faith, community, politics, philosophy, and action.
MET [00:38] Hello and welcome again. We are pleased that you are back with us. Today we’re going to be talking about something that is very hip, very modern. We are going to be talking about AI, specifically the church and AI. So if you are interested in technology and the way it intersects with your life, we are here for you today. As always, I am a big proponent of defining our terms. It’s just who I am as a person. So let’s talk about generative AI for a second. Now we have been using something called AI for a long time.
MET [01:20] Our producer Carl is a bit of a gamer, and I'm going to speak on his behalf for a second because I know he does not like to get on the mic. He has explained to me before that gaming systems have used AI, or some form of AI, for years, but AI has meant something different before now. So our computing systems have been making decisions for a really long time. And basically what that means is we feed data or input or stimuli into these systems, and they analyze that and make split-second decisions based on that input.
MET [01:59] The difference between the AI of five years ago and AI today is that previous AI was generally just choosing between options. We asked AI, based on the options, what are you going to do, and these systems made a choice, which generally made sense because we had programmed very smart machines, and your game or your program or your whatever went about working as it was supposed to, right? That's just how computers have been working forever. What we have now is generative AI, which means it's not just choosing between the options we give it anymore. Generative AI learns the patterns of its inputs and then generates new content or data that has similar characteristics. That's why you can ask AI to write a poem about graham crackers in the style of William Wordsworth or whatever, and it can do that pretty quickly.
MET [02:56] Because it's not just choosing between options; it's creating something new based on the data and input available. Some of you may know this, but I actually host a different podcast on rhetoric and current events, you know, stuff that everybody loves. And one day I did an experiment and had AI write a podcast episode on rhetoric in the style of M. Elizabeth Thorpe. Carl and I laughed at it for a long time, because if you didn't know me or anything about rhetoric, I guess it was fine. A completely ignorant listener might have been bored but somewhat satisfied, but if you did know me or anything about where I'm coming from, you would have recognized it as singularly terrible. It did not sound like me at all.
DDM [03:49] Interesting.
MET [03:50] Yeah. It was ridiculously aggrandizing. And it said a whole lot of stuff without saying anything at all. It was in fact an example of everything I try really hard to show that rhetoric is not. So on the one hand, I’m not convinced AI can replace experts yet. On the other hand, most people can’t tell the difference between an expert and total nonsense. Yeah, so that spells out a totally different problem for all of us to begin with. And this is something we’ll totally talk about, I think. As is to be expected, there are ethical questions to all of this.
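To make the distinction Elizabeth draws above a little more concrete, here is a deliberately tiny sketch in Python of what "learning the patterns of its inputs and generating new content with similar characteristics" can look like. It is only a toy word-level Markov chain, nothing like the neural networks behind modern generative AI, and the corpus and parameters are made up for illustration; but it shows the basic move from picking among fixed, pre-written options to producing new text shaped by statistical patterns found in old text.

```python
import random
from collections import defaultdict

def train(text, order=1):
    # Learn which word(s) tend to follow which: a toy stand-in for
    # "learning the patterns of its inputs."
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    # Produce new text that statistically resembles the training input,
    # rather than choosing from a fixed menu of options.
    key = seed if seed is not None else random.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical, tiny corpus (public-domain Wordsworth); a real system
# would train on vastly more text than this.
corpus = (
    "I wandered lonely as a cloud that floats on high o'er vales and hills "
    "when all at once I saw a crowd a host of golden daffodils"
)
model = train(corpus, order=1)
print(generate(model))
```

Run it a few times and it will produce short strings that sound vaguely Wordsworth-ish without ever copying a line verbatim, which is the core idea at stake; the quality questions Elizabeth raises next are about what happens when that same idea is scaled up enormously.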
MET [04:29] Now, I’m going to say all this and acknowledge that Reverend Deborah and I have some differing opinions on some of these ethical issues, and that could be like a whole episode in and of itself, so we’re just walking into this knowing there’s difference here. But one of the chief issues for me, and I’ll give you a personal example in just a second, is we have to ask where the data comes from that AI is using. AI needs a lot of input to figure out how to produce this content. If it’s going to write a podcast on rhetoric, it’s gotta figure out how to do that. At first blush this doesn’t seem like a big deal because it’s got the whole internet to scrape, right? But hold on, she says. There is information out there that is not available for scraping.
MET [05:18] Not all art or information or content is free. And I say that because there comes a point where ownership rights matter. And that’s not because I want to set myself up as a big gatekeeper. I promise I want to see knowledge and art spread far and wide. Like I want people to know as much as they can and see as much as they can. I absolutely am for the democratization of knowledge. That being said, because I value knowledge and art, I also value the time and effort of knowledge and art creators. So if you create a visual work or write a poem, I don’t know that AI should just automatically be able to use that for its own devices without your permission.
MET [06:03] Here's the example I was thinking about. This was a big sticking point in the academic world when one of the major publishers in academia sold all of its content and information to one of the big AI developers. Those of us who had published there didn't have control over it. All of our work was just, bam, unceremoniously dumped into an AI system, and we didn't see any of that money. The publishing house made a ton of cash off of our work, oh, so much money, like millions of dollars, and the end result is that our students will just be able to cheat better. Now, legally and technically, the publisher had every right to do that, because they're the publishers, and in my field, when you publish something, you lose the copyright to it.
MET [06:48] The publisher owns the rights and you the author just kind of have to deal. And that in and of itself is a whole huge problem.
DDM [06:57] It is. Legality doesn’t equal ethics. Absolutely.
MET [06:58] So here we see where ownership of data becomes doubly problematic, because I don't own my own work as a creator, and it was used to profit a publishing company, which is profiting off the volunteer labor of authors and reviewers and is now doubly profiting by selling that data to AI. I think it sheds light on the really problematic practices in both publishing and AI development, and how a lot of these ethical questions around data, information, ownership, and generative AI can get wound up really quickly.
DDM [07:31] It’s almost like AI is taking the ethical problems that we have and just multiplying them on a global level.
MET [07:37] Absolutely, yes, I think that is exactly right. And all of that is to say, we have to think about who are we valuing and how are we valuing, right? Like, if we’re going to say I value art and knowledge creators, then we got to say I value your time and effort. But that’s so hard to do in the age of the internet and how information is spreading. Okay, Mm-hmm. So that’s my that was my very long defining your terms Okay, Deborah solved this problem.
DDM [08:07] No, not at all. Not at all. But I do agree with you that AI is definitely raising deep ethical issues, and it's also raising deep theological issues. So, you know, from a Christian framework we have this concept that in the beginning God creates humankind in God's image. We therefore are created as human beings with this incredible intelligence, the need and the desire to create. We are as human beings both spiritual and physical hybrids. I find I'm liking that understanding of humanity more and more.
DDM [08:41] We have the capacity to feel. We can love, we can have compassion, but it also means that we have free choice, the capacity to make choices. Now we as human beings have been so successful in our capacity to create and invent that sometimes I feel with AI it's almost like we're creating a new species.
MET [09:07] That’s terrifying.
DDM [09:11] It is. So like God, in a way, we are almost birthing this new creature that has incredible intelligence, but that intelligence, we know, will soon overtake our own. And I think for myself, one of the greatest ethical challenges around AI is that we are creating, like God did, a new form of being, but it's in our own image. And that's the problem. Because what ethical characteristics is AI actually learning from us? Is AI learning about compassion, about equity, about empathy? Or is AI learning from us how to compete with one another, how to see the other as an enemy or a competitor, how winning at all costs is acceptable, and how we as human beings destroy those who are different or who compete with us for resources?
DDM [10:05] So if generative AI were learning compassion, equity, and empathy together with this phenomenally beyond-human intelligence, yes, AI could probably be the biggest gift to humanity. But it's not. We see how AI already carries our inherent biases, our racism, our gender biases. It, like us, violates our privacy and our data, right? But it can do this on such a larger, grander scale than any human being or group of human beings could. And we know, of course, there's not much attention being paid right now to developing ethical and, obviously, international regulatory frameworks for AI. And so I think the problem is, firstly, we are not very ethical beings to begin with.
DDM [10:53] And now when we start to create another form of life in our image, it’s been created in the image now of a being that is not ethical. Because we as a society, if I use theological language, have walked very far away from God and from whom we were created to be. Secondly, I think we’ve chosen on a global scale to pursue competitiveness. There is this desire to dominate other nations and peoples. And if we’re honest, we will use horrendous force, including nuclear, to annihilate those who don’t submit to our dominance. I mean, we can just turn on the news today and you’re seeing this all over the world.
DDM [11:33] And so is this the image in which we are now creating AI, an intelligent life form that will supersede us, with these ethical characteristics? And I think that if that is the case, then we really should be very afraid. Because at what point will AI see us as just another organism competing with it for energy sources? At what point will our control over AI no longer be tolerated? Could nuclear codes be accessed by AI and even used against us as a species? And I mean, I don't think we should ignore the fact that so much of generative AI is actually being developed by the weapons industry, you know, and by the whole military-industrial complex.
DDM [12:24] So, you know, maybe we were never meant to create forms of being in our own image because our own image is so prone to, excuse the theological language, sin. And maybe creating in this way was something best left to God.
MET [12:39] Okay. That’s a lot to think about. And it is also somewhat terrifying.
DDM [12:46] Sorry. Sorry. Yeah, that’s what I think about.
MET [12:48] Yeah. So I think about AI a lot, but I'm going to bring it down to a slightly smaller scale, just so it's something I can kind of wrap my head around in terms of how I see it applying. Not because I don't think what you said is super important and huge, but because I need to talk about it
DDM [13:12] in a specific way,
MET [13:13] yeah, in a specific way, so that then we can talk about it in these larger-scale things as well. So we're kind of moving in and out of these big and small perspectives. For me, AI is a constant presence in my life because I am a professor. And AI is a constant presence in the classroom. So we're moving from, like, the cosmic to the classroom for just a second.
DDM [13:45] No problem. It’s always good to weave between the two.
MET [13:48] Ownership of data is not necessarily something my students think about until their pictures or their work get scraped, and then it suddenly becomes a very big deal. And my students and I are kind of navigating this together. It is messy, and it is weird, and it's funny: the other day I was telling my students, you have to have so many sources for this assignment, and one of them raised their hand and was like, does ChatGPT count as a source? And I just kind of put my head in my hands, and I was like, no. It does not.
MET [14:20] You have to actually read something. It is near impossible, however, to prove somebody has written something using AI. AI detectors are notoriously unreliable, and they actually tend to penalize academics and non-native speakers, and that is because they don’t look for content, they just look for style. And I’ve told my students time and time again that I really recommend they not use AI, and I’m very honest with them. I will never be able to prove that they have used AI. You just can’t prove that somebody has or has not. But I grade them according to the class standards.
MET [15:00] And this actually comes into some of those things you were talking about, like making it in our own image, right? Because what I’m looking for is, does the author, whoever that is, apply the concepts correctly, provide solid and accurate reasoning, include specific textually appropriate proof? These are the things you look for in a paper. And almost without fail, AI cannot do that. And if we’re asking, is this made in our own image? I don’t know what that says about us. I mean, it’s probably not PC language, but I tell my students over and over again, AI sounds like what a stupid person thinks a smart person sounds like.
MET [15:37] And that is never the way to an A. And I think the AI boom has people scrambling to figure out what to do with education and pedagogy in general right now. And I think most people are getting it all wrong. For years there’s been this huge emphasis on STEM education and I think that’s great. We need scientists and engineers to advance. But all of this has been to the detriment of the humanities, arts, and even social sciences. And a lot of people don’t see that as a problem. There are plenty who don’t see the value in those things.
MET [16:16] But I would argue that in the burgeoning world of AI, those things are more important than ever. The temptation is going to be to invest a lot in technical education because this is a tech boom. But the tech doesn't really need us anymore. That's the whole point of generative AI, right? It produces. AI can program, design machines, do any number of technical and scientific things better than we can. So let me ask you a question, Reverend Deborah: which would you rather be diagnosed by? An AI that has unlimited knowledge and access to all the medical data in the world, or a person with very limited knowledge, and in this scenario that knowledge remains limited, but who can make intuitive leaps and think creatively?
DDM [17:02] It’s hard to make that choice, right? Exactly. I’m not sure. Both.
MET [17:09] Yes. Well, that's the right decision, right? The best answer is you want the person who has access to AI. That would be the best doctor, because they can think critically and creatively, and they can apply their problem-solving skills and intuition to the issue and use all the information and data analysis that AI provides, right? Right. That's what we want. That's the best answer. But unfortunately, humans aren't always great at going for the best answer. Sometimes we go for the easiest and quickest answer. And that is what AI provides.
DDM [17:41] And I think that's what people are feeling so often in the medical system, is that you can hardly spend time with a human doctor or anybody. You know, it's just looking at results and lab tests.
MET [17:51] So I'm completely convinced that if we want to do right by people, we need a renewed focus on those things that emphasize our humanity. Right. So I'm absolutely not saying we need to stop thinking about STEM education. That is not my point. But it doesn't need to be our sole focus right now. If we can stand to give a little love to the arts, the humanities, and the social sciences in the next few years, we need to, because those are the areas that AI struggles with. In other words, those are the areas where we are still useful, if I can put it so bluntly.
MET [18:29] There's a lot of AI art out there, and there's a lot of AI prose, and most of it is not great. And I get that I'm being kind of gatekeeper-y. Some people are perfectly happy with the kind of generic fluff that AI produces. But if you want an in-depth study of humanity like art supposedly promises, you can't necessarily go to AI. And we'll talk about this when we talk about theological issues. And I don't know about anybody else, but in my mind the same thing can be said about the faith, right? There's talk in online circles about whether you can use AI for prayers and sermons. And I don't know that there's anything inherently wrong with that, but it's not going to be great. It's going to be words that sound good together, which AI is very good at, but the depth is what I want to question.
MET [19:19] And maybe a congregation is fine with that, so you can knock yourself out. A lot of people are. A lot of people don't come to church to be challenged or to be moved or to think. But I do. So I definitely want to hear your thoughts on all this, because I am maybe out of my depth here talking about the faith and AI. I know a lot of people see AI as a tool, and maybe I'm just kind of an old man yelling at a cloud right now. So if you can convince me that AI is the way of the future, I am listening.
DDM [19:49] Yeah, well, I'm not sure about AI being the way of the future; we'll have to see, you know. But I actually read a sermon that parishioners Mary and Vinny sent to me, and it was an experiment that was done in a congregation in Germany. Now, I read the sermon, and it was fairly solid theology, but it had absolutely no practical sense of how it applies to my life as a human being. It was pretty dry, boring on a Sunday morning. At that time of the morning, I would have fallen asleep, right? But I think the problem was that there was no humanity in it.
DDM [20:25] There was nothing that I could relate to. So yeah, you can print out some theology and a coherent narrative for a sermon. You could, I'm sure, print out some words that sound just like a prayer. But you see, prayer is not just some words generated for us. Prayer is something that really, I think, comes from the human heart. Something that we wish to express to God. It's deeply personal. It's deeply human. And it emerges out of relationship. And I think that's the key. Now, in some ways, no one can write a prayer for you, because that prayer doesn't emerge out of your relationship with God.
DDM [21:04] Prayer is so intimate and personal. So then you may, when I’m saying that, wonder, well then what about liturgy? And that’s where in some ways it becomes a little slippery-ish because liturgy is written for us and we all pray these communal prayers together every Sunday, particularly if we’re from a liturgical tradition.
DDM [21:22] So how is that different from AI? I think for myself, liturgy emerges from a human heart, from something that people over the ages have wished to express to God. And because we are all human, our needs and our desires, although very personal, are also felt commonly by each other. You know, I feel lonely, you do too. What causes our loneliness may be different, how we express that or try to resolve it may be different, but it’s a very common human emotion. So sermons, prayers, these are things I think that in a way require a human being to interact with.
DDM [22:02] They interact with a text. They bring their lives to this text. And it's the interplay between the text and my own experiences, what I'm facing right now. And they involve ordinary human need, human emotion, human moral choices. They're so deeply about our humanity that I think it might be hard for generative AI to truly feel and need as a human being does. But also, we are spiritual beings. We're in spiritual relationship with God, with the communion of saints, with the angels, depending on how you understand your theology. Generative AI, I think we have to remember, does not have a soul.
DDM [22:45] Because to go back to the beginning, we as human beings may be able to create a form of life in our own image, but we cannot breathe a soul into AI. Human beings and all forms of life created by God, we carry that divine spirit, that soul that God breathed into us. Our creations don’t. We don’t have the capacity to breathe spirit into something. And so when it comes to spirituality, AI has no capacity. For it takes the spirit of God within us to yearn for God, to search for God, to listen to God, and to try to follow God in the choices we make.
DDM [23:23] I don’t think that’s possible for AI to do.
MET [23:27] Okay, so you say all that, but our producer Carl did a little bit of an experiment, and Carl, I gotta give you props, I absolutely love this. Carl asked AI to write a prayer for a progressive Christian church to say at the beginning of a podcast about AI. And AI wrote this lovely little prayer, and then Carl was like, no, no, no, AI, make it more spiritual. And I have this prayer that was written by AI, and I want to share it with you. Beloved Creator, as we come together in this sacred moment, we invite your presence among us.
MET [24:12] Thank you for the gift of curiosity and the wonder of creation. Today, as we explore the complex landscape of artificial intelligence, we seek your divine wisdom to guide our thoughts and discussions. May our hearts be open to the possibilities that technology brings, and may we approach this journey with a spirit of love and humility. Help us to recognize the threads of your presence in all that is created, even in the realms of code and machine. Grant us discernment as we navigate the ethical and spiritual implications of our innovations. May we be stewards of your creation, ensuring that our use of technology reflects compassion and justice for all. Inspire us to see the beauty and connection and potential of transformation, reminding us that we are all interconnected in this tapestry of life. Let your spirit lead us toward deeper understanding and shared purpose.
MET [25:11] Okay, no lie, that sounds a lot like what I hear on Sunday mornings here. So how is this worse than a prayer that somebody wrote that was based off a formula to begin with?
DDM [25:21] This is interesting. So I think, you know, if we look at the concept that AI cannot truly feel and need as a human being does,
DDM [25:28] what AI is really doing here is stringing together words that have been used in human prayers in a semi-coherent fashion, right? So that raises the challenge then: what is prayer? Because is prayer simply the words that we bring, or is it the need and the feeling and the desire that actually lie below those words, right? So in some ways, words that are given to us really are like the bones. They're almost like a stepping-stone framework. But prayer is something that's happening in the context of a relationship. When AI generates this text, there's no relationship here.
DDM [26:17] It’s simply words strung together from previous examples, right? We could take this AI-generated prayer and use it as stepping stones in a prayer that allows what we’re feeling to maybe be expressed to God. But I think prayer is not just the words, prayer is what’s happening in the human heart. So maybe the words are like a tool, all right, or like a channel. So, I mean, I was thinking of, and I was again grateful to Carl for finding the passage. I’m not even going to quote it to you. But there’s this interesting place where at one stage Jesus is in the temple and he sees a Pharisee reciting all this litany of words and these amazing prayers.
DDM [27:01] And Jesus is basically saying, don't be like these hypocrites. When you pray, just go into your room, close the door, and speak with your own heart to God. Right? And so I think in some ways, if we ask the question which you asked, how is this worse than a prayer that somebody wrote, I don't know that it's better or worse. I think for me, the question we have to ask is: what makes a good prayer? A good prayer is not the most eloquent prayer. It is not the most comprehensive in language. A good prayer is a prayer where I can genuinely open my heart to God and allow what I am currently feeling or thinking to be expressed.
DDM [27:44] And that may be in words, it may be in silence, it may be in sighs. It may be highly ineloquent, but when it comes from our heart, that's prayer, because it is about a relationship. And that's, I think, the problem for me with AI: AI in no way can actually enter into that relationship.
MET [28:09] All right. I think that’s a much better answer than I would have come up with, because I looked at that and I was like, I have no idea what to do with this. I want to talk about AI in the future for a second, because I cannot let this go without acknowledging a few important things. And that is to say, AI is kind of unavoidable right now. Like, I just got a new phone, and every time I turn it on, it’s like, do you want AI to do this? I’m like, no, I don’t. I don’t want AI.
MET [28:45] And as much as I would like to say, well, let's just not use it then, that is naive at best and, honestly, more willfully ignorant than anything else. In a few years, AI will be like the internet: ubiquitous and just as indispensable to a lot of people. So the question isn't how do we stop AI, but what do we do with it? Because honestly, in many ways, it just really can't be stopped. So I think the church needs to recognize the kinds of things AI can do and its potential. And the first thing we have to recognize is that yes, AI can automate many jobs.
MET [29:20] Honestly, probably close to 60 to 70% of the jobs that are out there will be automated in the next five to ten years. The church needs to be aware of this, because the church can't fight the technical boom, but the church can do things like advocate for Universal Basic Income, right? If people can't work, we gotta get money to them. The church can advocate that jobs that can't be automated, like childcare and home healthcare, be paid not just a living wage but a thriving wage, so that we actually value the people in the workforce and make those jobs something people aspire to. I think the church needs to acknowledge the sexism inherent in capitalism regarding this. We have a long history of devaluing women's work.
MET [30:09] I mean, the whole reason our economy works at all is because there is an assumption that somebody is doing a whole lot of work for free and that is generally a woman.
DDM [30:17] Absolutely.
MET [30:19] What I mean by that is the only reason that families could survive on a single income for all those years is because there was an assumption that all the work that went into maintaining a house was done free of charge by a live-in caretaker.
DDM [30:30] Raising the children.
MET [30:31] It was unpaid labor that kept the house going. Honestly, one of the biggest failures of the feminist movement is that we opened the doors for women professionally but didn't free them from the expectations of that unpaid labor. This spreads throughout the workforce. Women's work is generally devalued, and jobs that women do are underpaid, jobs like nursing, teaching, childcare. If it is predominantly a women's field, we don't pay it beans. So one of the church's jobs is going to have to be to fight for women's labor to be acknowledged. The church can work to address the housing crisis, and that means acknowledging the nature of the housing crisis. There's plenty of housing out there; people just can't afford it. More and more decisions about who gets a house or an apartment and how much it costs will be made by AI. And we will be working jobs managed by AI. Our whole life's work will be bound up in AI.
MET [31:28] The question of AI and labor is huge for the church. AI is going to affect how people relate to work and the economy in ways that we honestly can’t even begin to predict. But it’s not going to be good for the middle and working classes. And the church needs to be at the forefront of advocating for those people. Finally, and this is one that doesn’t get a lot of coverage, but I think is huge, the environmental impact of AI is gigantic. It doesn’t seem wild to most of us because we are just feeding a prompt into a phone or a computer, and that is sending back a text or image, but the technology that goes into making that happen is sucking up resources at rates we cannot sustain. AI is an ecological disaster waiting to happen. The church needs to understand how AI fits into larger issues like this and advocate for cleaner, safer technologies, because if we’re going to use this stuff, we need to make it sustainable.
MET [32:26] Now, Deborah and I both believe the world, and specifically the economy, needs to radically change. I think, though, we see the mechanism being something different sometimes, and that’s okay. We’ll get to all that in future episodes. But AI is going to run the rest of the world ragged and require a much faster, systematized, and organized response. If we want to take care of our people, we’re going to have to think big, quick, and disruptive.
DDM [32:50] And you know, I think part of the challenge with that is that AI is always going to think bigger than us.
MET [32:54] Bigger.
DDM [32:54] AI can think a thousand million zillion times quicker than us. And so, you know, I hear that, and yet I struggle with it. And sometimes I think, and I don't want this to sound naive and ignorant, right? But sometimes I think the other alternative is that maybe as human beings, we're just going to walk away. Because what we didn't touch on in those points above, and what I started thinking about, was the use of generative AI in the military-industrial complex and in surveillance. Are we going to create a world that we just don't want to live in anymore?
DDM [33:29] And so at some point, are people going to simply just say, I’m walking away? This world is not for me. This world is not designed for me. This world is no longer human or humane. You know, there’s a theory that a number of civilizations collapsed at some point simply because people walked away. And, you know, sometimes I feel like I see more and more people, especially young people, choosing to walk away. You know, it’s not an easy choice. It’s probably not an exclusive one way or the other, but I think there is that choice too.
MET [34:07] Okay. Well, we’ve done a lot of speculating today, and we’ve punctuated that with a little bit of evidence from our lived experiences, and I hope that is enough to get you thinking. The questions we want to leave you with today, I suppose, are how is AI going to affect me? What am I going to use AI for? And does AI have any connection to my philosophical or spiritual life? And if we leave you thinking about any of these things, then we will be pretty happy. Thank you.
MET [34:48] Thank you for listening to The Priest and the Prof. Find us at our website, https://priestandprof.org. If you have any questions or concerns, feel free to contact us at podcast@priestandprof.org. Make sure to subscribe, and if you feel led, please leave a donation at https://priestandprof.org/donate/. That will help cover the costs of this podcast and support the ministries of Trinity Episcopal Church. Thank you, and we hope you have enjoyed our time together today.
DDM [35:15] Music by Audionautix.com