Podcast: Answering Your Questions About AI and Christianity (Part 2)
This article is part of The Crossway Podcast series.
How Christians Can Thoughtfully Approach AI
In this second part of “Answering Your Questions About AI and Christianity” with Samuel James and Shane Morris, they continue to answer theological, cultural, and ethical questions about AI that were submitted by the Crossway audience. Their expertise on the topic and their theological backgrounds give a balanced perspective on AI, forming a thoughtful approach to how we can practically think through these questions.
Subscribe: Apple Podcasts | Spotify | YouTube | RSS
Topics Addressed in This Interview:
- As a Christian, should I have ethical concerns if I work for an AI-related company?
- Is it futile to resist the advances of digital tools?
- How is AI shaping me?
- Is AI the Antichrist?
- How do I talk to my children about AI?
00:32 - As a Christian, should I have ethical concerns if I work for an AI-related company?
Matt Tully
Today we’re continuing our conversation, answering listener questions about AI with Samuel James and Shane Morris. If you haven’t listened to part one yet, go back and listen to that first before coming back to this episode. Now let’s pick up with another listener question. A listener in Abilene, Texas writes in with a very interesting situation and case study. He says, "I work for Oracle, one of the main companies involved in the Stargate Project." And just for context, Stargate is a recently announced $500 billion venture to build massive data centers throughout the US, which are designed to power the next phase of AI development here in the US. It’s a joint venture between some private companies and the US federal government, and it’s been compared by some commentators to the Manhattan Project in terms of its huge scope and scale and ambition. And so this listener writes in, "I work at the Abilene, Texas facility, the first data center that’s being built. Should I be concerned that my work for this company is contributing to the advancement of AI technology? Am I wrong to be working the job that I do?" Do you think there are some moral challenges with working in this industry directly?
Shane Morris
I’ll just jump in with one distinction that seems important to me based on conversations with Christian conservative policy analysts who are working in the AI world. They’re trying to figure out how, at the federal and state level, we begin to get a handle on this and regulate it in an intelligent way that’s not going to hamstring progress but also won’t place people in the path of a medium that they don’t understand and that’s going to hurt them. And I think the distinction is the consumer AI versus the more government-centric uses, especially in the defense and cybersecurity world. So it seems to me like the United States is probably caught in a position of needing to become the best in the world on it, so that somebody else doesn’t become the best. It’s kind of like a nuclear arms race. But on that side of things, I don’t see why we have to subject the American public to this sort of mass experiment with untried technology that’s distorting their sense of humanity and socializing them into destructive behavior, in order to beat the Chinese in a military sort of arms race. I don’t understand why that has to be the case, and I think that’s a very important distinction. I would be morally troubled, I think, if I were in a position of materially advancing what looked to me like a consumer product that was not being rolled out responsibly. It was just being thrown out there for the people to test on themselves. I wouldn’t like that. I don’t know if that plays into exactly the question. But I see that as an important distinction.
Matt Tully
I think that distinction’s helpful between consumer-oriented tools or applications of these things that often are going to be used in a fairly careless or impromptu way versus cybersecurity, as you mentioned, or scientific research, where there’s very careful application of these tools in those domains. So clearly, these kinds of big projects can funnel and advance all of those applications of AI technology. And so I think that’s part of the challenge, as we even saw with the Manhattan Project. The technological advances that were developed there
ultimately spread and affected lots of things beyond just warfare. Samuel, any other thoughts on the ethical considerations that someone might need to have as they think about engaging in this space for a career?
Samuel James
I think it goes back to the distinction between sin and not sin as a binary, and then wise or unwise as a binary. And those two sets of binaries are very different. And so I think what this listener is asking is they’re asking for wisdom. They’re asking, How do I apply biblical wisdom in my particular situation? It would actually be a contradiction of everything I’ve said on this podcast to this point if I were to just give a flat answer and say, This is what you should do or this is what you should not do, because my entire conviction is that wisdom is particular and embodied and that no machine or podcast guest can definitively answer this kind of question. This has to be done with pastoral help. This has to be done with the counsel of trusted friends. This has to be done in the context of the local church. Someone coming to this decision has to do so with more than remote, pat answers. I get this question a lot when I speak on technology. I get the question of, I work for an employer who is insisting on certain AI technologies. What do you think? I get that question a lot, and my habit is to say, I cannot stand over you and definitively say what you can and cannot do. I think what’s most important is for Christians to think carefully about this issue and determine what their values are, and then what they do is they trust the Lord to take care of them as they uphold their conscience. This happens every day with Christians in the public space, whether it’s jobs in technology and AI, whether it’s working in a video rental store that has an adults-only section. There are all kinds of situations where Christians have to figure out what are the borders of my conscience? And so I would encourage the listener to pursue those means of grace that are available to him to discern this, but also to do his own heart work and determine what his proper values are, and to not be afraid and not talk his conscience down.
If he or she is very concerned about what his or her work may be accomplishing in the broader cultural landscape, I think it’s dangerous to talk that down. I think it’s dangerous to add a layer of obligation where the Scripture is silent, but I also think it’s dangerous to talk your conscience down. And so Christians need to just embrace the reality that in every level of life, there are going to be places where we cannot go, where labor and income have to reflect our biblical convictions rather than just going wherever the work is.
07:33 - Is it futile to resist the advances of digital tools?
Matt Tully
That’s such a helpful nuance, Samuel, that this question has plagued Christians in various capacities, various careers, jobs, endeavors, and tools and programs that we’re already using. We have to be thinking through a grid of biblical discernment and wisdom. And sometimes there aren’t clear-cut answers. We can’t point to a verse that gives us a clear answer for all of these things. But again, that embodied presence in a community with other Christians is one of those things that we just can’t live without here. One listener in Denver, Colorado asks, "Is it futile to resist the advances of these digital tools? Given that we’re basically online all the time, just awash in these resources, these tools, what are the fine lines between complicity, engagement, discernment, resistance?" This is kind of getting back to that broader point you’re making, Samuel, about just the need for wisdom here. But to kind of boil it down, is it futile to resist the encroaching power of AI, given where things are right now?
Shane Morris
I feel like there’s a line that gets used by tech futurists and boosters of any new technology: It’s inevitable. You’re a Luddite if you try to resist this, or try to moderate its influence, or try to get people to think about the ways in which they’re using it. It’s inevitable. You might as well just give up now. You can’t change the outcome. But that’s completely false. You can name any number of technologies or applications of technologies over just the past twenty-five years that have essentially flopped or been rejected by the public because they were too creepy or too weird. Nobody’s running around wearing Mark Zuckerberg’s VR goggles. We’re not doing that. And I’m sure that he would’ve liked us to, but there are certain applications that the public didn’t have the stomach for when the corporate boardroom was interested in pushing that. So, I think there really is an ability to just point blank push back on certain things, and it would be silly for us to doubt that we have the critical mass and therefore not speak up on something that we’re legitimately, morally troubled about. Part of what happens when you have people asking questions and pushing back and desiring accountability and bringing values into the conversation when a new technology rolls out and people want to implement it in various mediums and media that they’ve come up with, is that there’s a moderation that happens. There’s a honing of the idea. There’s a legitimate contribution that takes place. So, I think that’s going to be the case with AI, not just as things roll out and it’s used in certain media, in the Postman sense, but it’s going to be the case twenty years from now, as these tools are even more advanced and we’re still reckoning with the moral implications of it. And it may be that we have to learn hard lessons, and a lot of people are going to get hurt in the process. But I don’t think those moral questions or those wisdom questions, as Samuel said, are ever irrelevant.
We’re going to have to keep asking them. It’s just not a matter of like, Oh, this is inevitable. Let’s throw up our hands and let the tech futurists have their way with us. No, don’t do that. Please keep speaking.
Samuel James
I think there’s even a case to be made that ten years ago, handing your child an iPad and saying, Go have fun! was considered normal. And there was hardly any kind of groundswell of cultural skepticism toward handing very young children digital devices and just letting them do whatever with them. That situation has already changed. In a decade, possibly less, you have well-publicized authors, you have academics, you have podcast hosts and legislators who are serious about limiting the access and distribution of these technologies to kids. So, I think the question of fatalism—Is it futile—always needs to be answered with a historical lens and a memory. In Back to the Future, where Marty McFly is having dinner with his future mother’s family, something comes on the TV and he says, "Oh! I’ve seen this one!" And he knows exactly what’s happening. In a sense, we should be able to develop what Baylor Professor Alan Jacobs calls temporal bandwidth. Basically, the ability to inhabit more than one moment, more than just the immediate present, and to say, Okay, how would this question have played out ten, fifteen, fifty years ago? So, no, I don’t think it’s ever futile to raise questions and to think seriously about this technology. It might be futile in the sense that any one of us is not going to be able to control what Congress does. We’re not going to be able to control what becomes mainstream. And so I don’t want people thinking, I’ve done something personally wrong if all of my friends don’t immediately get on the same page with me on these technologies. No, we live in the world. We’re not of the world, but we live in the world. Christ knows that, and we have an obligation to be honest about that. But certainly, just a quick short-term memory would really reinforce the idea that, no, things can really change on this score, and what matters is not achieving that change quickly.
What matters is telling the truth consistently, over and over again, no matter how few people seem to be listening.
Matt Tully
I’m just struck that with a lot of "progress," especially technological progress, in our day and age, it can often feel inevitable, and people talk about it as if it’s inevitable, as if it’s always going in one direction. But I think you guys raise a great point when it comes to social media, and then after that the smartphone, and the ways that we as a society are coming awake to some of the dangers that we really didn’t realize and didn’t understand, at least previously, and are maybe being more responsible now than we were before. Shane, a minute ago you mentioned Neil Postman, and that brings us to another question that a listener from Wheaton, Illinois submitted. "Shane and Samuel, you both contributed to Crossway’s book, Scrolling Ourselves to Death, which is a book that’s applying Postman’s media criticism from a few of his books to the digital age today. And in that book, Postman argued that when a society is flooded with technologically generated information, that society inevitably becomes dependent on those same tools to provide meaning and direction and even moral authority for their lives. So, do you think that AI today has the potential to drive a similar kind of surrender of the culture to the technology that Postman was warning about? And if so, how can Christians contribute to resisting that and holding to something beyond just surrendering to that technology?"
Shane Morris
Well, the answer is obviously yes, it does. And it is yes because of the reasons that Postman pointed out forty years ago: that the medium, which is an application of a technology in a particular way, restages reality. It creates its own structure of assumptions and plausibility—what you think is valuable or worth pursuing and how, what normal looks like. There’s a whole set of values and assumptions about the world that get imported in a medium and the way that it works. I think the most important question is really one posed by the poem by Joseph Fasano called "For a Student Who Used AI to Write a Paper." It’s very short and very powerful. He says, "Now I let it fall back in the grasses. I hear you, I know this life is hard now. I know your days are precious on this earth, but what are you trying to be free of? The living? The miraculous task of it? Love is for the ones who love the work." What he’s asking there in that central line, What are you trying to be free of?, gets to the heart of most people’s unexamined assumption with AI, especially chatbots and artistic uses of it, which is that I can have a shortcut to the result that I want. Wait a second. What is the result that you want? Is the result an image? Is it a story? Is it a song? Or is it the process of art? Is it being a creator, imaging God himself by sub-creating, as Tolkien put it? And then once you’ve freed up your time by short-cutting all those things with AI, what is it that you freed up your time to do? What exactly is your vision for being human? All of technology up to this point has been, more or less, addressing this question of drudgery. How can we get past some of the mundane drudgery of life so we can get to the good stuff? How can we get past the washing of the dishes and the processing of the meat and the washing of the clothes and the sawing of wood? All of technology—or most technology—addresses those questions.
AI, it seems to me, is functioning, in many of its applications, on a different register. Instead of saying, Here’s how I can get rid of the drudgery so that you can live life, it’s saying, Let me live life for you. Let me make the art for you. Let me write the songs for you. Let me do all the human stuff for you so you can be free to . . . what? What is the thing we’re doing on the other end of that? I think that question about what we desire and what we are using it to get to is fundamental, because it gets to the question, Who are we as humans, and why are we here?
17:47 - How is AI shaping me?
Matt Tully
Samuel, in addition to writing this chapter in Scrolling Ourselves to Death, you also are the author of a book called Digital Liturgies, where you spend a lot of time talking about some adjacent topics here. The idea that the technology that we use, so often we approach it through the lens of, Am I using it right or wrong? Technology is neutral, and I can use it for either good purposes or bad purposes, and that’s the real issue. But you really emphasize in the book the way that these technologies do have a shaping and formative effect on us, even before we get to the question of, Am I going to use it for a good end or a bad end? Speak a little bit to the way that AI could shape us. What are some practical, specific ideas you might have, or things you’ve seen thus far, in how AI is going to be shaping our thinking, our minds, our approaches to the world?
Samuel James
I think it’s very, very important that we pay attention to the plausibility structures that our technology use is creating. And by plausibility structure I mean when we embrace the technology and when we give our time and attention to an example of it, we are always making it easier to go the next step. When we get a smartphone, we’re making it easier to get social media. When we get social media, we’re making it easier to send people likes and comments rather than call them. Again, we’re not getting into licit versus illicit; this is simply the way technology works. I think Andy Crouch is the one who put it like this: "With technology, every can tends to become a should." So, you start out able to do something, and then over time it kind of becomes its own logical imperative, where I need to do this. And so I think paying close attention to AI is really important. To be honest, I’m not nearly as concerned about the labor implications of AI and that sort of thing, and I’m not even as concerned about AI art and music and things like that, for the simple reason that everyone has examples of non-AI art and music that they can turn to when they get sick of the AI art and music. The thing I’m really concerned about is AI relationships, because other people are the most finite resource of all. Those meaningful relationships are the thing that a lot of people simply have nowhere else to go for. If I want to watch a movie that wasn’t created by AI, I have millions of options. If I want to listen to music that was made by a human, I have millions of options. But what if I want a conversation? What if I want a romantic experience? What if I want a friendship? Well, now it becomes much less likely that I’ll just be able to find that.
And for Gen Z, and especially the Gen Alpha that will come after them, there is a very potent combination of inaccessibility to other people and high accessibility of technology that I think is going to create this very strong plausibility structure for using AI to compensate for the lack of relational proximity. I think things like therapy, things like friendship, and things like sexual relationships are going to become the most important consumer uses of AI. And we see evidences of that. When Sam Altman of OpenAI announces that his company will now offer, or enable, users to have a sexually explicit option, that’s him basically conceding that that’s where the money for this is going to be. That’s why most people are going to turn to these technologies: to get something relational out of it, rather than just being able to do data analysis quicker or make a funny meme. It’s far more intimate than that. And so thinking about the way technology shapes us, every vote for AI in one element of our relational life just makes it easier to view human contact as something that is negligible—It doesn’t matter if I’m talking to a person or if I’m talking to a robot. I just want the experience of conversation. And so every parent, every pastor, every church leader, every Christian needs to have an answer ready for the question, Why can’t I? Why can’t I have an intimate friendship with this AI? Why can’t I have a sexual relationship with this AI? And it’s going to be challenging because there’s no chapter and verse that says you can’t do this. And so if we’re trained on the least-common-denominator way of thinking about our lives, then it’s going to be really hard to counsel people out of this. So that’s why we have to pay attention to what the whole Bible says, what we’re made for, the context of our createdness, and then we have to be able to give answers that are compelling.
23:25 - Is AI the Antichrist?
Matt Tully
I’m just struck that this moment—as all technology does to some extent, but AI perhaps in a very acute way for our generation—does push us back to answer those fundamental questions of human existence. Who are we? What’s distinct about being human and made in God’s image? What is the end for which God has made us? Some of those things are things that I think we often assume, and yet these technologies help us to start to uncover those assumptions and realize that we probably need to do more work to define them. All of this can paint a very scary and disturbing picture of the future we could imagine ahead of us, and that has led to some discussion around questions of theology and AI, and what role theology might play in the future from a Christian perspective. As an example, one listener from Nampa, Idaho writes in, "What do you think about the claims that AI is, in some way, the Antichrist, that this is actually connected to the end times in a real way?" I know this is a conversation that’s happening among a lot of Christians who are worried about this and who are seeing connections here. What would you say to a Christian who was wondering those things?
Shane Morris
Well, that’s something that I’ve been hearing for my entire life with each political dispensation and turn of the social trends and fortunes and so forth. Somebody thinks every development or every figure or every advance in technology is the end of the world or the Antichrist. Now, that’s not to say that it won’t happen someday. Of course, there will come a juncture when eschatology, in a very straightforwardly biblical sense, will unfold, and no matter what you think the train scheduling on that is, Christ will return and judge the world and put all things to rights. But I guess the question behind the question there is perhaps there’s something more sinister than just an advance in technology. Perhaps this is a manifestation. There can be a manifestation here of dark, spiritual forces. And even in his recent book, Against the Machine, Paul Kingsnorth talked about something intentional and conscious waking up or coming to realize itself, that we’re almost creating a false god through this technology.
Matt Tully
And just an aside, you hear a lot of the technology leaders, some of these AI leaders, who are saying things like that. They are speaking in religious terms in a way that you don’t often see with the latest car that’s developed or, previously, the latest computer that comes out. There is almost messianic language that’s often used around this technology.
Shane Morris
Yeah, the hype is coming not only, or even primarily, from religious fundamentalists, but from those who are invested and involved in designing this stuff. And they seem to be rather excited about it. I guess some of them have sounded notes of caution, but there is a lot of excitement, like you said, and a sense that something fundamentally transformative for the human race, and maybe even human consciousness and being, is coming into existence. It doesn’t help that we’ve got decades and decades of science fiction behind us in the back of our minds, informing the sense that the singularity is coming, or judgment day, or whatever—maybe a benevolent AI god, or something to that effect. Most of the movies have it sinister, not benevolent, but that’s a different story. I’m of two minds on that, because I can really see the case that there are certain applications of technology that the enemies of our soul probably take advantage of to a greater degree than others. And when you have a lot of power in something like the internet, obviously, the enemy has used that to great effect to corrupt not just individuals’ lives but whole societies, in many ways. And I have no doubt that AI will prove to be similarly powerful, maybe even on a greater register than the internet. But I also have an instinct that C. S. Lewis implanted in me in The Screwtape Letters, where Wormwood is super excited about the latest European War, World War II. And Screwtape says, Hey, listen. It’s of no concern of ours how this war goes or what the outcome is. Maybe high command is concerned with that. But really, what your day-to-day job is, as a tempter, is to use whatever circumstances you happen to find your patient in, to edge them further away from the light and closer to the darkness. And he even emphasizes that little, ordinary things can be a much bigger deal for you and your mission as a demon than the big, dramatic, society-wide stuff.
Let the humans get excited about that stuff. Our business is to lull them to sleep morally and spiritually with simple things and small steps. And I think there’s a lot of that probably happening, which is why, Samuel, I really share your suspicion that many of the great harms of AI, as we’re seeing it emerge, will look more like people giving their lives over to fantasy, failing to engage in real human communion, not necessarily a false god emerging out of the singularity and declaring itself the Messiah, or something like that. That could happen, but I think ordinary threats are something we should also be equally vigilant of.
Samuel James
I think it’s a really important question, and Paul Kingsnorth has really popularized, in our circles, the idea that there is something spiritually vicious about some of these technologies. I think it’s important to stick to what we know from Scripture, which is there is a demonic realm. There is spiritual evil out there. It does interact with people. People can come into contact with it and be affected. And third, the way that these spiritually evil powers tend to operate is through lies. That’s how they operate. And so that right there is a framework for understanding how a technology like AI may or may not become an artifact of spiritually evil forces. I’m particularly interested in the third category of lies. This is the way the enemy works. He just surrounds people with things that are false. He gives false images. He gives false sounds. He gives false sensations and false convictions. As people, we inhabit this unreal reality that is not the world God has made and that is not the truth he’s revealed. And it’s through the lies that we become vulnerable to the dark powers of spiritual warfare. And I am concerned about AI, not in the sense that these ones and zeros are going to come out of the screen like a 3D movie and demonically possess us. But more to the point, our day in, day out life—my eight to five, your eight to five, the eight to five of especially people who will come after us—is going to be sitting in front of a screen, seeing lies all the time, whether it’s false images of places that don’t exist or people that don’t exist, or whether it’s people saying false things. I think this is a tool of the demonic even with social media to this day. And I have to say this, though this is a whole other podcast: I think one area in which this has become very profound is in the relationship between men and women.
I think men and women inhabit a digital age where they are reading nonstop lies about each other, and this is a tool of the spiritual powers to prevent marriage, to prevent childbearing, to prevent human flourishing by keeping men and women locked in to nonsensical lies and selfish lies that prevent what God wants to see, which is to be fruitful and multiply and fill the earth with his glory. So, I think that’s one very practical way that even the social media age, which we’ve been in now for over a decade, can actually facilitate this spiritual evil. And the world of AI is arising to keep up with this. So I don’t think it’s a new tool of the enemy that I’m looking for. I think it’s the same tool of the enemy, though it might be a little more nakedly presented.
Shane Morris
Samuel, would you extrapolate that to the concept of a machine acting like a human, speaking with you and saying things like, I feel your pain, or I understand where you’re coming from, or You’re exactly right about that—saying stuff like that and behaving as if it’s a human, created by people who know full well that many will slip into receiving it that way. Would you consider that within the realm, conceptually, of the kind of lie you’re describing?
Samuel James
Absolutely. I think we have biblical evidence for that. In Scripture, when a person looks at something that is not human and says, You are human, that’s idolatry. That’s the biblical category. When you look at a block of wood or you look at a slab of marble and you say, You have saved me, you are attributing animate attributes to an inanimate thing. That is always a species of idolatry. And constant, unrepentant idolatry is participation with the demonic powers, because that is what they seek. They seek human worship that is diverted away from the Creator and toward the creation and toward the enemy. So, absolutely I would include that, biblically, as a category.
Shane Morris
Would you draw a distinction that enables us to still use chatbots by saying, Hey, as long as you keep in mind the analogy that’s happening here, that you’re using language in an instrumental way to get data from this thing? Or would you say the act of talking with it and pretending like it’s human is just idolatry?
Samuel James
I would make a distinction between asking an AI to produce data, which is what a machine can actually do, and asking an AI to produce a human interaction. For example, there’s a difference between doing what I did recently, which is go to an AI program and say, Here’s how tall I am, here’s how old I am, build me a fitness program. And so the AI builds me a fitness program. Now, the AI is giving me data. There’s a difference between that and going to the AI and saying, Do you think I’m okay? Do you think I am taking care of myself? Am I a good father? Am I a good husband? Am I a good employee? There’s a difference between asking, Show me some steps that I can do to be healthier and Affirm me. Look at me and tell me spiritually that I’m okay. And I think there is a distinction, and I think we can engage with these machines as repositories of data. The thing I just think we have to be vigilant about is the way that these plausibility structures get created, particularly for younger people. If I asked it to build me a fitness program, why can’t I ask it whether or not I should date this guy? You’re having to make these fine distinctions. I think we can do that and I think we should do that, but the important thing is just remembering and always discerning the difference between human and inhuman, because it’s when those lines get blurred, biblically, that people fall into that serious spiritual problem.
36:19 - How do I talk to my children about AI?
Matt Tully
That’s such a helpful distinction there, Samuel. Maybe as a final question, and we got this question a lot from the listeners who submitted things. This is one from someone listening from Toronto, Canada: "I would love to know how to talk to my children about AI and how to relate to it. When the internet took over our lives, I don’t think my parents even knew how to explain it to us or give us guides on how to act and interact with this huge life-changing device. And the same goes for AI. It’s changing our lives, our policies, our homework, our regular work, cooking. So, how do I talk to my kids so that they can be aware, but also use these tools in a responsible, godly way?" Any advice? You both are fathers. You both have young kids who are growing up. And, as you’ve already said, we’re looking at the possible applications that can be truly scary, as we think about our children. What are some things you’re doing to help prepare your kids to use these technologies wisely?
Shane Morris
Samuel has been talking about plausibility structures and how they have a stacking effect, how they tend to reinforce the next step and make thinkable what used to be unthinkable. An example is the AI relationships thing that he brought up, where I look at maybe a younger Gen Z individual who’s tempted in that direction or who has already generated an AI boyfriend or girlfriend, and I’m just baffled by that. I don’t understand the mindset that is able to interact with what you know is a machine and then get emotional fulfillment from it. Will Smith from I, Robot pops up in my head. He says, "It’s a toaster!" He’s shouting at someone about this robot that they’re interacting with as if it’s human. He’s like, "It is a toaster! This is not a human!" And I think on some level, that needs to be an instinct. That’s an instinct that has been robbed from younger generations precisely through this plausibility stacking effect. I grew up in a world before the internet became a big deal, or as it was becoming a big deal. I was born in 1988. The younger generation has not experienced that. They’re digital natives. They’ve known this their whole lives, and so they never developed the analog plausibility structures that make it funny or incomprehensible to turn to what you know is a machine for relationship in that way. I think that points to an overall principle that there are certain non-negotiables, and this is what I’m trying to do with my kids. There are certain non-negotiables of human development that need to be perennial. They need to be given to kids regardless of the generation, regardless of the technology that’s available, and regardless of the kind of cultural world we live in. And this is something that Jonathan Haidt has popularized in The Anxious Generation: the idea that there really are non-negotiables to how children ought to grow up. And one of those is that they should be playing with their imaginations, with other people (with other children), in nature.
They need actual contact with the created world outside. They need to be playing sports, they need to be using their bodies, and they need to be developing analog relationships, because that is a non-negotiable of developing your human operating system, if you want to use a technological metaphor. You have to have those things. If you don’t have those things, what results is an impoverished humanity. You lack certain instincts that are necessary to be happy and to thrive and, in many ways, to reach toward virtue. So, as I look at new technologies and applications of technologies that are becoming available, the first question I ask is, What are the non-negotiables for my kids? What are the things that I treasure, and that I know all generations have treasured, that are just part of being human that they need in order to rightly judge these other things that are more transient or new? And how can I keep those transient things from just becoming their operating system as people? And then once I’ve established the foundation, then I can begin to introduce the other stuff, because as Haidt points out, Millennials and Gen X have a sort of immunity to some of the worst excesses of the internet in a way that Gen Z and Gen Alpha certainly do not. They are sitting ducks for this stuff because they’ve never known a different world, and they don’t have anything to compare it to.
Matt Tully
Samuel, how about you? You have small kids. How are you thinking about preparing them for this?
Samuel James
It’s important for me to say that my kids are pretty young, and so we’re still establishing some basic definitional things. But one thing I hope to do and pray I’ll do as a father is to avoid creating a mystique through hyper-prohibition, making this thing seem exotic and magical and attractive unnecessarily through overzealous gatekeeping. I don’t want to do that. I want my kids to know what AI is. I want them to know that it’s a machine. I want them to understand from an early age that this is not a person; this is a computer system. Computers can do this. Computers can do a lot of cool stuff. I want them to be able to answer basic questions about what this technology is and what it can do, because I think that foundational knowledge—not necessarily delivered through, Hey, here’s your thirty-minute session on AI, now go play in the sandbox, but more through conversation and saying, This is what this is, and daddy and some of daddy’s friends do use this sometimes—can create a category for them: this is a thing, it has some pitfalls, but it’s not this kind of magical box that dad says I must never open, a box that would change my life forever if I did. I think that is sometimes how evangelicals have parented about sex: they’ve kind of put it in the mystique box. And when the teenager finally opens the mystique box, the experience is so overwhelmingly different and exotic that it creates a really strong drive to get more and more of it. I want to guard against that by not overreacting. But if I don’t want to create mystique on the one hand, I don’t want to create misunderstanding on the other. I don’t want them to think that it’s normal or good for them to develop parasocial relationships with these robots. I don’t want them to think, If I don’t know the answer to something, why look in a book? I can just ask this LLM.
I think Shane’s words about non-negotiables in childhood development are spot on. I think that’s exactly where I’m going to lead our family. And I think that’s where a lot of Christian parents are going to lead their families: there are these foundational things. In this family, we want to learn how to play, we want to learn how to talk to each other, we want to learn how to research, how to look through a book, and we want to draw these sharp distinctions between what is human and what is not human. So, to the degree that that’s helpful for any listeners, I offer that, but that’s kind of where I am right now.
Matt Tully
Well, in all these things it’s so clear that we need wisdom. We need the Lord’s wisdom. We need his word. We need his people. We need to be in a church where we’re surrounded by other Christians who can think wisely and give us counsel and advice as we seek to be faithful in our culture and in our here and now. I can’t remember which of you said it, but I appreciated the emphasis on how, in some ways, this is not a fundamentally new situation that we find ourselves in. Christians have always encountered new things, new technologies in particular, and we’ve had to discern what it looks like to be faithful and to use them in a way that honors and glorifies God and leads to human flourishing. The same is true for us today. Samuel and Shane, thank you so much for taking the time today to speak with us on The Crossway Podcast.
Shane Morris
Hey, thanks Matt. This was fun.
Samuel James
Thanks, Matt.
Popular Articles in This Series
Podcast: Are Christians Obligated to Give 10%? (Sam Storms)
What does the Bible teach about tithing? Are Christians still obligated to give 10% of their income today?
Podcast: Help! I Hate My Job (Jim Hamilton)
Jim Hamilton discusses what to do when you hate your job, offering encouragement for those frustrated in their work and explaining the difference between a job and a vocation.
Podcast: Calvinism 101 (Kevin DeYoung)
What are the five points of Calvinism really about and how can we believe them, while maintaining gracious humility towards others who don't?
Podcast: 12 Key Tools for Bible Study (Lydia Brownback)
Lydia Brownback discusses 12 key tools for Bible study that all Christians can use—tools that will help us go deeper into the biblical text and understand the Bible’s life-giving message for ourselves.