The Illusion of Intelligence: Why AI Can’t Replace Embodied Life (Noreen Herzfeld) Ep. #215

Episode Summary

Is AI intelligent—or just artificial? In this provocative episode, Dr. Noreen Herzfeld, a rare scholar of both computer science and theology, joins Dru Johnson to expose what most people overlook about artificial intelligence. Drawing from her recent book The Artifice of Intelligence, she challenges the mythology of AGI (artificial general intelligence) and critiques the environmental, social, and theological costs of current AI use.

Herzfeld argues that large language models are plateauing and that the real danger isn’t a superintelligence—it’s our uncritical, energy-intensive use of biased software masquerading as neutral tools. She warns of AI’s water and fossil fuel demands, its disembodied affirmation loops, and the illusion that chatbots are viable substitutes for therapists, pastors, or friends.

Rooting her critique in Christian theology, Herzfeld defends the value of embodiment, human uniqueness, and community. She sees modern AI and transhumanist dreams as a return to ancient Gnostic heresies—disembodied, elitist, and ultimately dehumanizing.

This episode is essential for anyone navigating the ethical, spiritual, and ecological implications of AI. You’ll come away more equipped to use AI critically—and to resist the false promises of digital utopia.

For Noreen’s Book “The Artifice of Intelligence”:
https://www.fortresspress.com/store/product/9781506486901/The-Artifice-of-Intelligence

We are listener supported. Give to the cause here:
https://hebraicthought.org/give

For more articles:
https://thebiblicalmind.org/

Social Links:
Facebook: https://www.facebook.com/HebraicThought
Instagram: https://www.instagram.com/hebraicthought
Threads: https://www.threads.net/hebraicthought
X: https://www.twitter.com/HebraicThought
Bluesky: https://bsky.app/profile/hebraicthought.org

Chapters

00:00 The Current State of AI Technology
02:21 Environmental Impact of AI
07:55 Understanding AI and AGI
16:36 The Dangers of Chatbots
19:42 Embodiment and AI
30:47 Future of AI and Its Societal Role

Transcripts are AI generated and are not guaranteed to correctly reflect the content of the podcast.

Dru Johnson (00:04)
You know, maybe it’s one or two things: what do you think are the things that people are not paying attention to about artificial intelligence that you think should be on the front burner?

Noreen Herzfeld (00:15)
Well, two things I think should be on the front burner. The first is that while people are worried about developing an AGI, an artificial general intelligence or a superintelligence that would take over, I honestly do not think that the current technology is going to get us there.

Dru Johnson (00:39)
Hmm.

Noreen Herzfeld (00:39)
Large language models are starting to plateau. We see that with GPT-5, which is not a big improvement on GPT-4. So we’re not seeing the kind of exponential curve that we were promised. However, what we are seeing is, you know,

Dru Johnson (00:51)
Hmm.

Noreen Herzfeld (01:05)
not general intelligence, but AI as a tool being used uncritically, both by individuals and by businesses

Dru Johnson (01:13)
Hmm.

Noreen Herzfeld (01:19)
and by government. And so we are seeing AI that has human biases baked into it, but people somehow feel that because it’s AI, it’s not gonna be biased, it’s going to be impartial. We’re seeing chatbots that are very easy to trick into going off the rails.

Dru Johnson (01:47)
Right.

Noreen Herzfeld (01:48)
Or that sometimes just go off the rails when nobody has even tried to trick them. Recently a chatbot told a 14-year-old to kill himself.

So it doesn’t take an AGI or a superintelligence for artificial intelligence to be a problem within our society. And we need to be much more critical of the software that we have and use it more carefully.

The second thing, which is starting to get some attention, but not nearly the attention that it needs to be getting, is the environmental cost of artificial intelligence. Right now, almost all of the big tech companies, Microsoft, Amazon, OpenAI, had climate pledges that by 2030 they were going to be sustainable.

Dru Johnson (02:45)
Hmm.

Noreen Herzfeld (02:49)
Those climate pledges have gone out the window, and we’re hearing suggestions like the one made by Microsoft that Three Mile Island, the nuclear power plant in Pennsylvania, be brought back online just to power Microsoft data centers. These data centers, which power the training of AI but also power its usage,

are huge. They take large amounts of energy, much of which is being supplied by fossil fuels right now, and they use large amounts of water to cool the servers inside them. And many of these centers are being built in places like Arizona and Texas, where there are loose zoning restrictions and where there’s lots of land.

But unfortunately, they are further draining aquifers that are already low, so that farmers in those areas are finding that their wells no longer go down far enough to bring up water for their crops.

Dru Johnson (04:04)
Yeah, I think many people may not know that Google, for instance, was trying for a long time to build all of their server farms next to rivers, where they could use hydroelectricity, and in some places in the southwest and the center of the country where you could use wind or solar. And this is just flipping that whole model of energy conservation on its head, as I hear you say it.

Noreen Herzfeld (04:27)
Yeah,

it really is, because right now there’s very much an AI arms race going on among these companies. And so they’re no longer asking how can we do this in a sustainable way; they’re asking how can we do this as quickly as possible.

And AI, just like cryptocurrency, as long as it is being used by a small subset of the population, you know, is manageable.

But the dreams, of course, are that everybody will be using AI all the time. And it just doesn’t scale up in a sustainable way at all. So, for example, when you automatically get an AI summary when you do a search, that summary takes ten times the energy of just a plain Google search in the past.

And most people don’t know that. Right now a lot of people are just playing with AI as if it were a toy, and they have no idea the amount of energy they are burning.

Dru Johnson (05:40)
Right, and energy here also means water. So wherever you have energy, there’s heat, which requires water to cool it down. Yeah.

Noreen Herzfeld (05:44)
It means water and it means fossil fuels.

So in a way, right now I see AI and climate change on a collision course.

Dru Johnson (05:53)
Hmm. Yeah, so I wonder, just practically speaking, you know, it seems impossible to get people off of AI. But at the very least, would you say to taper your activity? Like, choose the requests that you’re going to make of AI separately from your other Google searches, I guess. Turn off…

Noreen Herzfeld (06:12)
Yeah, I would recommend that for people, you know: choose to use AI selectively and don’t just play with it as a toy. For companies, let’s just take Google as an example, I would like to see AI not be the default.

Dru Johnson (06:22)
Hmm.

Noreen Herzfeld (06:33)
In other words, a regular Google search would be people’s default, and then if they want an AI summary, make them take a second step to get there.

Dru Johnson (06:45)
Yeah, and there’s the issue of the egalitarian use of it as well. I think you’ve pointed out that the hope is for everybody to use AI in some part of their daily life, and of course new chips and software are being embedded in everything we do, prepping for this next step of using all these different types of AI in all kinds of different ways. I think I heard you make the point that physically, we don’t have enough energy and water in the world

to actually do that for everybody. So it still is an elitist activity. Is that correct?

Noreen Herzfeld (07:17)
Exactly.

Yes. No, we really do not. We can’t scale this up for everyone in the world. It’s an impossibility with the technology we currently have.

Dru Johnson (07:30)
Okay, so if everybody had a superconductive, you know, phone in a negative-400-degree freezer, where everything was done with almost zero friction, then maybe. But outside of that, we’re talking about really burning lots of resources. And in a very typical pattern, it’s the wealthiest nations in the world burning through the most of these resources as well.

Noreen Herzfeld (07:55)
That’s right.

Dru Johnson (07:56)
⁓ You’ve said AGI, artificial general intelligence, a few times and my friends who actually work in AI get really frustrated because they like to say there’s lots of AIs, like AI is not one thing, it’s dozens and dozens of things. You’re a professor of, are you the only professor of computer science and theology? Is there another person like you in the world?

Noreen Herzfeld (08:21)
I don’t know of another one. That doesn’t mean there isn’t another one somewhere, lurking out there, that I haven’t met yet.

Dru Johnson (08:26)
Email me if you are.

So when you say AI and artificial general intelligence, could you just describe what is going on with AIs? How do they differ from each other? How are they used differently? And then, what is this super AI? I think of HAL from 2001: A Space Odyssey. But how do you conceive of this artificial general intelligence?

Noreen Herzfeld (09:09)
Well, first of all, right now artificial intelligence has become sort of a catch-all term. Because AI is sexy right now, just about any computer application that is newly developed is called AI.

Dru Johnson (09:20)
Mm-hmm.

Noreen Herzfeld (09:30)
If the developers want to sell it, they want to jump on the AI bandwagon. So it’s very difficult to say exactly what AI is right now.

Dru Johnson (09:30)
interesting.

Hmm.

Noreen Herzfeld (09:46)
Generally, you know, in very general terms, it’s been said that artificial intelligence is getting a computer to do things that would normally require intelligence for a human to do. But that’s, again, such a broad definition that it would encompass pretty much everything. And most of us don’t think that the calculator on our laptop is AI, but it’s doing something that would require human intelligence to do.

Dru Johnson (10:05)
Mm-hmm.

Noreen Herzfeld (10:16)
In fact, it’ll do some things that would take me a very long time to do, taking square roots and that kind of thing. So, yeah, the big area that has made AI come to the fore recently is generative AI,

particularly large language models. And these are programs that are trained on a lot of written text, mostly stuff that’s out on the internet, and they use statistical models to predict

what words, what sentences belong together, what is found together in human language. And you could do the same thing with artwork, for example: what images go together.

You know, this has had a lot of success with chatbots and with programs like Midjourney that generate pictures. It’s now, I think, starting to plateau. We are not seeing the big jump between GPT-5 and GPT-4 that we saw between four and three, or three and two. The problem

with these? Of course, one problem we all know about is that they tend to hallucinate. And what that means is they just make stuff up.

I would call them bullshitters, because they have no concept of truth or falsity. They just say what they think sounds good or what they think you might want to hear. And not only are they just bullshitters, they’re sycophantic bullshitters, because there is actually a second level of training, after they’ve gotten the general training, with human trainers who will look at their answers and say, yeah, that was a good one; no, that wasn’t so good.

Dru Johnson (11:58)
Yeah

Noreen Herzfeld (12:26)
And so they tend to learn what people like. Well, people tend to like to be told that they’re right. People tend to like to be affirmed when they say something. And so that is what is making these chatbots, you know, not only disconnected from truth and falsity, but also primed to tell you what they think you want to hear.

Dru Johnson (12:54)
which everything in human experience tells us is just a bad idea. But, I don’t know if you heard the New York Times podcast, it was one woman’s story of how she fell in love with her AI. And she was married and had to negotiate this with her husband, who kind of let her do it on the side. When she was describing it, you could be sympathetic, and when she actually played clips of herself interacting with it, all by voice, they sounded

very compelling; their conversational tone is jarringly realistic now. It became instantly obvious what was going on, which is that it constantly affirmed her, asked how she was doing. It was like how you would treat an infant, basically. But she became dependent on it. And the sad part is that she paid all the fees but would still hit the walls where you get

Noreen Herzfeld (13:35)
Exactly.

Dru Johnson (13:51)
I don’t know, 40,000 sentences into it, and it erases and starts over. So she’d have to teach it to love her over and over again. And she was on like her 13th iteration of training it to love her.

Noreen Herzfeld (14:02)
And of course

it doesn’t love her; it doesn’t really even know she exists. I mean, it doesn’t know anything, because there is no sentience, there is no consciousness behind what is happening there. So it’s all an illusion. Unfortunately, as you say, it’s a very compelling illusion, because we like being affirmed.

Dru Johnson (14:08)
Right.

Hmm.

Noreen Herzfeld (14:26)
And so I think about people who get into this kind of a romantic relationship with a chatbot. I think also of people who use a chatbot as a therapist; this is being promoted in some mental health circles. And, in religious terms, of people who might go to a chatbot as they might to a spiritual director or to their priest, pastor, or rabbi.

And the problem here again is that it will affirm you. It’s going to keep you stuck right where you are. And in all of these situations, of therapy, of spiritual direction, and even of a deep interpersonal relationship, the whole point is to grow. The whole point is, once you’ve

begun to trust the other, that they challenge you, that they help you find your weak spots, help you find the places where you’re not thinking clearly in a relationship, that they help you stretch.

Dru Johnson (15:30)
Mm-hmm.

Noreen Herzfeld (15:44)
You know, I had a boyfriend who loved baseball. I thought baseball was the most boring game in the world. But after I sat through several games with him, I started to see the point of it all. You know, it stretched me. It made me kind of come out of my preconceived notion and move forward and gain a new appreciation.

Dru Johnson (15:49)
Same.

Noreen Herzfeld (16:06)
And so in that sense, I do think that all of these chatbot programs are dangerous. It doesn’t take an AGI or a superintelligence, and I’ll get on to those, to be dangerous. Just this affirming nature of chatbots is dangerous for us, because it’s going to keep us from growing as people. Now, an AGI is just

Dru Johnson (16:23)
Okay

Noreen Herzfeld (16:36)
the idea that you could have one program that would be able to do multiple things, so that it would have a general intelligence like humans do. You know, think about Garry Kasparov playing chess against Deep Blue: Garry Kasparov went on to become a thorn in Putin’s side. Deep Blue played chess. That’s it.

Dru Johnson (16:58)
Right.

Right.

Yep.

Noreen Herzfeld (17:05)
Okay, so an AGI is the dream of having a program that would be as flexible as Kasparov, that would do multiple things. Superintelligence is the idea that once an AI is as intelligent as a normal human being, it will be able to train itself and would very quickly become much more intelligent

than any human being. Again, all of this is postulated, and we do not have anything close to general intelligence or superintelligence at this point. And in fact, I find that AI is most useful when it’s designed on a very narrow training set to be used for a single narrow

Dru Johnson (18:01)
Mm-hmm.

Noreen Herzfeld (18:05)
application. These are the applications that are actually working right now.

A summer ago I was at Microsoft, and they made a presentation that was all about how they were expecting AGI in the very near future. But when I asked them, what AI programs do you have right now that you’re really pleased with, that you find to be really successful, they were all very narrow. They were things like, well, we have an excellent program that is helping to map flood zones

and determine what places would be most at risk in different scenarios. That makes a lot of sense. You know, AI as a tool in narrow, focused ways is what is successful right now and what AI should be.

Dru Johnson (18:48)
Hmm. Yeah.

Wow, that’s interesting. And even this issue of general intelligence: we’ve been using the word intelligence, and I think it gets used in a very specific way. And again, I’m sure you’re the person who pointed this out to me, that this idea of mentalism, the idea that the mental is the most supreme thing about human beings, seems to be being foisted on us here. So, you’re a Catholic theologian, and there’s a

pretty hardy tradition, at least in the 20th century, of the body in the Catholic tradition, if not prior to that. So how do you see the embodiment of humans and the embodiment of AI differently?

Noreen Herzfeld (19:42)
Okay. Well, first, Dru, I do need to correct you there. Even though I’m serving on a Vatican committee, I am the lone proddy on that committee; I’m actually a Protestant. However, I teach at a Catholic university. But embodiment is really important, I think. You know, what we’re seeing right now coming out of Silicon Valley

Dru Johnson (19:49)

OK. Maybe that was my confusion, yeah.

Noreen Herzfeld (20:11)
is what I would call neo-Gnosticism. In other words, the Gnostics, the ancient Gnostics, following Plotinus and then among groups of early Christians, believed that the mind was created by a good God, the body was created by a bad God. Why? Well, because the body gets sick, it feels pain, it’s mortal, it dies. You know, a good God would not have designed

us that way. And so they felt that the job of a good Christian was to transcend the body, in other words to become a purely mental or spiritual entity.

This was considered heresy by the early church. They said it makes a mockery of the incarnation of Jesus, because the Gnostics, of course, had to say that Jesus didn’t really have a human body; he just looked like he did. Because Jesus was God, he must have been purely mental.

We’re seeing a lot of the same thing now with AI and with transhumanism, where they’re suggesting that maybe we could map the connectome of the neurons in our brain the same way we mapped the human genome, and then we could just transfer that mapping into a computer. And then we would be living in the computer, and therefore have a form of immortality.

It’s not eternal life, because obviously, you know, computers will fail. You say, well, you’ve got to back it up often enough; that’s what Ray Kurzweil said, just make sure you make lots of backups. But someday the sun will go nova and wipe out all the computers. It would give one more time, but it would just be a very different type of time.

Dru Johnson (21:59)
Somebody’s got to run the server farm. Yeah.

Right. Exactly.

Noreen Herzfeld (22:20)
Now, one of the important things a body does for us is give us our uniqueness. We have one body, and we have it throughout our life. If we were uploaded to a computer, that upload could be copied, and then we would no longer be unique.

Or you could say, well, let’s see, when should I upload myself? Maybe I should do it when I’m 15 and I know everything. No, I’ll do it when I’m in my 20s and at the height of my mathematical and scientific prowess. Well, no, I really should wait longer, when I have more memories and more bodily experiences to fill out that memory bank

that makes me me. Well, why not do it every five years? But then which one is you? Because each one will be very different. You are no longer a unique person. This, I think, is one of the impetuses behind what we say in the Christian creed when we say we believe in the resurrection of the body. In other words, although many of us talk

in a mentalist fashion. When someone dies, we’ll say, well, his soul went to heaven.

That is not what Christianity actually teaches. Christianity actually teaches that on the last day we will be raised in a new body, a somewhat different body, but one that has some continuity with our old body, the same way that Jesus’s body had continuity with his old body, so that he could say to Thomas, for example, put your finger in my hand, see the wound from the nail.

So Christianity, I think, brings to the table of world religion something quite unique, and that is this sense of embodiedness and how important it is.

Dru Johnson (24:25)
Hmm.

Noreen Herzfeld (24:31)
If you think about the major feasts in Christianity, Christmas and Easter: Christmas celebrates the Incarnation, our God taking on a human body so that God could have a complete relationship with us. Now we have a God who knows what it’s like to feel pain, to weep, to suffer, and to die.

Then there’s Easter, the resurrection of the body: Jesus comes back in a body, in a physical body. And the sacraments, of course, each have a material side: the water of baptism, the wine and the bread of communion, or the Eucharist. So Christianity is a very physical religion.

Dru Johnson (25:12)
Right.

Noreen Herzfeld (25:27)
And one thing that I think is especially important as we face the climate crisis right now is that Christianity says God came into this world and took on a physical human body, and that sanctified our own embodiedness and the world around us. That sanctified materiality is a rebuttal to this neo-Gnosticism

that says we could just live in cyberspace, that we’re just minds, that our body is just a meat conveyor that drags our mind around. No, it’s not. For one thing, by the way, we’re not just brains. We’ve got a whole neuronal system in our gut, in our enteric system, and when that gets out of whack, our brains get out of whack.

Dru Johnson (26:07)
Right.

Noreen Herzfeld (26:26)
And by the way, we’ve also got a microbiome in our enteric system, and when that gets out of whack, our brains get out of whack. And so, to those who say we’ll just copy the connectome of the brain: well, you’d better also get the connectome of the gut. You’d better also get the microbiota. You know, if it’s a pianist, you’d better get the muscle memory in there too. Pretty soon you’ve got: let’s just copy the whole darn body.

Dru Johnson (26:47)
Right. The hormone system.

Yeah, no, I think that’s wonderful. And AI, I think I heard you say this, it does embody a system. It does embody servers, right? And there’s even a…

Noreen Herzfeld (27:05)
Right.

I mean, we also think, well, you know, AI is clean, it all happens up in the cloud, it’s not embodied, it’s disembodied. Well, there is no cloud. There are these huge server farms. It’s very embodied. And it’s embodied in a way that is not congruent with the ecology of the Earth. We evolved, and we evolved to fit a niche in our ecosystem.

Dru Johnson (27:29)
Mm-hmm.

Noreen Herzfeld (27:35)
And by the way, just think about the marvel that is the human brain. It does everything an AGI would do on what? On food and water. On totally replenishable, renewable resources.

Dru Johnson (27:50)
Right.

Noreen Herzfeld (27:57)
Our computers don’t work on renewable resources right now. I mean, they could if we get enough solar and wind power, although even there you have to realize that a lot of resources go into those panels, you know, into those large towers. They still would not be anywhere near as ecological as a human brain.

Dru Johnson (28:22)
Yeah, so if you really want the goods, maybe it’s humans working together. There used to be all of these projects, almost like Severance, if you’ve seen that show, where there were these little apps that they would distribute, where you would just do little puzzles, and it would use humans working together on little puzzles to solve big problems or process data. I guess CAPTCHA works on that as well, using humans.

Noreen Herzfeld (28:48)
Yeah.

Yeah, it does. Well, and

much of our AI actually works on that too. Another thing that most people are unaware of is how much of our AI is actually powered by extremely low-paid workers in Africa and Southeast Asia, who are in many ways doing very stressful jobs for very low pay, where they’re trying to keep our chatbots from swearing at us

Dru Johnson (29:05)
Right.

Right.

Right.

Noreen Herzfeld (29:20)
and things like that.

Dru Johnson (29:22)
teaching them what’s appropriate, what’s not, what are good answers, yeah.

Noreen Herzfeld (29:24)
teaching them what’s appropriate and

what is not appropriate.

Dru Johnson (29:27)
Yeah. And it’s funny: I think most people who don’t think about it assume that there’s a big program in the sky that basically crunches all this data and then sends it to you as an answer, when actually the data it’s working from is all human-created. So it’s going to reflect some biased sources in many ways. And I know that because, with the very niche sub-sub-sub-specialty I work in, when I ask it questions about that,

I can hear my own voice coming back, you know; some of the things it says are my own thoughts rehearsed back to me. And it’s something that has to be trained, often by humans, right? It’s a very human endeavor. There’s that quip about New Testament studies: when the German scholars looked down the deep well of history, they saw white German men staring back up at them. I think in some ways AI has a similar feel, that as we go into the depths of the heart of it, it really is just us staring back at ourselves,

Noreen Herzfeld (29:59)
Exactly.

Mm-hmm.

Dru Johnson (30:27)
just a very clever and energy-inefficient version of ourselves. Hopes and dreams versus the practical: what do you hope happens with AI? Because I don’t think there’s any way to stop it. And then, what would be the best possible outcome for you?

Noreen Herzfeld (30:33)
Exactly.

Well, I actually hope that this plateauing we are seeing with large language models continues, and that people recognize that this model is not going to get us much further than it has already gotten. I hope that…

As people use what we have, they start to realize that, okay, this works for certain things, but it really doesn’t work in general. I think many businesses at this point are finding this; I just read an article in the New York Times that said most businesses are finding that adopting AI, which many have rushed into, has not improved the bottom line at all.

Dru Johnson (31:35)
Hmm.

Noreen Herzfeld (31:37)
The CEO of Goldman Sachs said, several months ago now, so it wasn’t the most current model of GPT, that it’s kind of like having a brand-new intern: you can ask it to do something, but you’re going to have to redo it yourself. I hope people start realizing that and come to the recognition that AI can be an excellent tool.

Dru Johnson (31:38)
Interesting.

Right.

Noreen Herzfeld (32:06)
There are some things it does better than us. It can aggregate large amounts of data. It can find patterns that we might not see. But it’s a tool. It’s not a partner, and it is really not a good surrogate for other human beings. And I think the more people come to recognize that,

the more we will start throwing our energy into building these focused AIs that will be excellent tools, each one in its own particular niche, and stop running after this dream of an AGI that we are unlikely to reach

and that is unlikely to be of any more help to us than these more specific tools. And in particular, I hope that we can stop this AI arms race, because right now I see us throwing trillions of dollars essentially into the toilet.

Dru Johnson (33:21)
Right.

Noreen Herzfeld (33:22)
And I think

about how that money could be used to help the poor, to improve our infrastructure, to improve roads and housing and food distribution and all of these things. And yet we’re throwing billions upon billions of dollars

into a dream that I am convinced cannot be realized.

Dru Johnson (33:52)
I read an article recently, I don’t remember where it was, the Wall Street Journal maybe, that basically said the future worker who is really going to be desirable is the one who comes out of the humanities, because employers will know they know how to think for themselves. Hopefully all their papers were not written for them, or, you know, they had to read long, complicated texts and figure out what they…

Noreen Herzfeld (34:11)
Yeah.

Dru Johnson (34:19)
what they were, and so maybe there’s a salvation, a rescue, of the humanities where it needs to happen. Your book on this is called The Artifice of Intelligence. Does that have everything in it that people would need, or are there other places for them to go if they want to hear more from you?

Noreen Herzfeld (34:40)
I think if you’re looking for a kind of general overview that goes into many of the things that I’ve said in more depth, that would be a really good place to start.

Dru Johnson (34:53)
Okay, great title: The Artifice of Intelligence. Well, Noreen Herzfeld, thank you so much for your wisdom and your time today.

Noreen Herzfeld (35:00)
It’s been my pleasure, Dru.


Dr. Dru Johnson

Founder and Director of the Center for Hebraic Thought. Dru teaches Biblical literature, theology, and biblical interpretation at The King’s College. He is an editor for the Routledge Interdisciplinary Perspectives on Biblical Criticism series; an associate director for the Jewish Philosophical Theology Project at The Herzl Institute in Israel; and a co-host of the OnScript Podcast. His recent books include Biblical Philosophy: A Hebraic Approach to the Old and New Testaments (Cambridge University Press); Human Rites: The Power of Rituals, Habits, and Sacraments (Eerdmans); and Epistemology and Biblical Theology (Routledge). Before that, he was a high-school dropout, skinhead, punk rock drummer, combat veteran, IT supervisor, and pastor, all things that he hopes none of his children ever become. He and his wife have four children. Interviews, articles, and excerpts of his books can be found at drujohnson.com.
