I'll be honest: my view of artificial intelligence was shaped in my adolescence by science fiction. I don't know whether that alone qualifies or disqualifies me now to speak on the spiritual implications of what we've unleashed on ourselves. What I know is that I am disturbed to my core by what I'm reading about AI, and I think we Christians should be paying close attention to the conversation.
We Baby Boomers especially can remember "rogue computer" stories from the original TV shows "The Outer Limits" and "The Twilight Zone," to say nothing of films such as "Colossus: The Forbin Project" or Stanley Kubrick's monumental "2001: A Space Odyssey," based on Arthur C. Clarke's novel. Beyond these imaginative visuals, we readers soaked in visions from stories such as Ray Bradbury's anthology "The Illustrated Man," with its terrifying virtual reality story "The Veldt"; Isaac Asimov's "I, Robot"; and, for me, most terrifyingly, "Dangerous Visions" and "Again, Dangerous Visions," anthologies of SF short stories edited by Harlan Ellison, especially his own work, "I Have No Mouth, and I Must Scream."
With such a background, the latest news about AI disturbs me further.
I draw my new concerns from the Washington Post's July 25 newsletter "Today's WorldView," by Ishaan Tharoor with Sammy Westfall, using as its news peg the debut of writer/director Christopher Nolan's epic "Oppenheimer," about the life of J. Robert Oppenheimer, the physicist termed "the father of the atomic bomb."
Tharoor writes:
"'When I talk to the leading researchers in the field of AI right now … they literally refer to this as their Oppenheimer moment,' Nolan told NBC News last week. 'They're looking to his story to say, "OK, what are the responsibilities for scientists developing new technologies that may have unintended consequences?"'
"In a guest essay for the New York Times published Tuesday (July 25), Alexander C. Karp, the CEO of Palantir, a big data analytics company that works with the Pentagon, writes: 'We have now arrived at a similar crossroads in the science of computing, a crossroads that connects engineering and ethics, where we will again have to choose whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend.'
"The technological uses of machine learning systems are diverse and vast, but, as the introduction of OpenAI’s ChatGPT has already made clear, few corners of human society will be left untouched as AI tools evolve and grow more sophisticated and powerful. Whole industries and professions are likely to disappear ...."
As if that implication weren't enough, here's where AI intersects directly with faith, also from Today's WorldView:
"Last week, Lt. Gen. Richard G. Moore Jr., a three-star Air Force general, laid out the contest over AI in somewhat baffling ideological terms, suggesting the United States’ 'Judeo-Christian' character would prevent its planners from misusing AI. 'Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does,' Moore said at a think-tank event in Washington. 'And there are those that are willing to go for the ends regardless of what means have to be employed.'"
Most educated Christians will boggle at the general's statement – first, because the United States no longer has the stereotypical "Judeo-Christian" character of the past (if it ever did), and second, because Christendom (the marriage of church and state initiated by Emperor Constantine in the fourth century) has been guilty of some of the bloodiest warfare in human history, "moral compass" notwithstanding. The difference now is that between nuclear weapons and artificial intelligence, there likely will be no blood left, not even ashes or atoms, if global warfare breaks out.
Aside from the threat of AI getting hold of military decisions, what frightens me most about artificial intelligence is its threat to our souls.
Can humans created in the image of God be reduced to the questionable mathematics of algorithms? Experimenters and early adopters of AI are already discovering that AI is only as good as the data fed into it by its human curators. Some experiments have shown that AI can even fabricate plausible-sounding but false responses (commonly called "hallucinations"); one researcher discovered it had attributed to her a book that doesn't exist.
Even more troubling, can AI destroy our connection to the divine? Again, I confess to the influence of a lingering image: in George Lucas' 1971 debut film, "THX 1138," the hero seeks consolation from a computer-generated image of Jesus that spouts platitudes to assuage his spiritual pain. That THX (that's the character's name) eventually escapes his computer-controlled dictatorship is more a tribute to economics than spirit. (I won't give away the ending; see it for yourself.)
As is my wont, I raise more questions than answers here – in no small part because we are all having an "Oppenheimer moment" where the unintended consequences of what we've created still reside in the murky future. What we can say about the technological marvel of artificial intelligence is that we cannot ignore this earth-shaking event (see resources below). We have taken another bite of fruit from the tree of the knowledge of good and evil, as Eve and Adam did in the Garden of Eden (Genesis 3). Thus enlightened, now we must decide what to do with our new knowledge.
AI has already permeated our daily lives; our family's concerns over AI spying are why we have no "Alexa" at our house. It's why I resist playing online games based on algorithms or delete them promptly when I succumb to friends' urging to play along. AI's potential for harm is why I keep tight control over the way I use automated email and social media posts for United Methodist Insight. There's already so much personal information about us floating around the internet that we have no clue who has it or, more importantly, how it's being used. That's a major issue with artificial intelligence – how it gets hold of the data it uses to generate its responses.
Faith has no role in manufacturing the technology of artificial intelligence. However, like the premise of Michael Crichton’s novel "Jurassic Park," I fear that our scientists are now so enthralled by what they can do with AI that they're giving little thought to whether they should proceed with their innovations. Hence, people of faith (not just Christians) have a critical role to play as AI continues to spread, to ask important moral and ethical questions, and to press for regulations that will draw clear boundary lines about what constitutes appropriate AI development and use. These should occupy our time and energy far more than whether women are fit to preach or who does what with their beloved in the bedroom.
For one, I have no intention of becoming like THX 1138, sitting in a plastic booth before a computer-generated icon of Jesus being absolved of my sins by AI platitudes. Either my faith in Jesus Christ is incarnated as he was in real human flesh, in the body and blood of Holy Communion, or it devolves into ones and zeros transmitted by and for machines. I am not a machine, and I will not be ruled by machines playing God. Only God is God, as Jesus proclaimed, and the powers-that-were killed him for his proclamation. We who witness today to Jesus' message could be next to die if the artificial intelligence we've created gets away from us.
Additional reading:
These resources come from The Conversation, a news website devoted to blending science with journalism that has done an excellent job tracking the public discussion around artificial intelligence.
Can chatbots write inspirational and wise sermons? by Joanne M. Pierce, College of the Holy Cross.
Can you trust AI? Here's why you shouldn't by Bruce Schneier, Harvard Kennedy School, and Nathan Sanders, Harvard University.
Eliminating bias in AI may be impossible – a computer scientist explains how to tame it instead by Emilio Ferrara, University of Southern California.
The hidden cost of the AI boom: social and environmental exploitation by Ascelin Gordon, Afshin Jafari, and Carl Higgs, RMIT University.
FTC probe of OpenAI: Consumer protection is the opening salvo of US AI regulation by Anjana Susarla, Michigan State University.
Thought-provoking new exhibition suggests the public should help shape the future of AI by Aniko Ekart, Aston University.
What is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues by Aaron J. Snoswell, Queensland University of Technology.
AI might eventually be an extinction threat, but it poses more pressing risks by Amin Al-Habaibeh, Nottingham Trent University.
2001: A Space Odyssey still leaves an indelible mark on our culture 55 years on by Nathan Abrams, Bangor University.
Award-winning religion journalist Cynthia B. Astle has reported on The United Methodist Church at all levels for 35 years. She serves as Editor of United Methodist Insight, a journal she founded in 2011 as a media channel for marginalized and under-served news and views of United Methodists. This content may be reproduced elsewhere with full credit and links to its original posting.