Almost every one of us takes advantage of some form of artificial intelligence every day. From web searches to digital voice assistants, from online maps to movie and shopping recommendations, we are inundated with computers replicating what used to be possible only for human minds. In 1642, Blaise Pascal introduced a mechanical calculator able to carry out basic math functions in order to aid his father in his tax collecting, essentially relegating one small task of human thought to a computer. We have come a long way since then, but one haunting question has been pondered seriously for many decades now: Can machines ever go beyond performing simple computations and become, well, human?
Consider one imaginary futuristic scenario of being able to achieve “mind uploading” or “whole brain emulation” (WBE). If our technology became sophisticated enough to provide comprehensive scans of every nanoscopic state of a subject’s brain, some have posited that we would then be able to transfer that information into a high-tech simulator that could, in turn, recreate the state of the brain and its mental contents. This would include the current active thoughts of the subject and also all of their stored memories, desires, inclinations, and every other aspect of their mental lives. Then, using software not yet developed, our hypothetical computer could copy the way the brain processes both its own internal states and the external stimuli it is exposed to. By doing so, the WBE protocol would at that point create a perfect replica of the subject’s brain. The dream of futurists is that the computer would have the exact same mental life as the human—but without the body. Of course, if technology really became that advanced, there is no reason to suppose a matching body could not be furnished too. Put the computer inside the artificial body, and the engineered object would be just as human in its experience of the world as you or I or anyone else. A kind of modern golem.
Obviously, this technology does not exist, nor does anything even close. But as a theoretical possibility, does it provide a plausible way to recreate human conscious experience in a machine? Put me down as having serious doubts. Let me explain why.
Weak vs. Strong Artificial Intelligence
First, it is helpful to draw a distinction between weak artificial intelligence (WAI) and strong artificial intelligence (SAI). Both forms of AI replicate human thought, but they differ in the degree to which their functions are directly the result of prior programming. With WAI, the processes that humans perform are copied by a machine, much as with Pascal’s primitive calculator. This is essentially achieved by setting up very complicated rules for switching strings of binary digits between the values of 0 and 1. The process can become incredibly complex, to the point that a smart device can tell a knock-knock joke on demand. Truth be told, its jokes may even be better than those in our own repertoire. But if we venture too far from what the device has been programmed to respond to, we are met with, “Hmm . . . I don’t understand that.”
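To make the point concrete, here is a toy sketch in Python of how such rule-following works. The prompts and responses are invented for illustration; no real assistant works from a table this simple, but the principle of prescribed outputs for recognized inputs is the same:

```python
# A toy illustration of weak AI: the "assistant" only maps
# pre-programmed prompts to pre-programmed responses.
RESPONSES = {
    "tell me a knock-knock joke": "Knock, knock. Who's there? Lettuce. "
                                  "Lettuce who? Lettuce in, it's cold out here!",
    "what time is it": "Sorry, my clock is not set.",
}

def reply(prompt: str) -> str:
    # Anything outside the programmed rules draws a blank.
    return RESPONSES.get(prompt.lower().strip(), "Hmm . . . I don't understand that.")
```

However elaborate the rule table becomes, the device is still only matching inputs to outputs, not understanding either one.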
This is where SAI comes in. If strong artificial intelligence were achieved in a machine, it would be able to transcend its programming to synthesize information in novel ways. Being able to apply knowledge from a familiar situation to one never encountered before, being able to make plans and adapt as needed, being able to pivot when circumstances change unexpectedly in ways that were not anticipated by any program or programmer—this would be SAI. Some rudimentary systems, such as adaptive video games, are argued by some to be forms of SAI. But these features of human intelligence have not been captured well in any existing technology, and without achieving them it seems an exaggeration to say any system is truly intelligent.
But even if such apparently autonomous and adaptable systems could be created, how similar would they really be to humans? No matter how clever they seem, we all know that the “smart” device does not actually think its jokes are funny. In fact, it has no awareness of any kind—no consciousness, no desires, no thinking, no mind. It tells better jokes than we do because it has been programmed to, not because it has a better sense of humor.
Some argue that if a machine could pass something known as the Turing Test, it would have achieved real intelligence. The Turing Test, named after mathematician and pioneering computer scientist Alan Turing, says that if some sort of conversation or interaction with a machine cannot be successfully distinguished from an interaction with a human, then the machine should be said to think. That is, thinking is a matter of functioning in a way indistinguishable from humans.
Philosopher John Searle, however, provides a thought experiment to show the insufficiency of the Turing Test. In something known as the “Chinese Room” argument,1 Searle has us imagine someone in a locked and windowless room who is being passed symbols in Chinese through a slot in the door. They have been provided with a code book that they use to look up the appropriate response to the inputs sent through the door, and then they copy the response onto a card that they pass back through the slot. However, the subject does not understand what the symbols mean, either on the cards or in the book, and they do not even understand what they are writing—they are merely following an algorithm that provides a prescribed output for the given inputs. If the look-up algorithm is sufficiently complex and comprehensive, people who speak Chinese would not know that the person in the room doesn’t understand. But clearly, it would be wrong to say the person in the room speaks or understands Chinese, even if their outputs are identical to a native speaker. Similarly, even if a machine were to pass the Turing Test, that would not mean it understands—and it certainly would not be sufficient to make it conscious.
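Searle’s room can be sketched in a few lines of code. The symbol pairs below are invented stand-ins for the code book, but the essential feature is faithful to the thought experiment: the lookup matches shapes, not meanings, and understanding plays no role anywhere in the process:

```python
# A minimal sketch of the Chinese Room: the "person" in the room
# consults a code book (here a dictionary) pairing input cards with
# prescribed output cards. The entries are illustrative inventions.
CODE_BOOK = {
    "你好吗": "我很好",   # "How are you?" -> "I am fine"
    "谢谢": "不客气",     # "Thank you" -> "You're welcome"
}

def room(card: str) -> str:
    # The room matches symbols by shape alone; a card not covered
    # by the code book is passed back blank.
    return CODE_BOOK.get(card, "")
```

A fluent conversation could, in principle, be sustained by a sufficiently large code book, yet nothing in the room, the book, or the function understands a word of Chinese.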
The Why or How of the Human Brain
Searle’s objection to the Turing Test touches on what Australian philosopher David Chalmers has dubbed “the hard problem of consciousness.”2 In addition to the practical (and perhaps theoretical) impossibility of the nanoscopic mapping of the brain required for something like WBE, there is no known connection between the structures of the brain and the production of consciousness. That is, no matter how adept we get at brain mapping, nothing in even the most precise topographies would explain how or why those structures give rise to consciousness. We have no idea how that happens, so how could we have confidence we could ever replicate it? As I point out in God on the Brain,
Understanding all the physical structures underlying our mental life (assuming they do so) does not in any way explain the connection between those structures and the thoughts themselves. A thought is not like a collection of neurons. Pain is not like C-fibers firing. The mechanisms do not really do anything to explain the phenomena of conscious experience. Why should an arrangement of physical stuff like that create consciousness? Nothing in our understanding of the nature of matter or the constitution of our brains gives even the slightest hint at an explanation.3
A deeper diagnosis of the problem might also come from questioning the assumption made by advocates of SAI that consciousness could be replicated merely by clever enough manipulation of matter. I, for one, am skeptical that a computer could be human partly because I am skeptical about the underlying belief regarding human nature. More specifically, I do not believe that physicalism about persons is true. There isn’t time here to make the case for that claim, but as someone who believes humans are a spiritual-physical unity, I do not hold the view that the body in general, or the brain in particular, constitutes who we are. I think the scientific, philosophical, and theological arguments for materialism about persons are weak, and the criticisms of mind-body dualism are all flawed or unconvincing.
Thus, I do not think they threaten the traditional Christian view of persons as a unity of spirit and matter in the least, and I continue to believe—justifiably, it seems to me—that the mind, or spirit, is the immaterial locus of our consciousness. The brain is an important part of the picture, but it alone is not the whole story. I do not think even the most sophisticated machinery could ever generate the consciousness and first-person experience we have as humans because we are more than physical objects. We can produce clever computers, but it does not follow that we can produce minds. Therefore, I doubt that a machine could ever really think.4
In reflecting on the possibility of computers someday having minds like ours, renowned Harvard neuroscientist John Dowling says, “At the moment, we are a very long way from this happening, and serious reservations can be raised as to whether this will ever happen.”5 To his scientific and technological reservations we can also add skepticism rooted in theological anthropology, and we would be in good company doing so. Thus, even if a machine could pass the Turing Test and act in ways externally indistinguishable from real people, I am far from convinced that it could ever deserve to be labeled “human.” If we are not just machines, but spiritual beings as well, even the cleverest computer could never replicate the priceless and wondrous imago dei borne by every human. They will always be just machines.
- John Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3 (1980): 417–57.
- David J. Chalmers, The Conscious Mind: In Search of a Fundamental Theory (Oxford: Oxford University Press, 1996).
- Bradley L. Sickler, God on the Brain (Crossway, 2020), 116.
- These are complicated issues that require much further elaboration. See my book for a fuller treatment.
- John E. Dowling, Understanding the Brain (New York: W. W. Norton & Co., 2018), 266.
Bradley Sickler is the author of God on the Brain: What Cognitive Science Does (and Does Not) Tell Us about Faith, Human Nature, and the Divine.