The Marvel movie Avengers: Age of Ultron is not the first story in which intelligent robots threaten to take over the world. Such machines have been a staple of science fiction for decades. But when one of the world’s most prominent scientists began warning about the danger in 2017, people paid attention.
Speaking at the world’s largest annual technology conference shortly before he passed away, astrophysicist Stephen Hawking expressed both optimism and a stark warning about the potential danger of artificial intelligence. He told the Web Summit in 2017, “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it.”
And Hawking was not a lone voice. Many highly respected figures, such as Bill Gates of Microsoft and Elon Musk of SpaceX and Tesla fame, have raised similar concerns in recent years. On the one hand, they claim AI might one day be the savior of mankind; on the other, it could eradicate humanity from the face of the earth if it grows beyond our ability to control.
But how close are we to actually building this kind of science fiction technology, and does the Bible have anything to say about the danger that AI poses to humans?
When experts began building the first computers in the 1950s, it seemed inevitable that no mental task would be beyond their reach. Back then, people thought of computers as electronic brains with several advantages over organic human brains: they don’t suffer from fatigue, they never get bored with tedious calculations, and they are incapable of errors. Over time, the optimism faded. Computers could not even solve “simple” problems that any young child could solve, such as distinguishing cats from dogs. Alan Perlis, a famous Yale computer science professor in the early years, once quipped in frustration, “A year spent in artificial intelligence is enough to make one believe in God.”
But within the last decade, much of the optimism has returned. A branch of AI called machine learning has made impressive strides. These computer programs are not explicitly programmed to perform a task, but to “learn” how to do it. Machine learning depends on something known as artificial neural networks, or neural nets for short.
You could call this computer ability “intelligence,” but it is really mathematics, and the word learn is just a term that makes a mathematical process sound human. Consider what really happens when a neural net learns how to distinguish between images of cats and dogs. An intelligent human must first tag millions of images of cats and dogs and feed them into the program, which is fine-tuned to compare numerical values by trial and error until the program is ready to classify and tag images by itself. You might say it has “learned” to tell the difference between a cat and a dog, but the computer has zero comprehension of what it is doing or what cats and dogs truly are.
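To make that trial-and-error process concrete, here is a minimal sketch in Python. It is not a real neural net, just a single adjustable "neuron," and the features and labels are invented stand-ins for image data, but the mechanics are the same in kind: the program is never told what a cat or a dog is; it only nudges numbers until its guesses match the human-supplied labels.

```python
# Toy "learning": adjust numbers until predictions match human labels.
# Each example: (feature1, feature2) -> label (0 = "cat", 1 = "dog").
# The two features stand in for measurements extracted from an image.
examples = [
    ((0.2, 0.9), 0), ((0.1, 0.8), 0), ((0.3, 0.7), 0),
    ((0.9, 0.2), 1), ((0.8, 0.1), 1), ((0.7, 0.3), 1),
]

w1, w2, bias = 0.0, 0.0, 0.0   # the numbers to be tuned
rate = 0.5                     # how big each correction step is

def predict(x1, x2):
    # A weighted sum of the features; above zero means "dog."
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

# "Training" is nothing but repeated error correction on labeled data.
for _ in range(20):
    for (x1, x2), label in examples:
        error = label - predict(x1, x2)
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

print([predict(x1, x2) for (x1, x2), _ in examples])  # -> [0, 0, 0, 1, 1, 1]
```

After a few rounds of corrections, the program classifies every example correctly, yet at no point does it comprehend anything: it is arithmetic from start to finish.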
Perhaps the most publicized triumph of neural nets was the victory of Google DeepMind’s AlphaGo over a world-champion Go player in 2016. Computers had already bested humans in chess in the 1990s, but those victories were due more to sheer computing horsepower. The game of Go is far more complex, so many believed that winning against a top player would require real ingenuity. AlphaGo did indeed learn how to play by playing against itself. Even the programmers who created it do not know how it calculates its moves. What they can be sure of, though, along with everybody else, is that its moves are more insightful than those of even the best human players. A computer that can learn to that degree has fueled much of today’s AI hype.
This is a marvelous breakthrough, but is AI really a threat to human dominance? AI is a broad term. The ability of computers to solve problems in a narrow, specific domain is known as weak AI. In areas like speech and image recognition, as well as games like Go, researchers using neural nets are making great strides. But that type of AI could never replace us. The type of AI that has everyone concerned is called strong AI, the quest to create computers that have artificial general intelligence. This term refers to broad intelligence that is indistinguishable from human intelligence, which includes reasoning, understanding abstract language, and volition.
Once a computer can think on its own, it can learn anything it wants. In fact, it could even tackle the problem of making itself smarter, which is why many people believe that once the threshold of artificial general intelligence is crossed, computers will quickly become super-intelligent. The result will be a phenomenon that has been termed the singularity, when changes accelerate so quickly they become unstoppable.
When the singularity happens, we are told, computers will be much smarter than humans in all domains of knowledge. How they will use that knowledge—for better or for worse—is anybody’s guess. Prophecies of the coming singularity fuel both dystopian and utopian visions of the future that we see in the movies and read about in the news.
If a computer had all the traits of human intelligence, then it follows that it would be self-conscious just like we are. To be self-conscious means that you exist and you are aware that you exist. A computer with general intelligence would by definition possess a mind like humans, and hence, be self-conscious. Underlying the widespread fear of a self-conscious AI, however, are some materialistic assumptions. And this is crucial. If these assumptions are wrong, then the whole argument falls apart.
If you look closely at the dire warnings of an AI apocalypse, you will see how materialistic assumptions come into play. Hawking, for instance, was an atheist and avid evolutionist. In an interview with the BBC in 2014, he explained why he believed “the development of full artificial intelligence could spell the end of the human race.” His reasons? “It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
A leading artificial intelligence expert, Ray Kurzweil, laid out an even more radical vision for the future in his book The Singularity Is Near: When Humans Transcend Biology (2005). He predicted that machine intelligence would become infinitely more powerful than human intelligence and eventually merge with humans through technology, leading to a post-human existence.
Such views rely on the assumption that everything in the universe has a physical origin and evolved from nothing. That includes our self-consciousness, which they believe is just the product of electrical activity in our brains.
Secular scientists believe our brains evolved to create the necessary conditions for self-consciousness. On this view, self-consciousness emerges once a complexity threshold is crossed in the number and interconnectedness of the brain’s neurons. The theory is that human brains have reached this threshold, but animal brains have not; hence, the superiority of humans over the “other” animals.
Emergence is a mysterious phenomenon in which the whole is far greater than the sum of its parts and cannot be explained in reducible terms. For example, in a colony of termites, any given termite is insignificant, but taken as a whole, the colony mysteriously coordinates its activity to achieve extraordinary results. So, the theory goes, each neuron can’t do much by itself. But given billions of neurons and complex relationships among them, a mysterious coordination emerges capable of producing extraordinary phenomena like self-consciousness.
If emergence sounds like a leap of faith, not science, that’s because it is. Belief in emergence is basically another way of saying, “We don’t know how it happens—we just know that it happens.” It is actually a rational perspective if you are a materialist—that is, if you believe nothing supernatural exists in the world. Starting from the presupposition that humans evolved from a single-celled organism over billions of years, and given the undeniable reality of our self-conscious existence today, then it must be that at some point in our evolutionary past a threshold was crossed and humans became self-conscious.
So, the logic continues, if we can explain our self-consciousness purely in terms of our brain’s physical composition, then technically nothing prevents us from imitating nature’s blind feat by trial and error.
Christians have a much different way to explain where our self-consciousness comes from: it is part of our creation in the image of God (Genesis 1:27). Man is the pinnacle of God’s creation—nothing else was created in his image. Because we were created to be in a covenant relationship with God, he endowed us with self-consciousness so that we could be good stewards of his creation and truly know, worship, and glorify him (Isaiah 43:10). If we were not self-conscious, it would be impossible for us to reflect on the needs of his creation, to commune with him, and to willfully commit our lives to him.
Modern computers bear very little resemblance to human brains. But the question remains: might we one day succeed in reverse engineering the brain and create a different kind of computer that more accurately reflects what is going on inside of our brains?
This possibility cannot be completely ruled out, even though at present we are nowhere close. As neuroscience advances, we uncover as many new mysteries as we do answers. But even if we do eventually figure it all out, will we ever turn on the computer and hear it begin uttering, “I think, therefore I am”?
From a biblical perspective, the answer is almost certainly “no.” Self-consciousness is not merely a matter of brain composition. Our self-consciousness has a supernatural origin and transcends our brains. After we die and our earthly brains are decomposing in the grave, we will stand before God in judgment fully conscious of who we are and what we have done (Hebrews 9:27).
Science alone cannot manufacture supernatural phenomena. Scientists cannot isolate something so simple as the life force that animates the simplest living things such as worms. Nor can we pour life back into a body. If we cannot conjure something so basic as that, how could scientists ever hope to create something as complex as human-like consciousness?
The next time you hear a quote in the news about “the coming singularity,” it is safe to assume the speaker is operating from a false worldview. The singularity is predicated on computers having general intelligence, and that implies self-consciousness. But self-consciousness is in the same Genesis 1 category as creating matter and energy from nothing and life from nonlife—all futile scientific endeavors. In Job 38–41, God chastens Job and anybody else who forgets the distinction between the Creator and the created. Making minds is God’s domain, not ours.
If the singularity is not near and never will be, does this mean Christians should not be concerned about AI? No, it doesn’t. Humans are responsible before their Creator for their actions, including the proper use of weak AI, such as lethal drones that choose their own targets and companion robots that keep children and the elderly company. Evangelical Christians have begun to offer some biblical guidance, such as the Ethics and Religious Liberty Commission’s 2019 statement Artificial Intelligence: An Evangelical Statement of Principles. The biblical principles that the church has always shared about human responsibility still apply. Meanwhile, the existential question of whether AI will replace humans is a nonissue.
It is easy to see why AI has captured the world’s imagination. After all, in only 70 years, computers have utterly changed our way of life, and it is fascinating to consider what the next 70 years may hold. Some people fear a Pandora’s box of self-wrought devastation, and others, like Kurzweil, hope for a technology-based eternal life.
But Christians have much different expectations for the future, based on the certainty of God’s infallible Word, not flights of human fancy. God is in control of this world and its future—we cannot destroy it and we cannot fix it by our own devices. We look forward to a day when the Creator—the ultimate intelligence—sets things right. We will experience a different kind of singularity even more awe-inspiring than the one of science fiction—life with God in eternal paradise.
AI isn’t just a future concept. Infant AI (machine learning, at least) has arrived. It has already become so commonplace that you may not have noticed.
In a constant cat-and-mouse game, Google’s Gmail filters spam by outsmarting the human spammers. By constantly learning the latest techniques that humans use to send junk mail, the filter successfully removes 99.9% of the junk.
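A toy sketch can show the principle at work. Gmail’s actual filter is vastly more sophisticated and its inner workings are not public; this example simply counts words in mail that humans have already labeled “spam” or “not spam,” then scores new messages against those counts. All messages and words here are invented for illustration.

```python
# Toy spam filter: score new mail by word counts learned from labeled mail.
from collections import Counter

spam_mail = ["win free money now", "free prize click now"]
good_mail = ["meeting moved to noon", "lunch at noon tomorrow"]

# "Learning" here is just tallying which words appear in which pile.
spam_counts = Counter(w for msg in spam_mail for w in msg.split())
good_counts = Counter(w for msg in good_mail for w in msg.split())

def looks_like_spam(message):
    # Compare how spam-like vs. legitimate the message's words are.
    spam_score = sum(spam_counts[w] for w in message.split())
    good_score = sum(good_counts[w] for w in message.split())
    return spam_score > good_score

print(looks_like_spam("free money prize"))        # -> True
print(looks_like_spam("lunch meeting tomorrow"))  # -> False
```

When spammers change tactics, the filter is simply retrained on freshly labeled mail, which is the “constant learning” described above.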
Since two Stanford PhD students launched it in 1998, Google’s powerful search engine has become the gold standard for algorithms that can process piles of data and predict the best results, weighing at least 200 separate factors. Even after all these years, 15% of the searches Google handles each day have never been asked before.
When you upload a photo to Facebook, the social media site uses its vast database and facial recognition software to identify your friends so you can tag and share. Increasingly, automobiles are equipped with this same technology so that onboard safety systems can help us steer clear of other vehicles and obstacles on the road.
YouTube has a knack for suggesting videos we want to watch. It turns out AI is behind these suggestions, as it is behind many other websites that crunch tons of data to peg our likes and dislikes. YouTube faces a special challenge: with 300 hours of video added every minute, its AI must constantly fold in new data and update its suggestions in real time.