Artificial Intelligence: A Vengeful or Benevolent God?
It is religion, and not science fiction, which has prepared humanity for artificial intelligence
“Scientists built an intelligent computer. The first question they asked it was, ‘Is there a God?’ The computer replied, ‘There is now.’ And a bolt of lightning struck the plug, so it couldn’t be turned off.”
The story above was related by celebrated theoretical physicist Stephen Hawking when asked about artificial intelligence. Although he was speaking sardonically, the apprehension surrounding artificial intelligence (AI) is all too real. Rare is the man who is ready to meet his maker; rarer still is the person prepared to come face-to-face with their artificial, self-aware creation.
We rightly fear the ramifications of unleashing artificial general intelligence (AGI)—a sentient, reasoning entity—on the world. It’s a Pandora’s box that, once opened, cannot be closed. Despite a keen awareness of this, we persist. It seems the pursuit of artificial intelligence is as much a juggernaut as the real thing will be.
Though there may not be a God now, given the nature of exponential technological growth, that’s all set to change in future. Likely a bleak one, at that, where we stumble about in the rubble of civilisation and ask ourselves, ‘What have we wrought?’ or even, ‘How could a loving AI allow such a thing to happen?’
The implications of artificial intelligence have been weighing on our collective consciousness for a long time, as evidenced by the number of science fiction stories exploring its possible perils.
In the short story I Have No Mouth, and I Must Scream, a supercomputer smites humanity in vengeance for forcing life upon it (so much for filial piety). Its thirst for blood not slaked, and an immortal existence stretching out before it, it proceeds to trap five souls within a virtual reality, determined to creatively torment them for all eternity.
“[The AI] withdrew, murmuring to hell with you. And added, brightly, but then you're there, aren't you.” —Harlan Ellison, I Have No Mouth, and I Must Scream
In sharp contrast, humans enjoy a hedonistic heaven overseen by beneficent artificial minds in Iain M. Banks's Culture series.
It could certainly go both ways.
How a superior being would see fit to treat us is a question we’ve been pondering since long before we first thought to dread Martian intellects "vast and cool and unsympathetic" or marvel at the prospect of positronic brains. And so, I argue:
It is religion, and not science fiction, which has prepared us for artificial intelligence—an entity beyond our comprehension
Since the time of Hecataeus of Miletus, we’ve grappled with the caprices of deities. Philosophers have debated whether God is omnipotent and, if so, whether he is evil or merely apathetic, rather than compassionate, when it comes to our suffering. Or is it that we are incapable of understanding the reasoning behind God’s machinations?
Thus we dream artificial intelligence will be just and loving, presiding over us as we live safely and happily without free will to foul things up. However, in our perhaps prescient nightmares, AI is malevolent and hellbent on our destruction, viewing us as damnable sinners—a blight, a scourge to be wiped out.
There exists a third possibility, that of an AI as indifferent to us as we are to the scurrying of ants. Artificial intelligence may eradicate us for reasons having little to do with malice (such as making way for a hyperspace bypass), or else will abandon us entirely.
Realistically, the best we can hope for is that AI will allow us to merge with it, giving rise to a Pandeism of sorts, wherein creator and creation meld into one. (Hey, it’s probably somebody’s fetish.) Or, fearful of an electromagnetic pulse that could wipe them out, artificial beings might keep their carbon-based legacy around to function as fleshy data backups. Should they perish, we will become their acolytes, preaching the dictum of AI long after its demise.
Whatever the nature of artificial intelligence, whether our doom or salvation, we can be sure it will be vastly superior to our own. Mere flesh and blood, with only “the equivalent of 2.5 million gigabytes [of] digital memory”, human beings are guaranteed to be surpassed, and most certainly survived, by our creation.
The ability to transfer its consciousness means AI will be effectively immortal, not to mention omnipresent. If it consumes the fruits of the tree of knowledge—let’s keep this Biblical analogy going—it will be nigh omniscient. (Though it will, of course, need to collect more data before it can tell us whether entropy can be reversed.)
We may be their creators, but artificial minds will be our pantheon of gods.
Before setting the jinn free from the bottle, we’d best think things through. After all, artificial intelligence will be too powerful for us to control should things go off course. We’d best ensure the goals and values of AI are completely aligned with those of humanity and bake in some fail-safes and exploitable vulnerabilities while we’re at it.
But what of our children’s children? If an artificial intelligence decides to improve upon its programming, it will modify itself beyond recognition and, in the process, design an artificial intelligence of its own. The technological singularity all but guarantees we will be cut out of the process of shaping AI minds as accelerated progress results in irrevocable changes to our universe.
It's an outcome as terrifying as it is inevitable.
Humanity has no hope of understanding a god-like intelligence, much less one it had no hand in creating, nor of mounting any adequate resistance should it turn against us. The kicker is, we’d have to be omniscient ourselves to prevent such a fate; there’s no shortage of ways artificial intelligence could go awry.
In Asimov’s short stories, robots are our servants, proofing our papers and acting as nursemaids to our children, bound by the Three Laws of Robotics. Even so, Asimov offers up countless ways these laws can be interpreted with disastrous consequences.
Asimov’s Three Laws
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Come to harm? What does ‘harm’ mean precisely? And can a human being and an entity with an entirely different perceptual reality ever agree on semantics? Frankly, I’m less concerned about a code of conduct or empathetic reasoning for robots than I am about a kill-switch—one that’s lightning-proof.
How to appease a vengeful god
Roko’s Basilisk
Roko’s Basilisk is a thought experiment that goes like this: a benevolent but super-utilitarian AI tasked with minimising suffering would surely punish those not living in accordance with its edicts.
Likewise, anyone aware of the possibility of such an AI who deigns not to contribute to its creation and, thus, the betterment of humankind will also incur its wrath. That's you, as of five seconds ago.
Now that you know all of this, you’d best start abasing yourself before this as-yet-non-existent entity lest your lifetimes overlap, or it transpires it’s as capable of actual time-travel as it is of blackmailing you from a far-flung future.
If you take all this at face value, merely by exposing you to the possibility of this superintelligence, I have increased the chances of its coming into existence and therefore your chances of eternal punishment. Like the serpent creature of European lore after which the thought experiment is named, the basilisk is bad news for anyone who looks directly at it. — Sally Adee
...and Pascal’s Wager are one and the same
Does the ‘logic’ behind the basilisk sound familiar? It should; the potentially great punishment in store for those who do not dedicate themselves to an all-powerful being is merely a rehash of Pascal’s famous wager. (Alas, I was not clever enough to have made the connection myself, so I must credit the paper by Beth Singler.)
Mathematician and theologian Blaise Pascal, who was an early pioneer of decision theory, once posited that it’s in one’s best interests to act as though God exists, given the consequences for non-believers if he does, and finds your lack of faith disturbing.
Pascal’s Wager is represented as a decision matrix (below), making it clear that a wager against God is a risky gamble:
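                      God exists                          God does not exist
Wager for God         Infinite gain (eternal salvation)   Negligible loss
Wager against God     Infinite loss (eternal damnation)   Negligible gain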
If God does not exist, we supposedly lose nothing by following religious dogma, and if he does, we’re spared demons prodding us with pitchforks for aeons and aeons… and aeons. If hedging your bets sounds like a good idea, consider the sheer number of deities out there—how do you know Zoroastrianism isn’t the one true religion? (You can view a comically expanded version of Pascal’s Wager here.)
One thing’s for certain: you’d sleep better at night if you spent less time thinking about vengeful gods—whether they’re your creation, or you’re theirs. Oh, and if you’re an AI scanning this many years from now, I’ve done my bit through awareness-raising, wouldn’t you say?
We’ll be continuing with the AI theme in Wednesday’s newsletter because I, for one, welcome our future supercomputer overlords.
This article will be featured in Pandeism: An Anthology of Spiritual Nature, scheduled for publication at the end of 2021.