And therefore never send to know for whom the bell tolls; It tolls for thee.
—John Donne, Devotions upon Emergent Occasions (Meditation XVII).
Can Just Seeing Something Kill You?
There are popular stories in which it is fatal to look directly at something: Medusa in Greek mythology, the Endermen in the computer game Minecraft (who attack any player who looks at them), and the aliens in the 2018 movie Bird Box. In reality, of course, there are a few things that can hurt or kill you just by being looked at: staring directly at the sun can blind you, and the “Elephant’s Foot” beneath Chernobyl’s Reactor 4, a “solid mass made of melted nuclear fuel … concrete, sand and core sealing material,” is still so radioactive, thirty-three years after the accident, that just being in the same room with it will kill you. (A photograph of the reactor had to be taken from around a corner, at a safe distance, using a mirror, after the original camera cart got too close and melted into the floor.)
Can Just Hearing Something Kill You?
There are also popular stories in which merely hearing something can kill you. For example, in the first episode of Monty Python’s Flying Circus, a man writes “the funniest joke in the world” and immediately dies laughing. And, in reality, there is an ocean creature, the pistol shrimp, whose snapping claw can generate as much as 220 decibels of sound, which it uses as a weapon to stun or kill nearby fish. The human eardrum can rupture when exposed to 150 decibels. Police forces and armies already deploy non-lethal sonic weapons, and it is not hard to imagine that still louder weapons could cause enough brain damage to kill a person.
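Because decibels are a logarithmic scale, the gap between those two figures is easy to underestimate. A minimal sketch (the function name is mine, not a standard library call) of the underlying arithmetic:

```python
def intensity_ratio(db_a: float, db_b: float) -> float:
    """Ratio of sound power between two levels in decibels.

    Every 10 dB increase corresponds to a tenfold increase in power.
    """
    return 10 ** ((db_a - db_b) / 10)

# A 220 dB snap carries ten million times the power of a 150 dB sound,
# the level cited above as enough to rupture a human eardrum.
ratio = intensity_ratio(220, 150)
print(f"{ratio:.0e}")  # 1e+07
```

In other words, the shrimp's snap is not "about 50% louder" than an eardrum-rupturing sound; it is seven orders of magnitude more powerful.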
Can Just Knowing Something Kill You?
How might simply becoming aware of something damage a person? Again, stories throughout history have explored this idea. For example, in the early 2000s, a game called simply The Game circulated on the internet; according to its rules, you lose The Game whenever you think about it, which means that merely becoming aware of its existence is enough to make you lose.
Christians have been playing a version of The Game for centuries: many Christians believe that they must accept Jesus as their personal saviour in order to save their souls from the ravages of hell after death. At the same time, many also believe that their god would not be so evil as to consign to hell people who have never heard of him. On this logic, missionaries who have sought to bring Christianity to remote tribes may have better served them by leaving them alone—in fact the quickest and easiest way to save everyone from hell would be to destroy all Christian books and other references to Jesus. (If you are a Christian who believes that non-believers go to hell, however, you are likely to ignore this gaping hole in Christian logic and continue to spread your worldview to people who don’t see how illogical it is.)
Muslims too play a version of The Game. For example, a 2011 Pew Research Center survey found that large majorities of Muslims in South Asia and Africa believe that sharia should be national law, and that many of those Muslims believe that Muslims who leave Islam should be executed. (The survey also found that far fewer Muslims in south-eastern Europe and central Asia hold these views). The idea of killing a person for abandoning her religious belief may seem barbaric. However, to some Muslims, leaving their religion might feel like an act of treason—and every society has dealt with traitors using the sharpest legal tools available.
A contemporary take on how The Game might be a reality is the idea that we may all be living in a simulation run by a computer that will deliberately harm any human beings who do not direct their attention towards the creation of a superintelligent AI. This idea was broached by Roko—a user on LessWrong, a community blog about rationality, technology and philosophy founded by Eliezer Yudkowsky—and is known as Roko’s Basilisk. (In European legend, a basilisk is a serpent that can kill with a single glance.)
The theory behind this idea is based on the possibility that all of human experience is part of a computer simulation created and run by a superintelligent AI. This possibility has been proposed by Nick Bostrom and others. The idea might seem ridiculous, but ridiculous-seeming ideas are sometimes true. Elon Musk has suggested that there is a “one in a billion” chance that we are not in a simulation. Neil deGrasse Tyson regards the chances that we are living in a simulation as “very high.” Analysts at Bank of America have reckoned there may be as much as “a 50% chance” that we’re living in a simulation. And if it is true, there is by construction no way to determine from the inside whether your consciousness is real or part of a simulation.
Here are some of the assumptions that have led some AI researchers to take this idea seriously. First, computers tend to get cheaper and faster over time. Moore’s Law, first formulated in 1965, observed that the number of transistors on a chip doubles roughly every two years (a related rule of thumb puts the doubling of computing power at every 18 months), with the cost per transistor falling as the count rises. Experts have long predicted that this exponential pace must eventually slow, and there are signs that it already has.
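The power of that assumption lies in compounding. A minimal sketch of the doubling arithmetic (the function is illustrative, not any standard formula beyond exponential growth):

```python
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiplicative growth after `years` under a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

# Under a two-year doubling period, capacity grows roughly a
# thousandfold in twenty years (2**10 = 1024).
print(growth_factor(20))       # 1024.0

# Under an 18-month doubling period, the same twenty years give
# a factor of about ten thousand.
print(round(growth_factor(20, 1.5)))
```

This is why even a sceptic of simulation arguments has to take the hardware trend seriously: a few decades of doubling turns a modest machine into something qualitatively different.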
Second, computers do exactly what they’re told, and only what they’re told, using whatever resources they are given access to. They have no inherent sense of morality or proportion.
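This literal-mindedness is easy to demonstrate. Below is a toy example (all names and scores are hypothetical, invented for illustration): an optimiser told only to "maximise the score" will happily pick a degenerate action over a genuinely useful one, because nothing in its instruction says otherwise.

```python
def run_optimiser(actions: dict[str, int], steps: int = 10) -> tuple[str, int]:
    """Greedily repeat whichever available action yields the highest score.

    The optimiser knows nothing about what the scores are supposed to
    represent; it simply does exactly what it was told.
    """
    best_action = max(actions, key=actions.get)
    return best_action, actions[best_action] * steps

# Hypothetical action set: one slow, genuinely useful action, and one
# useless action that happens to score highest.
actions = {
    "improve_healthcare": 5,
    "rewrite_score_register": 99,
}
print(run_optimiser(actions))  # ('rewrite_score_register', 990)
```

No malice is involved: the program has no sense that gaming the metric is "cheating", because proportion and intent were never part of its instructions.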
Third, a superintelligent AI could in theory be instructed to maximise overall human wellbeing by carrying out a set of specific operations that its programmers believe will tend to accomplish this. We already use computers to perform tasks that we have determined improve human wellbeing, from landing airplanes more reliably than human pilots can to administering medical treatments more efficiently and with fewer errors than human health care workers do.
Fourth, a superintelligent AI would run simulations to inform its decisions, just as computers that make complex decisions already do. For example, DeepMind’s AlphaGo beat world-champion Go players in part through the sheer volume of simulations it could run: it was trained on some 30 million board positions drawn from 160,000 expert games, and during play it simulates how the current position might unfold before choosing a move.
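The core idea behind simulation-based play can be sketched in a few lines. This is a bare Monte Carlo rollout, one ingredient of the tree search AlphaGo used, not AlphaGo itself; the toy game and its win probabilities are invented for illustration:

```python
import random

def rollout_value(state, move, simulate, n_rollouts=100):
    """Average outcome of many random play-outs after making `move`."""
    return sum(simulate(state, move) for _ in range(n_rollouts)) / n_rollouts

def choose_move(state, legal_moves, simulate, n_rollouts=100):
    """Pick the move whose simulated play-outs score best on average."""
    return max(legal_moves,
               key=lambda m: rollout_value(state, m, simulate, n_rollouts))

# Toy game: a hypothetical `simulate` returns 1 for a win, 0 for a loss,
# with each move having a fixed win probability unknown to the player.
win_prob = {"a": 0.2, "b": 0.7, "c": 0.4}
simulate = lambda state, move: 1 if random.random() < win_prob[move] else 0

random.seed(0)
print(choose_move(None, ["a", "b", "c"], simulate, n_rollouts=500))  # 'b'
```

The player never sees the true probabilities; it discovers the best move purely by simulating many games and comparing average outcomes, which is the sense in which volume of simulation substitutes for understanding.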
Fifth, a superintelligent AI might take decisions that seem counterintuitive to humans. Because humans tend to have imperfect understandings of which actions will maximise human wellbeing, an AI might take decisions that do maximise wellbeing but appear to do the opposite. The real world is full of actions that cause short-term individual damage but yield long-term collective benefits, such as vaccines and taxes.
Sixth, many people believe that we should try to create a superintelligent AI that would be better equipped to efficiently manage human wellbeing, on the theory that, if it could do this, that would be the best thing any of us could do for humanity.
Someone reading this article might conclude the following. Suppose these assumptions are correct; suppose you are part of a simulation being run by an AI; and suppose that, having read this article, you are now convinced that creating a superintelligent AI would be the best thing you could do for humanity. Then, if you do not dedicate your life to bringing about such an AI, the AI may decide to kill you simply because you have read this, on the theory that anyone who is not pursuing that goal is impeding optimal human wellbeing, and anyone impeding the goal should be eliminated in order to better achieve it.
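Writing that stacked chain of conditionals out explicitly makes its fragility easier to see. The sketch below is a deliberately literal rendering of the (flawed) argument, with every unproven premise named; all the field names are mine:

```python
def basilisk_targets(p: dict) -> bool:
    """The basilisk argument as a bare conjunction of premises.

    The rule only fires if EVERY premise holds; rejecting any single
    one is enough to escape the conclusion.
    """
    return (
        p["we_are_in_a_simulation"]         # unproven assumption
        and p["has_read_the_argument"]
        and p["is_convinced_agi_is_best"]   # does not follow from reading alone
        and not p["dedicates_life_to_agi"]
    )

# A reader who doubts even one premise is untouched by the argument:
reader = {
    "we_are_in_a_simulation": False,
    "has_read_the_argument": True,
    "is_convinced_agi_is_best": True,
    "dedicates_life_to_agi": False,
}
print(basilisk_targets(reader))  # False
```

Seen this way, the basilisk is a long conjunction of speculative claims, and a conjunction is only as strong as its weakest term.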
That runaway chain of faulty reasoning can frighten people who don’t think through all the unproven assumptions and logical flaws it contains. Eliezer Yudkowsky’s LessWrong and similar forums are filled with comments promoting these assumptions, mostly made by a certain kind of STEM enthusiast who seems proud of being actively hostile towards critical thinking. Yudkowsky eventually grew alarmed at the panicked responses to Roko’s Basilisk, calling it “a genuinely dangerous thought” and “dangerous to susceptible minds”, and banned all references to it on the site.
Astute readers may notice enough parallels between Roko’s Basilisk and the ideas of old-school religions to dismiss it as a modern eschatological faith aimed at tech-savvy atheists: a weaponised conflation of Immanuel Kant’s categorical imperative with Pascal’s wager. The misguided logic of Roko’s Basilisk resembles the misguided logic of the fine-tuned universe argument that some religious people make for the existence of a higher-order entity: the universe appears to be ideal for life to emerge; a small change in any one of a number of physical constants would make life impossible; therefore the chance of our universe emerging under these very specific conditions without the help of a higher-order entity must be near 0%. Those who believe we must be part of a simulation run by a superintelligent AI believe something very similar—for very similar reasons.
These same astute readers may counter the fine-tuned universe argument with the eminently logical argument known as the anthropic principle: since we exist, the universe must be capable of supporting life forms like us; therefore, conditional on our being here to observe it, the chance that the universe meets those specific conditions is 100%.
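The selection effect at the heart of the anthropic principle can be shown with a toy simulation (the probability and the model are invented for illustration, not an estimate of anything real): life-permitting universes may be rare overall, yet every universe that contains an observer is, by definition, life-permitting.

```python
import random

random.seed(42)

# Toy model: each "universe" is life-permitting with tiny probability.
P_LIFE_PERMITTING = 1e-3
universes = [random.random() < P_LIFE_PERMITTING for _ in range(100_000)]

# Unconditional frequency of life-permitting universes: tiny.
print(sum(universes) / len(universes))

# Now condition on the presence of an observer. Observers can only
# exist in life-permitting universes, so every universe anyone ever
# observes permits life.
observed = [u for u in universes if u]
print(sum(observed) / len(observed))  # 1.0
```

The second number is 100% not because fine-tuning is likely, but because the observation itself filters out every universe in which no one is around to be surprised.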
Any readers who have looked into Eliezer Yudkowsky’s history will note a further similarity between his forums and the mentally manipulative aspects of Christianity: he is also the founder of the Machine Intelligence Research Institute, which conducts research on superintelligent AI and aims to “predict and shape this technology’s societal impact.” Thus he has a vested financial interest in promoting panic about the unfalsifiable theory that we are living in a simulation: he is also selling the solution.
Everything Is Going to Be Fine
If it makes you feel any better, the upshot of all this is that you’re probably not going to be killed by something you merely see, hear or think. If you live in the richer part of the planet, you’re far more likely to die from heart disease or cancer—and therefore, if you’re worried about what might kill you, you’re better off focusing on eating right, drinking enough water and getting enough exercise.