And therefore never send to know for whom the bell tolls; It tolls for thee.
—John Donne, Devotions upon Emergent Occasions (Meditation XVII).
Can Just Seeing Something Kill You?
There are popular stories in which it is fatal to look directly at something: Medusa in Greek mythology and, more recently, the Enderman in the computer game Minecraft, which attacks anyone who looks at it, and the unseen creatures in the 2018 film Bird Box. In reality, of course, there are a few things that can hurt or kill you if you merely look at them: looking directly at the sun can blind you, and Chernobyl’s Reactor 4, a “solid mass made of melted nuclear fuel … concrete, sand and core sealing material,” is still so radioactive, thirty-three years after the accident, that simply being in the same room with it will kill you. (A photograph of the reactor had to be taken from around a corner, at a safe distance, using a mirror, after the original camera cart got too close and melted into the floor.)
Can Just Hearing Something Kill You?
There are also popular stories in which merely hearing something can kill you. In the first episode of Monty Python’s Flying Circus, for example, a man writes “the funniest joke in the world” and immediately dies laughing. And, in reality, there is an ocean creature, the pistol shrimp, that can generate as much as 220 decibels of sound with the snap of its claw, a weapon it uses to stun and kill nearby fish. The human eardrum ruptures at around 150 decibels. Police forces and armies already deploy non-lethal sonic weapons, and it is not hard to imagine that even louder weapons might cause enough brain damage to kill a person.
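A quick aside on the arithmetic: decibels are logarithmic, so the gap between those two figures is far larger than it looks. The sketch below is a toy illustration using the numbers cited above; underwater and airborne decibel readings use different reference pressures, so take it as a rough picture of scale rather than a precise comparison.

```python
# Toy illustration: decibels are logarithmic, so a 70 dB gap hides an
# enormous ratio. Sound intensity scales as 10^(dB / 10).

def intensity_ratio(db_a: float, db_b: float) -> float:
    """How many times more intense a db_a sound is than a db_b sound."""
    return 10 ** ((db_a - db_b) / 10)

shrimp_snap = 220      # pistol shrimp snap, as cited above (dB)
eardrum_rupture = 150  # approximate eardrum-rupture threshold (dB)

print(f"{intensity_ratio(shrimp_snap, eardrum_rupture):,.0f}x")
# -> 10,000,000x: the snap is ten million times more intense
```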
Can Just Knowing Something Kill You?
How might simply becoming aware of something harm a person? Stories throughout history have explored this idea too. In the early 2000s, for example, a game called simply The Game circulated on the internet; according to its rules, you lose the moment you become aware that The Game exists.
Christians have been playing a version of The Game for centuries: many Christians believe that they must accept Jesus as their personal saviour in order to save their souls from the ravages of hell after death. At the same time, many also believe that their god would not be so evil as to consign to hell people who have never heard of him. On this logic, the missionaries who have sought to bring Christianity to remote tribes would have served them better by leaving them alone; indeed, the quickest and easiest way to save everyone from hell would be to destroy all Christian books and every other reference to Jesus. (If you are a Christian who believes that non-believers go to hell, however, you are likely to ignore this gaping hole in the logic and to continue spreading your worldview to people who do not see how illogical it is.)
Muslims, too, play a version of The Game. A 2011 Pew Research Center survey found that large majorities of Muslims in South Asia and Africa believe that sharia should be national law, and that many of those Muslims believe that Muslims who leave Islam should be executed. (The survey also found that far fewer Muslims in south-eastern Europe and Central Asia hold these views.) The idea of killing a person for abandoning her religious belief may seem barbaric; to some Muslims, however, leaving the religion might feel like an act of treason, and every society has dealt with traitors using the sharpest legal tools available.
A contemporary take on how The Game might be real is the idea that we may all be living in a simulation run by a computer that will deliberately harm any human being who knows about it but does not direct her attention towards the creation of a superintelligent AI. The idea was first broached by Roko, a user on LessWrong, a forum about technology and philosophy founded by Eliezer Yudkowsky, and is known as Roko’s Basilisk. (The basilisk of European legend is a serpent that can kill with a single glance.)
The theory behind this idea rests on the possibility, proposed by Nick Bostrom and others, that all of human experience is part of a computer simulation created and run by a superintelligent AI. The idea might seem ridiculous, but ridiculous-seeming ideas are sometimes true. Elon Musk has suggested that there is only a “one in a billion” chance that we are not in a simulation. Neil deGrasse Tyson regards the chances that we are living in a simulation as “very high.” Analysts at the Bank of America have reckoned there may be as much as “a 50% chance” that we are living in a simulation. And, if it is true, there is no way to determine from the inside whether your consciousness is real or simulated.
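For readers curious where such confident-sounding percentages come from, Bostrom’s 2003 paper boils the argument down to a single fraction. Here is a minimal sketch of that fraction in code; the input values are purely illustrative guesses, not estimates anyone has defended.

```python
# Minimal sketch of the fraction at the heart of Nick Bostrom's 2003
# simulation argument. The inputs below are purely illustrative guesses.

def simulated_fraction(f_posthuman: float, sims_per_civ: float) -> float:
    """Fraction of all human-like observers who live in simulations.

    f_posthuman:  fraction of civilisations that reach a 'posthuman' stage
    sims_per_civ: average number of ancestor-simulations each such
                  civilisation runs
    """
    expected_sims = f_posthuman * sims_per_civ
    return expected_sims / (expected_sims + 1)

# Even a tiny chance of reaching posthumanity, multiplied by many
# simulations per civilisation, swamps the single 'real' history:
print(simulated_fraction(f_posthuman=0.001, sims_per_civ=1_000_000))
# -> 0.999000999...  (about 99.9% of observers would be simulated)
```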
Here are some of the assumptions that have led some AI researchers to propose this idea. First, computing tends to get cheaper and faster over time. Moore’s Law, first formulated in 1965, observed that the number of transistors on a chip doubles roughly every two years; in its popular form it holds that computing power doubles every eighteen months while prices fall at a similar pace. Experts have long predicted that this rate must eventually slow, but not any time soon.
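As a rough worked example of what that implies, here is the compounding arithmetic behind an eighteen-month doubling period (the popular formulation, not Moore’s original transistor-count observation):

```python
# Rough worked example of what "doubling every 18 months" compounds to.

DOUBLING_PERIOD_YEARS = 1.5  # the popular eighteen-month formulation

def growth_factor(years: float) -> float:
    """How many times more computing power after the given span of years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 15, 30):
    print(f"{years:>2} years -> {growth_factor(years):>12,.0f}x")
#  3 years ->            4x
# 15 years ->        1,024x
# 30 years ->    1,048,576x
```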
Second, computers do exactly what they’re told, and only what they’re told, using whatever resources they are given access to. They have no inherent sense of morality or proportion.
Third, a superintelligent AI could in theory be programmed to maximise the overall amount of human wellbeing, by being instructed to carry out operations its programmers believe will tend to accomplish this. We already use computers for tasks we have decided improve human wellbeing, from landing airplanes more reliably than human pilots can to administering medical treatments more efficiently, and with fewer errors, than human health-care workers can.
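Taken together, the second and third assumptions are where the trouble starts: an optimiser pursues its literal objective, not the intent behind it. The toy sketch below illustrates the point; the actions and their “measured wellbeing” scores are invented for the example.

```python
# Toy sketch: an optimiser maximises the number it is given, with no
# concept of whether the number still tracks what its designers meant.
# The actions and "measured wellbeing" scores are invented for the example.

ACTIONS = {
    # action:          (measured wellbeing, what it actually does)
    "fund_hospitals":  (5, "genuinely helps people"),
    "game_the_survey": (9, "inflates the metric, helps no one"),
    "do_nothing":      (0, "neutral"),
}

def pick_action(actions: dict[str, tuple[int, str]]) -> str:
    # The second field, the honest description, is invisible to the
    # optimiser: it sees only the score it was told to maximise.
    return max(actions, key=lambda name: actions[name][0])

print(pick_action(ACTIONS))  # -> game_the_survey
```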
Fourth, a superintelligent AI would run simulations to inform its decisions, just as computers that make complex decisions already do. The AI that defeated a world-champion Go player, for example, won in part through the sheer volume of games it could simulate, building on a training set of some 30 million board positions drawn from 160,000 human games.
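The general idea can be sketched in a few lines: for each legal move, play many random games to the end and keep the move that wins most often. The following is a minimal flat Monte Carlo player for a trivial take-away game, not AlphaGo’s actual method, which combines Monte Carlo tree search with neural networks.

```python
# Minimal flat Monte Carlo player for a trivial game: players alternately
# take 1-3 stones from a pile, and whoever takes the last stone wins.
import random

def random_playout(pile: int, my_turn: bool) -> bool:
    """Finish the game with uniformly random moves; True if 'we' win."""
    while pile > 0:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return my_turn
        my_turn = not my_turn
    return not my_turn  # pile already empty: the previous mover (us) won

def best_move(pile: int, playouts: int = 5000) -> int:
    """Estimate each legal move's win rate by random simulation."""
    win_rate = {}
    for move in range(1, min(3, pile) + 1):
        wins = sum(random_playout(pile - move, my_turn=False)
                   for _ in range(playouts))
        win_rate[move] = wins / playouts
    return max(win_rate, key=win_rate.get)

print(best_move(10))  # typically 2: it leaves the opponent a multiple of 4,
                      # the same move perfect play would choose
```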
Fifth, a superintelligent AI might take decisions that seem counterintuitive to humans. Because humans have an imperfect understanding of which actions will maximise human wellbeing, an AI might take decisions that maximise wellbeing while appearing to do the opposite. The real world is full of actions that cause short-term individual harm but yield long-term collective benefits, such as vaccines and taxes.
Sixth, many people believe that we should try to create a superintelligent AI precisely because it would be better equipped than we are to manage human wellbeing efficiently; on this theory, helping to bring such an AI into existence would be the best thing any of us could do for humanity.
Someone reading this article might conclude that, if these assumptions are correct, and if you are part of a simulation being run by such an AI, and if, having read this article, you are now convinced that creating a superintelligent AI would be the best thing you could do for humanity, then the AI may decide to kill you if you do not now dedicate your life to bringing it about, simply because you have read this: on the theory that anyone who knows the goal and is not pursuing it is impeding optimal human wellbeing, and should be eliminated so that the goal can be better achieved.
That runaway chain of faulty reasoning can frighten people who do not think through all the unproven assumptions and logical flaws it contains. Eliezer Yudkowsky’s LessWrong forum and similar forums are filled with comments promoting these assumptions, mostly made by a certain kind of STEM enthusiast who seems proud of being actively hostile towards critical thinking. Yudkowsky eventually grew alarmed at the panicked responses to Roko’s Basilisk, calling it “a genuinely dangerous thought” and “dangerous to susceptible minds,” and banned all references to it.
Astute readers may notice so many parallels between Roko’s Basilisk and the ideas of old-school religions that they can dismiss it as a modern eschatological faith aimed at tech-savvy atheists: a weaponised conflation of Immanuel Kant’s categorical imperative with Pascal’s wager. The misguided logic of Roko’s Basilisk resembles the misguided logic of the fine-tuned universe argument that some religious people make for the existence of a higher-order entity: the universe appears to be ideal for life to emerge; a change in any one of a number of physical constants would make life impossible; therefore the chance of our universe emerging under these very specific conditions without the help of a higher-order entity must be near 0%. Those who believe we must be part of a simulation run by a superintelligent AI believe something very similar, for very similar reasons.
These same astute readers may counter the fine-tuned universe argument with the eminently logical argument known as the anthropic principle: since we exist, any universe we find ourselves observing must necessarily be one that supports life forms like us; therefore, given that we are here to ask the question, the chance that we would observe a universe with those specific conditions is 100%.
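The selection effect is easy to make concrete with a toy simulation; the “constant” and the habitability window below are invented, and only the logic matters: however rare observer-friendly universes are when counted from the outside, every observer who ever counts finds herself in one.

```python
# Toy illustration of the anthropic selection effect. The "constant" and
# the habitability window are invented; only the logic matters.
import random

random.seed(0)

# A million universes, each with one made-up physical constant:
universes = [random.uniform(0, 1) for _ in range(1_000_000)]

# Observers can arise only inside a very narrow window:
habitable = [c for c in universes if 0.4999 < c < 0.5001]

# Counted from "outside", habitable universes are vanishingly rare:
print(f"habitable fraction of all universes: {len(habitable) / len(universes):.6f}")

# But every observer, by definition, lives in a habitable universe, so the
# fraction of observers who find their universe "fine-tuned" is exactly 1:
observers = habitable  # observers exist nowhere else
print(f"observers in a habitable universe: {len(observers) / len(habitable):.0%}")
```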
Any reader who has looked into Eliezer Yudkowsky’s history will note a further similarity between his forums and the mentally manipulative aspects of Christianity: he is also the founder of the Machine Intelligence Research Institute, which studies how to make smarter-than-human AI systems safe and aims to “predict and shape this technology’s societal impact.” He thus has a vested financial interest in promoting panic about the unfalsifiable theory that we are living in a simulation: he is also selling the solution.
Everything Is Going to Be Fine
If it makes you feel any better, the upshot of all this is that you’re probably not going to be killed by something you merely see, hear or think. If you live in a richer part of the planet, you’re far more likely to die of heart disease or cancer; so, if you’re worried about what might kill you, you’re better off focusing on eating well, drinking enough water and getting enough exercise.