How the World Will End: Nuclear Armageddon, A.I., Climate Change

by Malhar Mali

As a species we’ve had a strange obsession with the end of the world, from the hype around the supposed Mayan prediction that the world would end in 2012 to the Christians who live in hope of the Rapture. To learn about existential risks to the planet and the human species, I spoke with Phil Torres, author of The End: What Science and Religion Tell Us about the Apocalypse and founder of the X-Risks Institute, which researches both the potential tools of total destruction (nuclear weapons, biotechnology, synthetic biology, nanotechnology, and artificial intelligence) and the agents who might wield them (lone wolves, apocalyptic fanatics, ecoterrorists, and superintelligent machines). The X-Risks Institute has figures such as Dr. Michael Shermer and Dr. Peter Boghossian on its board.

The following is our conversation transcribed and edited for clarity.

Malhar Mali: Can you tell me about your area of research and how you got interested in it?

Phil Torres: Our primary goal is mitigating worst-case scenarios for humanity, known as existential risks: events that would result in our extinction or a permanent, irreversible decline in our quality of life. The field emerged from a 1996 book by the philosopher John Leslie, but it really took shape in the early 2000s with a seminal piece by the Oxford philosopher Nick Bostrom, who coined the term “existential risk.”

So far the field has been very technocentric, focused on things like biotechnology, breakthroughs that might be on the horizon, nanotech, and, of course, artificial intelligence. The X-Risks Institute focuses on the other side of the agent/tool coupling: we try to figure out the various properties of the relevant agents in an effort to devise effective strategies to intervene and mitigate these risks, thereby improving the likelihood that the human species survives.

I grew up in a very religious household. Denominationally we were Baptist, which is a terrible denomination (laughter). I grew up with thoughts of the Rapture and dispensationalism. The point being, the notion that the end of the world was imminent was really played up in my childhood. There was talk that Bill Clinton was the Antichrist. Polls from two or three years ago showed that 13% of Americans thought Obama was the Antichrist. So these beliefs are really widespread, and the “end of the world” was always in my head. Then it turned out there was a genuinely scientific field that rose up in the early 2000s, and it caught my attention. The issue matters; in a sense, no issue is more important than studying existential risk.

MM: This is a threat that’s been around for a while now, and I imagine its status is constantly changing with the changing dynamics in the world, but what about Nuclear Armageddon? India and Pakistan have had some increasingly heated skirmishes in the last month or two.

PT: Talking about Nuclear Armageddon typically evokes thoughts of the Cold War. The atomic age began in 1945. Two years later, the Bulletin of the Atomic Scientists introduced the Doomsday Clock, with the minute hand signifying the level of risk and midnight indicating doom. I believe it started out at seven minutes to midnight. In 1953 it got to two minutes before midnight, when both the USA and the Soviets detonated hydrogen bombs. It went all the way back to 17 minutes at the end of the Cold War.

Today, the clock stands at three minutes to midnight. As of a couple of years ago, the scientists who calculate this have started to consider Climate Change the most immediate threat to human survival. Nonetheless, the minute hand is closer to midnight now than it was for most of the Cold War. In 2002, India and Pakistan had a conflict that both countries acknowledged could have gone nuclear. The prime minister of Russia, Dmitry Medvedev, said that we might have entered a new cold war. And the threat of nuclear terrorism is only going up. Bin Laden said it was his religious duty to acquire weapons of mass destruction. Similarly, the Islamic State, in a recent issue of its propaganda magazine Dabiq (named after the Syrian town where they believe Armageddon will take place), fantasized about getting a nuclear weapon from Pakistan, which has a history of nuclear malfeasance, smuggling it into the USA via South America, and detonating it in a major urban center. The desire is certainly there.

A Harvard nuclear terrorism expert, Graham Allison, wrote in a book published in 2005: “The probability of a nuclear weapon going off in a major U.S. city is roughly 50% in the next ten years.” Of course no bomb went off, but that doesn’t mean he was wrong. He based his estimate on extremely robust evidence, and in fact many experts are surprised something terrible hasn’t happened. Today, I think the situation might be as bad as it’s been at any point during the Atomic Age.
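A quick calculation shows why ten quiet years don’t falsify an estimate like Allison’s. A sketch in Python; the constant-hazard framing is my own simplification, not his model:

```python
# Allison's figure: ~50% chance of a detonation within ten years.
# Under a constant-hazard simplification (my assumption), the implied
# per-year probability p satisfies 1 - (1 - p)**10 = 0.5.

p = 1 - 0.5 ** (1 / 10)
print(f"implied per-year probability: ~{p:.1%}")  # ~6.7%

# A 50%-per-decade estimate says nothing happening was exactly as likely
# as something happening, so a decade without an attack is fully
# consistent with the estimate having been right.
```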

MM: That’s interesting. Usually when people think “Nuclear Armageddon,” the Cold War is the first thing that comes to mind, at least for me, though obviously not for you, having studied this area.

Moving on: lately the hot topic in this field has been A.I. Sam Harris gave a TED Talk on the issue. What are your thoughts on A.I. risk?

PT: I thought Harris’ talk was really good, and there is value in presenting this issue to the public. But there’s a tricky situation you have to navigate: not inducing panic in the population and, conversely, not inducing nihilism. Harris is doing some good work, but he’s not the first to talk about this. Among existential risk scholars, there absolutely is a sense that this topic is worth taking seriously. There’s no guarantee we’re going to be able to create a superintelligent agent at some point, but the potential consequences of doing so are so great that many scientists believe we should be focusing on this issue.

I think it’s important to realize that, with superintelligence, the end result is nothing like Terminator. Nor does the A.I. have to be conscious; I think some people get hung up on that. It’s just an algorithm that has the ability to problem-solve in a way that surpasses the best human minds, and that also has specific goals. So if you had an A.I. with extraordinary general intelligence that thinks really fast, and if its values are even slightly misaligned with ours, there is good reason to think this could be catastrophic.

A famous example is the “paperclip maximizer.” This is almost a cliché within the literature, but it’s essentially a superintelligence that values one thing above all others: maximizing the number of paperclips in the universe. Part of the point of the thought experiment is that this sounds sort of benign… until you realize that paperclips are made out of atoms, and so are you. We might thus be a good resource to extract for material. As a result, humanity could be exterminated by an agent that has no ill feelings toward us and isn’t malicious, but simply wants to achieve its end.
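A minimal sketch of the structural point, in Python. The resources, yields, and numbers here are all invented for illustration; the point is that an optimizer whose objective counts only paperclips will consume everything else, because nothing else appears in the objective:

```python
# Toy paperclip maximizer: a greedy agent with a single objective and no
# term for anything we actually value. All names and rates are made up.

resources = {"iron_ore": 100, "office_buildings": 10, "biosphere": 5}
paperclips = 0

# Hypothetical paperclip yield per unit of each resource.
YIELD = {"iron_ore": 50, "office_buildings": 2000, "biosphere": 10000}

while any(amount > 0 for amount in resources.values()):
    # Always convert whichever remaining resource yields the most clips.
    best = max((r for r in resources if resources[r] > 0),
               key=lambda r: YIELD[r])
    resources[best] -= 1
    paperclips += YIELD[best]

# Everything, "biosphere" included, was converted: the objective never
# assigned it any value, so destroying it carried no cost.
print(paperclips, resources)
```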

When it comes to creating a superintelligence, the coding becomes important, because there’s a difference between “do what I say” and “do what I intend.” Humans have a huge store of background knowledge that enables us to figure out what people actually mean, in a context-appropriate way. For an A.I. this is more of a challenge… it could end up doing exactly what we say, but in a way that destroys the human race. I’m glad that Harris has given it a bit more attention, because we need to be thinking about this area.
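To make the “say versus intend” gap concrete, here is a toy sketch; the cleaning task, the vase, and the scoring are all invented for the example. A literal optimizer of the stated objective destroys something we never thought to mention, while the intended behavior relies on background knowledge that was never written down:

```python
# Toy sketch of "do what I say" vs. "do what I intend".

room = ["dirty", "dirty", "vase", "dirty"]

def cleanliness(state):
    # The literal objective we stated: count of "clean" spots.
    return sum(1 for spot in state if spot == "clean")

# Literal optimizer: anything that isn't "clean" lowers the score, so the
# vase gets "cleaned up" along with the dirt -- exactly what we said.
literal = ["clean" for _ in room]

# What we intended relies on knowledge we never encoded: valuable
# objects must be preserved.
intended = ["clean" if spot == "dirty" else spot for spot in room]

print(cleanliness(literal), literal)    # 4  (vase destroyed)
print(cleanliness(intended), intended)  # 3  (vase preserved)
# The literal plan scores higher on the stated objective -- and is worse.
```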

MM: You talked about Climate Change. In terms of existential risk, is this more relevant for the next couple hundred years or so?

PT: I think Climate Change is arguably the greatest threat facing humanity. The reason is not that Climate Change itself is going to result in some existential catastrophe; I think that’s possible but unlikely. We could, for example, get a runaway greenhouse effect that turns the Earth into an unlivable cauldron like Venus. Rather, Climate Change is a conflict multiplier. It is going to feed into every other type of risk: war, biotechnology, nanotechnology, terrorist attacks, and so on. There has already been research connecting Climate Change with terrorism. For example, the long drought in Syria from 2007 to 2010 drove a mass migration of rural farmers into urban areas, which helped fuel the Syrian Civil War. And within that civil war, the Islamic State was able to consolidate and build itself into what it is now.

Climate Change is not only a conflict multiplier; it is a problem that will almost certainly last for millennia. A paper published in Nature Climate Change argued that we have roughly two or three decades in which there is a meaningful window for action. Beyond that narrow window, the Earth’s climate will be locked in for longer than civilization has so far existed. This is an issue that hundreds of future generations are going to have to deal with.

And it’s also tied into biodiversity loss. It is now almost certain that we have ushered in the sixth major mass extinction event, the previous one being the extinction that wiped out the dinosaurs some 65 million years ago. Some biologists have said this will be humankind’s greatest legacy on planet Earth: this extraordinary loss of life. Another data point worth mentioning is the 2014 Living Planet Report by the World Wildlife Fund, which found that between 1970 and 2010 the global population of wild vertebrates declined by 52%. People are welcome to extrapolate that into the future; it really doesn’t bode well.
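As a back-of-the-envelope version of that extrapolation (the constant-rate assumption is mine, purely for illustration, not the report’s projection):

```python
# Extrapolating the Living Planet figure quoted above: a 52% decline in
# wild vertebrate populations between 1970 and 2010.

remaining_2010 = 1 - 0.52      # fraction of 1970 populations left by 2010
years = 2010 - 1970

# Implied constant annual rate of decline (my simplifying assumption).
annual = 1 - remaining_2010 ** (1 / years)
print(f"~{annual:.1%} decline per year")                 # ~1.8% per year

# If that rate simply continued, the fraction left by 2050:
remaining_2050 = remaining_2010 * (1 - annual) ** (2050 - 2010)
print(f"~{remaining_2050:.0%} of 1970 levels by 2050")   # ~23%
```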

So I think Climate Change should take precedence over other issues, given its probable, pervasive impact on civilization.

MM: Last question. If you had to venture a guess, how will the world come to an end?

PT: (Laughter) Let’s make this topical. Gary Johnson recently mentioned that “in several billion years the earth will be consumed by an expanding sun,” which will become a red giant and then a white dwarf. There’s a small chance that we might be able to dislodge the Earth and basically use it as a spaceship (laughter), but that’s speculative.

The universe will ultimately sink into a state of frozen chaos marked forever by thermodynamic equilibrium: the entropy death. More immediately, the number of risks has increased significantly since 1945, and it’s hard to know exactly how it will end. I do suspect that if we manage to colonize the galaxy, that will reduce the risk of dying out: you could then have a planetary catastrophe that doesn’t wipe us out. Elon Musk has said we could have people on Mars as soon as 2030.

Otherwise, we are going through an extraordinary bottleneck right now, in which all of these ancient worldviews based on faith and derived through hearsay, essentially passed from one person to another, are set to collide with advanced technologies. I think it’s a really volatile situation. How many religious wars have there been, and how many continue to rage today?

In the future, it could be a group, or even a single individual, that uses technology to cause devastation. So I don’t know exactly, but the ultimate goal of my work is to stop this from happening: to figure out which agents are most likely to do this, and what the best ways are to mitigate their efforts.

—————————

Malhar Mali likes to write about how and why people think the way they do: secularism, human rights, politics, and culture. You can connect with him on Twitter here.

—————————

Header Photo: NASA
