Humanity’s capacity for critical thinking is of paramount importance to our survival. This remains true, even in our hyperreal information age, in which Artificial Intelligence (AI) is playing an ever-increasing role in human affairs. Yet, as our universities and higher education institutions become more technically utilitarian, and as our populations succumb more readily to coarse platitudes and ideological orthodoxies, we may not be ready for what is coming. And it could prove fatal to us. I’m going to discuss an incident from the Cold War that provides a salutary lesson in this regard: we must learn to think differently, or die.
The specter of job ‘automation’ and the impact of dramatic improvements in the anthro-technology nexus have become staple fare in popular commentary over the last few years. Sam Harris has engaged with the dilemmas we will likely face with the advent of AI. The feats of Elon Musk and others, the role of bots in political processes, the embryonic emergence of quantum computing and the possibility of downloadable memories have stirred fascination, bemusement and even fear. For years, popular culture has been replete with projections of dystopian futures in which machines, our own technology, become the masters (or ‘Terminators’) of their human creators.
We can no longer take the likely limitations of our technological achievements for granted. In the 1960s, at the dawn of the computer age, who could have foreseen Instagram, Facebook or Twitter? Even by comparison with a few short decades ago, our information management, communication and even production capacities are beyond what anyone from the baby boomer generation could have thought possible. Yet, in popular fiction, our fear of the technical ‘other’ has traditionally been mitigated by the comforting notion that the plucky human protagonists would—and could—win out over the cyborgs, using the power of their frontal lobes.
The real danger is not that we might be subjugated by the Borg, or that ‘we’ might lose control to ‘them.’ The biggest threat comes from the integration of our technology with our primate-based decision-making capacities. When humans make decisions, we rarely employ the logical modality of game theory, which requires clear choices and optimal information. Decision-making, especially crisis decision-making, is usually messy: made under fire (metaphorical, at least), infused with emotion, and without all the relevant information to hand. Critical thinking is tough work. In practice, human reason suffers from what policy makers sometimes refer to as ‘bounded rationality.’
A test case involving the crucial intersection of human decision-making with cutting-edge technology in the context of an acute crisis has already occurred, with potentially devastating consequences. In that high-pressure crucible of man and machine, we almost failed to emerge with modern civilization intact. It occurred with breathtaking speed and the stakes had never been higher. It is telling that this profoundly important event, humanity’s closest brush with a self-created, technologically induced apocalypse to date, has not yet made it to big-screen blockbuster status. Perhaps it should. Perhaps then we could reflect on the implications of the fusion of the human and the technological in crucial decision-making scenarios.
Russian Roulette
In September 2017, documentary filmmaker Peter Anthony telephoned a Russian gentleman called Dimitri, asking him to pass along birthday wishes to his father. To his surprise and sadness, Anthony learned that the man’s father, Stanislav Petrov, had already passed away in May, at the age of 77. Stanislav Petrov is buried near his wife in a nondescript cemetery in an obscure suburb of Moscow, close to his home. There are no monuments, holidays or memorial days dedicated to him. His funeral was attended by only a few family members and friends. His story remains unknown to the vast majority of people on the planet, although Cold War scholars and security experts have probably heard of the incident in which he was involved. This is strange and telling, because Stanislav’s story is everyone’s story: the defining moment of his life affected every single person living at the time and born afterwards.
On September 26, 1983, Colonel Stanislav Petrov went on shift duty and took over command of Serpukhov 15, a top-secret early warning satellite installation, designed by the then Soviet Union to give the Soviet High Command advance notice of an American nuclear first strike. In the high-stakes world of Cold War nuclear strategy, when literally every human life on the planet was within the ambit of a terrifying war game, the Russians suspected that the Reagan administration was toying with the idea of testing Russia’s resolve in combat. The shooting down of a Korean airliner by a Soviet fighter a few weeks earlier had heightened the tension to a dangerous level. Ronald Reagan had addressed the American nation and condemned the Soviet regime. Preparations for Operation Able Archer, annual war games conducted by NATO along the Iron Curtain, roused collective Russian memories of Nazi Germany’s Operation Barbarossa in 1941. In that earlier ‘exercise,’ German units had morphed from training formations into combat troops and promptly punched a hole through the USSR’s frontiers, steamrolling their way toward major Russian cities, including Moscow. The Soviets had vowed never to be caught out like that again.
In 1983, under the paranoid leadership of former KGB officer Yuri Andropov, the USSR feared the worst. And prepared for it. US rhetoric about an ‘evil empire’ did not help. Nor did it help that, although the Soviets were wrong to believe the Reagan administration was considering a first-strike policy, such a policy had, in the past, been contemplated by the US military leadership. The infamous 1961 Burris Memo is evidence that a US first strike against the USSR, including the use of nuclear weapons, had been more than just a remote possibility. Kennedy had been briefed on this plan—proposed in 1957—by the then Joint Chiefs of Staff and the Director of the CIA in July 1961. Kennedy walked out of the meeting, ordered total secrecy about its subject matter, and incredulously commented to his Secretary of State, Dean Rusk, ‘And we call ourselves the human race!’
The advanced early warning system developed and deployed by the USSR in 1983 utilized state-of-the-art methods, including infrared sensing, which allowed the Soviet military to detect missile launches from the continental United States by their thermal signatures. In the event that this system detected a launch, it would alert the crew in the Serpukhov 15 bunker. Protocol demanded that the unit commander verify the authenticity of such a launch and alert Soviet High Command, who would very likely assume the worst and prepare a grim retaliatory response.
In his bunker on that fateful night, Colonel Petrov settled into his eight-hour shift. His crew had taken over at midnight. At 00:15 hours, scarcely fifteen minutes after the shift had started, the large monitor screen before him lit up with the word ‘Launch’ and Stanislav’s world, indeed everyone’s world, turned upside down. The system had detected a launch signal from the Midwestern United States. The decision that now confronted Colonel Petrov was arguably the most consequential ever to fall to a single human being. When the Cuban Missile Crisis unfolded in October 1962, President Kennedy assembled a team of his best advisers, took counsel from as many people as he felt could contribute (including elder statesmen like Dean Acheson), consulted, deliberated, waited, negotiated, deliberated and consulted. He strategized and analyzed. Kennedy’s dilemma played out over thirteen days.
Stanislav Petrov had fifteen minutes.
The system for which Petrov had command responsibility had a primitive automaticity: it would register a launch on the basis of specific signals. Upon launch, ICBM flight time from the US to the Soviet Union was approximately thirty minutes. Without the new system, Soviet conventional radar would only detect inbound missiles halfway through that flight time, as they approached Russian airspace. Ordinarily, that gave Soviet High Command only fifteen minutes to verify an attack and prepare a response. To gain time, the new system was designed to detect missiles from the moment of launch, by reading their thermal plumes, effectively doubling the time span for Soviet decision-making.
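A back-of-envelope restatement of that arithmetic, using only the figures above (a roughly thirty-minute flight time, with conventional radar acquiring the missiles at the halfway point), makes the stakes plain:

\[
T \approx 30 \text{ min}, \qquad
t_{\text{radar}} = T - \frac{T}{2} = 15 \text{ min}, \qquad
t_{\text{satellite}} = T - 0 = 30 \text{ min},
\]

where \(T\) is the missile flight time, \(t_{\text{radar}}\) the warning window under the old radar regime and \(t_{\text{satellite}}\) the window with detection at launch. The new system’s entire value lay in those extra fifteen minutes.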
At 00:15 hours, when Serpukhov’s main screen exploded into life, it was up to Petrov to figure out what to do next. Accounts are patchy, but it is clear that Stanislav sought verification. He informed his superiors that the initial launch signal was a false alarm, while his crew concurrently checked and rechecked the technology they were using. But more missile launches soon appeared on the screen. In total, the system identified five launches. His terrified crew wrestled against the clock to test the system for faults. Hurried diagnostics verified that the system was working perfectly. Logically, and in accordance with protocol, Colonel Petrov had a duty to inform Soviet High Command of the apparently authentic launch detection. There was only one conclusion to be drawn: the US was attacking the Soviet Union with nuclear missiles. There was only one realistic response: a retaliatory strike against the United States.
But Stanislav wasn’t buying it. Quite apart from an instinctive human unwillingness to set in motion a sequence of events that would send thousands of nuclear missiles through the stratosphere and cause millions of human casualties, something niggled at Petrov. Why only five missiles? If this was an all-out nuclear strike, the only viable form a US first strike could take, wouldn’t the Americans have filled the sky with their arsenal, in order to destroy Russia’s retaliatory capacity? He decided to do nothing. He waited. He flouted protocol. He placated his subordinates and ordered continuous system checks while the clock ticked down. He wanted verification of the launch from the conventional radar system.
Anthony’s documentary does a commendable job of interlacing a dramatization of these events with Petrov’s own riveting recollections of his thought process, recounted on camera to his young Russian translator as they sat in a diner (ironically) in the American Midwest, where the Minuteman missiles he feared might be traveling towards him that night still sit in their silos today. You can see it in her face as she listens: she and her generation can hardly believe that this kind of world existed before they were born.
But what is most striking is not captured by the documentary. Unlike most Soviet officers, who passed through standard military academies, Petrov had received a civilian education. How lucky for us, and for the hundred and fifty million or more Americans whose lives hung in the balance that night, that Stanislav thought through his decision. How lucky that he reasoned, lucidly and empathically, about the possible intentions of his remote adversaries. How fortunate that, even under a paranoid leadership, with patchy and incomplete information, in a context of ideological indoctrination and faced with unknown new technology, one man opted for restraint, skepticism and critical thinking.
The world slept peacefully through this breathtakingly close shave with a very real apocalypse, and it remained ignorant of the whole incident for years afterwards (most of it still is). The affair was buried by embarrassed Soviet officials until accounts began surfacing in the 1990s, after the Cold War had ended. Petrov was initially praised and then quickly rebuked by his superiors for not having kept a proper log of events. He was not honored for his real-time critical decision-making. He resigned a year afterwards, in 1984.
Had a ‘launch-on-warning’ system been in place, those of us unfortunate enough to survive might now be living on a radioactive planet. Even George W. Bush balked at that idea. But the incident points to problems with advanced technology that we are only now beginning to grapple with. The interweaving of human decision-making processes with such technology is profoundly perilous. Now that the technology itself can make decisions, we are definitively in uncharted territory.
Thinking at the Anthro-Technological Coalface
What’s at issue here is human reasoning in circumstances as new to us as that early warning system was to Colonel Petrov and his generation. Our only hope is that those operating at the coalface of new technologies, those who must make decisions that affect the lives of others, have a similar capacity to reason carefully, to reflect and, if necessary, to defy orthodoxy when faced with technologically infused crises. This presents a dilemma. Should we outsource elements of our decision-making to automated AI technologies? Should we place our confidence in such systems while our higher education sector remains manifestly flawed? Both secondary and higher education are decidedly patchy in delivering shrewd and meticulous thinkers to the workforces of the twenty-first century. Are we ready for what is coming?
As Stanislav Petrov’s case shows, it is often the least among us who end up shouldering the heaviest burdens and the toughest calls, at very short notice and within contracting windows of decision-making time. In banking, for example, some of the most highly educated people in our modern economies have demonstrated a capacity for egregious risk-taking and failures of critical decision-making. Increasingly, such decisions are not taken by humans alone. The ranks of ISIS are filled with educated engineers who, for all their technical proficiency, have bought into the eschatological nonsense of millennial religious teachings. Educated middle-class parents are eschewing life-saving vaccinations for their kids. And flat-earthers and creationists, with their medieval beliefs, are making a dramatic comeback. We have mastered profoundly world-changing technology just as a new dark age threatens. Are we as a species actually qualified to have access to the technology we have developed?
Our bulwark against this darkening of the modern mind should be higher education. It is here that orthodoxies should be least entrenched. It is here that young minds should be instilled with a questioning ethos, that the next generation should be schooled in how to think, and that differences of opinion and debate should be cultivated and nurtured. Instead, professors too often fail, or are unwilling, to challenge their students (or to allow others to challenge them), on ideological grounds. Orthodoxy, doctrinal purity and thought policing are no longer the preserve of medieval religious orders. They are prevalent now at Yale, Evergreen, Berkeley, Concordia and too many other institutes of higher learning in the West, even as we stand at the cusp of quantum computing and drone warfare.
The consequences of this willful stupidity and cloistered thinking will be magnified by our technological accomplishments. Stanislav Petrov will never be placed in that predicament again. But the chances are that someone else will be, as more sophisticated technology infuses ever more aspects of our lives and decisions. Petrov was a critical thinker, able to analyze patterns quickly and detect that something was amiss. At some point in the future, we may not have a Petrov to rely on. There is a chronic disconnect between our ingenuity in creating technology and the common sense required to use it without blowing ourselves up.

The ultimate bulwark against dystopia is also being eroded: our ability, indeed our willingness, to criticize. Thinking is not encouraged. Groupthink, in the age of cyber warfare, is potentially lethal. Consider that Stanislav lived in a closed society, which had no respect for freedom of expression, let alone the freedom of opinion of a serving military officer. We now live in societies in which free expression is under immense pressure, through disciplining, policing and the worst form of pseudo-intellectual mob rule. Ironically, it is under most pressure from those who should be defending it with the greatest vigor: the educators in our universities.
When contemporary and future decision-makers find themselves in a bunker with blinking lights, confronted by analogous dilemmas, under social pressure and with frantic activity around them, will they make the right call? Will they ask: why only five missiles?