Many political and economic forecasters use the beginning of a new calendar year to make predictions about what will happen during the Earth’s next full cycle around the sun. Yet every year we are surprised—from the emergence of Covid-19 to the fall of Kabul, events rarely play out the way we expect them to.
As a former UK intelligence officer, I am deeply familiar with the limitations of our ability to forecast the future—and with the problems that those limitations can cause. When I served in Afghanistan, my priority was gathering intelligence that could enable us to determine who was planting IEDs that targeted coalition troops—and where they were planting them. This task was made more difficult because our knowledge—both of the battlefield and of the enemy—was fragmentary and incomplete. We often felt like the group of blind men in the ancient Indian parable who encounter an elephant for the first time and are asked to describe it: one of them feels only the elephant’s tail, another only its trunk, and so on, and they end up confidently providing very different descriptions of what an elephant looks like.
My colleagues and I operated in a single Afghan province, Helmand. Our movements were limited by the rugged nature of the terrain and by the presence of enemy insurgents. Our enemies, by contrast, could move freely across the country and over the border into Pakistan. We were further limited by how little we understood the language and the culture. My unit specialized in human intelligence: that is, information gathered covertly from people in the local community—ideally within groups of enemy combatants. When we evaluated the usefulness of their information, we had to consider three questions. First, what was their motivation? Was it financial? Were they trying to use us to get rid of a rival? Or were they ideologically motivated? Second, how good was their access? Were they relaying first-hand information or a second-hand rumour? Third, how competent were they? Could they accurately remember and report what they had observed? The intelligence we gathered also needed to be integrated with intelligence from other sources, such as other agencies’ covert operatives, phone and email records, satellite imagery and public reportage.
Information gathering in everyday life doesn’t typically work like this. Most people tend to get their information from only a few sources—and rarely from sources that present conflicting perspectives or originate in countries other than their own. Thus, most people, like the blind men feeling the elephant, only get a small part of the picture. If you want to get a more accurate picture from what you read or listen to, it makes sense, at a minimum, to ask yourself the three questions we asked when we evaluated intelligence sources in Helmand province: what is the writer’s (or speaker’s) motivation, how good is their access to this information, and how much expertise do they have on the topic? In Helmand, even once all the different agencies and coalition partners were sharing intelligence, the next challenge was to navigate the many thinking errors we all make when analysing information and making decisions.
It is helpful to be aware of the ways in which our common psychological biases and heuristics can lead us to inaccurate conclusions, especially in today’s increasingly polarised yet increasingly interconnected world—in which people often fail to agree on even the basic facts of an issue. As intelligence analysts, we tried to navigate around these pitfalls, many of which were identified by the psychologists Daniel Kahneman and Amos Tversky in the 1970s.
One common psychological bias that we struggled with was the so-called fallacy of composition, which is illustrated by the parable of the blind men and the elephant. People tend to assume that, when they have information about one part of an issue, they understand the whole issue. For example, we were exposed to only a few examples of what members of the Taliban thought, but we tended to assume that all Taliban members thought the same way. Specifically, we assumed that the goals and objectives of one local Taliban subgroup would be the same as those of the larger group—when in fact the goals and objectives of different subgroups varied. Similarly, in everyday life, people tend to succumb to the fallacy of composition when they make generalisations about people who belong to groups to which they have very little exposure. This is an especially easy error to make because the media, knowing that extreme views are more likely to capture audience attention, tends to platform the most extreme group members.
Another cognitive trap we found hard to avoid in Afghanistan was present bias. People tend to give stronger weight to payoffs or losses that they expect will occur sooner rather than later. This may be even more tempting in war, when the losses could be the lives of your fellow soldiers. So in Afghanistan, we tended to focus more on finding information about a possible imminent attack on a single patrol than information that could eventually lead to the prevention of multiple attacks on multiple patrol groups in the future.
We also had to fight the human tendency to assume that tomorrow will be much like today, which made it difficult for us to anticipate key changes in our enemy’s strategy even when we had some clues that such a change might be imminent. It was tempting to dismiss such clues because they didn’t fit what we knew about the current situation. Pointing out that things are about to change risks offending those who developed previous assessments. Agreeing with the prevailing opinion is a safer option for any single intelligence officer. And yet circumstances change more often than they stay the same.
Perhaps my favourite of these pitfalls is slothful induction. This is the tendency to reach a conclusion that is contrary to the evidence simply because it is easier than making the effort to absorb and process the evidence. For example, even when there is strong evidence that a politician has deliberately lied, if she insists that she was only mistaken, many people will simply choose to believe her. Another tendency that we all share is confirmation bias: we search for information—and recall it—in a way that confirms our existing beliefs. Today’s internet search engines make this tendency worse because their algorithms direct us to information that fits with what we have already shown we believe.
Another pitfall, identified during the Vietnam War, is the so-called McNamara fallacy (named after Robert McNamara, the US secretary of defence at the time). McNamara decided that the key metric for measuring the success of US war efforts should be the enemy body count. He focused on that metric partly because it could be quantified fairly easily, but he paid too little attention to variables that could not be quantified, such as the feelings of the public back home and the people of Vietnam. Variables that cannot be easily measured are often ignored. As a result, the body-count metric started to become the mission: commanders were so focused on enemy kills that they paid too little attention to other key variables that, it turned out, would decide the outcome of the war, such as the rules of engagement and evidence of people’s growing affiliation to the Viet Cong and dissatisfaction with American tactics. Similarly, during the 2000s, the US secretary of defence, Donald Rumsfeld, was determined to prosecute the so-called war on terror by analysing measurable data and setting clear objectives. His approach caused commanders in the field to fixate on meeting arbitrary deadlines and turning metrics green on PowerPoint slides—instead of analysing harder-to-quantify variables related to tangible success on the ground. The McNamara fallacy reminds us that, when we fail to take the whole context into account and to focus on the right data, our predictions about the future are much less reliable.
Yet another pitfall is the so-called affect heuristic: people tend to let their likes and dislikes determine their beliefs about the current and future state of the world. For example, our political preferences tend to determine which arguments we find compelling. And we tend to assume that our own limited, subjective experience points to absolute truths, while dismissing the idea that other people’s different limited, subjective experiences may provide a key piece of the puzzle. This heuristic works in tandem with the human temptation to indulge in magical thinking—imagining that things will turn out as we want even though we can’t explain how. This tendency was noted by David Omand, the former head of Government Communications Headquarters (a UK intelligence service), as one of the most common reasons for failing to accurately predict developments, and many government officials have recently fallen prey to it in their slow response to the Covid-19 pandemic.
Knowing about these glitches in how we think does not make us immune to them. But consciously checking our thinking against them can help. Nevertheless, even if a forecast is grounded in a faultless analysis, it is still an exercise in speculation. This is especially true when trying to predict something as complex as the future geopolitical or economic environment. The war zone in which I worked, while not a closed system, nevertheless had many fewer variables than the world at large. And the number of variables that can affect the future has increased as the world has become increasingly interconnected, since information travels ever faster and new technology is constantly evolving. All this makes the systems that we want to analyse increasingly complex and unpredictable.
In complex systems, tiny differences in initial conditions can lead in unpredictable ways to large differences in outcomes over the long term: this is popularly known as the butterfly effect (shorthand for the idea that a butterfly perturbing the air by flapping its wings in Peru could theoretically influence the weather so radically that it causes a typhoon in Japan). The effect was identified in 1961 by the mathematician and meteorologist Edward Lorenz. One day, he was exploring how weather patterns develop by entering variables into a computer simulation. He decided that he wanted to see a particular sequence of data again. To save time, he started that sequence from a point in the middle of its run, by entering the data that the first run had generated at that point. To his surprise, re-running the simulation this way produced a radically different weather outcome. Lorenz realised that this had happened because the computer program was set to round numbers off to six decimal places (so-called 6-digit precision), but the printout of the results that he had used to restart the sequence rounded numbers off to three decimal places. (For example, when the computer analysed a value of 0.999527, it printed it out as 1.000.) Lorenz had not expected this tiny difference to significantly alter the outcome. And yet, as he put it, “the approximate present does not approximately determine the future.”
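The effect Lorenz stumbled on can be reproduced with any chaotic system. The sketch below uses the logistic map, a standard chaotic toy model (not Lorenz’s actual weather equations), to show how an initial value truncated to three decimal places soon yields a completely different trajectory:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x). This is an illustration of Lorenz's
# point, not his weather model: truncating the starting value to
# three decimal places mimics his rounded printout.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

full = logistic_trajectory(0.123456)   # "6-digit" initial condition
rounded = logistic_trajectory(0.123)   # same value, rounded to 3 decimals

# The two runs start less than 0.0005 apart, yet the gap grows roughly
# exponentially until the trajectories bear no resemblance to each other.
worst_gap = max(abs(a - b) for a, b in zip(full, rounded))
print(f"initial gap: {abs(full[0] - rounded[0]):.6f}")
print(f"largest gap over 50 steps: {worst_gap:.3f}")
```

Running this shows the “approximate present” failing to approximately determine the future: a rounding error in the fourth decimal place ends up dominating the outcome within a few dozen steps.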
Lorenz’s discovery is a key component of what came to be called chaos theory, which posits that, although the outcomes of complex systems can seem random, they are deterministic, and contain underlying patterns and feedback loops. Detecting these can help us forecast more accurately. How far in the future we can feel confident making predictions depends on how accurately we can measure a system’s current state, how rapidly the system’s dynamics tend to change, and how much uncertainty about the forecast we are willing to tolerate. Just as meteorologists can only forecast that there is a certain percentage chance of rain, intelligence professionals can only forecast probabilities—even though their governments (and the general public) want certainties.
Modern communications technology has radically increased the speed at which the global geopolitical and economic systems can change. For example, governments often make foreign policy announcements on social media, and the three or four billion people who use social media platforms have instant access to that new information, which potentially changes their behaviour. This can instantly invalidate even very short-term predictions. The 24-hour news cycle spins at such a dizzying speed that, by the time we analyse the potential impact of one event, our predictions have become outdated: stories about that event disappear, and the next hyperbole-laden headline is thrown into view. And that’s just the tip of the iceberg. Many of the variables that influence human affairs are machine-learning algorithms so complex that they have become black-box systems. Add to that our polarised political environment, in which people have trouble even agreeing on which facts are real, and clearly today we all live in Lorenz’s “approximate present.”
Many people hope that uncertainty about the future can be reduced by crunching big data with increasingly powerful and rapid computers. Just as our ancestors searched for answers in the sky, we search for answers in the digital clouds. But when we throw large amounts of data into a computer, spurious connections and correlations will emerge. To forecast with any degree of confidence, we still need to provide the algorithm with the right parameters, understand the larger context and ask the right questions. We cannot simply rely on artificial intelligence to identify the signal amid the noise, because it lacks our fuller understanding of the wider international context and the complexity of human behaviour.
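The spurious-correlation problem is easy to demonstrate: given enough random candidate variables, some will correlate strongly with any target purely by chance. A minimal sketch with entirely synthetic data (illustration only):

```python
# With 1,000 random "predictors" and one random "target", the best
# predictor will look impressively correlated -- despite every series
# being pure noise. Synthetic data; standard library only.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

target = [random.random() for _ in range(20)]  # 20 "observations"
candidates = [[random.random() for _ in range(20)] for _ in range(1000)]

best = max(abs(corr(c, target)) for c in candidates)
print(f"best of 1,000 random predictors: |r| = {best:.2f}")
```

The winning correlation looks strong, yet it predicts nothing: without the right parameters, context and questions, an algorithm cannot tell this noise from signal.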
I was lucky that, when I was providing intelligence to the soldiers in Helmand, they wanted to use my information to help determine next steps. In today’s more politically polarised environment, when new facts emerge that are likely to influence future events, it’s becoming harder to convince policymakers and the public to use that information to change course. For example, forecasts about the possible negative impacts of Brexit are likely to be ignored by Brexit supporters, while forecasts of its possible positive impacts are likely to be ignored by Brexit opponents.
The Covid pandemic has highlighted the degree to which expert forecasting is often ignored. This is partly due to the communications strategies of populist governments: they benefit politically by discrediting experts who question the wisdom of their policies. And it is partly due to people’s awareness of experts’ failure to predict certain major, high-profile events, such as the lack of WMD in Iraq, the 2008 financial crisis and the fall of Kabul. It is also partly due to a misunderstanding of how intelligence works. Most people understand that weather forecasts are helpful even though they are probabilistic and therefore sometimes inaccurate, and they understand that it’s easier to predict tomorrow’s weather than next week’s. But many people fail to understand that intelligence forecasts are helpful for the same reason, even though they too are probabilistic (which means that low-likelihood events are bound to happen sometimes) and even though they too are better at predicting events in the short term than in the long term.
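The weather analogy can be made concrete. A forecaster who assigns a 10 per cent chance to an event is not “wrong” when it happens; over many such forecasts, the event should occur about one time in ten. A small synthetic simulation of a well-calibrated forecaster:

```python
# A well-calibrated "10% chance" forecast, repeated 10,000 times.
# Each forecast concerns an event that truly occurs with probability
# 0.10 -- so low-likelihood events are bound to happen sometimes.
# Synthetic simulation, for illustration only.
import random

random.seed(1)

forecast_prob = 0.10
n_forecasts = 10_000

# Count how often the "unlikely" event actually occurred.
occurred = sum(random.random() < forecast_prob for _ in range(n_forecasts))
rate = occurred / n_forecasts

print(f"events that occurred despite a 10% forecast: {occurred} ({rate:.1%})")
```

The observed rate lands near 10 per cent: each individual occurrence looks like a forecasting failure, but collectively they are exactly what an accurate probabilistic forecast predicts.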
The public (and even we experts) would benefit from better understanding the methods and limitations of modern professional intelligence forecasts, accepting their inherent uncertainty and being more open to unwelcome information. These forecasts help us briefly lift our eyes from the chaotic present to survey the far horizon, plan for possible futures and navigate the flood of information in which we are all drowning. And they do get many predictions right. It would be a tragedy if the public were to simply ignore intelligence forecasts. Today, when old certainties are fracturing, new technologies are proliferating across our multipolar world and we face new and emerging threats from adversaries we do not understand well, ignoring the possibilities to which intelligence forecasts alert us could have serious consequences.