The prevailing view that we’re safe from technological unemployment has a dubious basis. We will continue to create new jobs, but we can’t yet create new human capacities. The presumption that we are irreplaceable in the economy is reminiscent of egocentric illusions of the past.
Since the Industrial Revolution, people have had dystopian concerns about technology displacing workers and causing economic collapse. The first political movement to coalesce around this issue was the so-called Luddites of the early 1800s, and the fear has bubbled up again and again ever since. These eruptions are usually provoked by a new invention. Beginning around 2008, the debate heated up once again, this time incited by information technology: increasingly fast computer chips, the Internet, and a new wave of artificial intelligence systems.
Historically, technology alarmists have been wrong — or premature, if we’re being charitable. The prominent Depression-era economist John Maynard Keynes famously predicted that part-time work would be the norm by the early 2000s. In the 1960s a group of scientists and technologists wrote to Lyndon Johnson, warning him of the growing threat of “cybernetics.” But as the years went by, new industries kept popping up, creating new fields of work that offset the ones usurped by ever-evolving machinery. Apart from the Great Depression, employment was robust throughout the 20th century, so the alarmists’ argument came to be viewed as discredited and was largely ignored.
In this latest incarnation, unemployment concerns are being taken more seriously. Viscerally powerful technologies like self-driving cars have made some once-skeptical economists open up to the possibility. Prominent experts such as deep learning pioneer Andrew Ng and former US Treasury Secretary Larry Summers have added new credibility to this perspective. Even so, the majority of economists remain unconvinced. If technology were really having this effect, it’s argued, you would expect to see clear patterns (radically increased productivity, otherwise inexplicable unemployment) in the economic data. The consensus is that the numbers do not bear this out.
Regardless of which side is ultimately correct, the absence of data should not convince us either way. The Larry Summerses of the world are speculating about the future, about which there is no data, by definition. Technological unemployment is not inherently hard to measure once it has occurred; the dispute is over whether it will occur. So properly understood, this debate is strictly between two competing forecasts. And unfortunately, forecasting complex phenomena involves unavoidable uncertainty, which enables plausible theories to be generated for either outcome, hence the perpetual debate.
Economists generally argue that people concerned about technological unemployment are committing “the Luddite fallacy.” This fallacy is rooted in a failure of imagination: Luddites are too present-minded. History shows that new industries are constantly being created and are almost impossible to predict in advance. Mid-century typesetters and printers could not foresee that their grandchildren would be search engine optimizers and social media marketers. The economist Joseph Schumpeter called this process “creative destruction.” Embedded within Luddite thinking is a misconception about labor. Luddites wrongly view demand for labor as a finite lump (the “lump of labor fallacy”). If there is only so much work to go around and a cotton gin can now separate seed from fiber, a human will be permanently pushed out of the labor market — a zero-sum game. But this is mistaken, because the size of the economy is not fixed; new demands for labor are constantly emerging because human appetites are insatiable.

The prevailing view that we’re safe from technological unemployment has its merits, but it’s important to acknowledge that there is no mechanistic basis for this belief. The premise that humans will always be needed in the production process rests on the fact that technological unemployment hasn’t happened before — i.e., on precedent. But precedent isn’t sturdy ground, epistemologically: it tells you only that something tends to happen, not how or why. Many economists would argue that they do understand the mechanisms, but since economists can’t readily run controlled experiments on the macroeconomy, they are stuck in the purgatory of unverified theory, analogous to theoretical physics without experimental physics.
Economists are correct regarding the Luddite fallacy, but the rebuttal put forth is ultimately the right answer to the wrong question. The danger is not that new jobs will stop being created. The looming danger is that technology is colonizing not jobs, but human capacities. Demand for labor may not be a finite lump, but the human capacity to do work certainly is. Our bodies and minds are not infinite in ability. We cannot dig like a backhoe or calculate like a calculator. And technologists are aiming directly at our most basic abilities.
In this way, humans and machines are in a zero-sum contest as the Luddites feared, just not in the way they feared. Unfortunately, in this competition, machines have all the advantages. Fundamental human capacities — such as language or bipedal locomotion — change via genetic evolution, which moves at a glacial pace compared with technological development, which advances by the decade. Putting aside the possibility of transhuman self-modification, humans are essentially static in this competition, and economic actors have every incentive to create these technologies as quickly as possible.
Businesses do not optimize for maximum employment of humans; they optimize for the interests of those who own and control them. There’s no systemic bias towards using humans to produce goods and services. From a business’s point of view, we are a means to an end: humans are employed because they are needed, and if new, better options show up, the business will go with those. In a competitive economy, an arms race dynamic is the inevitable result, driving the adoption and development of these technologies, even in the absence of any societal intent.
General-purpose technology is having an underappreciated renaissance. Take vision. A human can take in high-resolution visual data and make sense of it with their brain’s visual cortex. Capable visual data sensors have been available for some time, but only recently have systems started achieving human-level object recognition. This ability is used in a wide array of jobs. Computers will soon surpass human performance in vision and we’ll never be able to regain that territory. What is true for computer vision is also true for voice recognition, reading, synthetic speech, and even writing. Robots have advanced to the point where they can not only walk but do backflips.
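To make the vision point concrete, here is a minimal sketch of how accessible near-human object recognition has become: a few lines of Python using an off-the-shelf, ImageNet-pretrained model. This is an illustration, not anything from the original argument; it assumes PyTorch, torchvision, and Pillow are installed, and “photo.jpg” is a placeholder path.

```python
# Minimal sketch: off-the-shelf object recognition with a pretrained model.
# Assumes: pip install torch torchvision pillow; "photo.jpg" is a placeholder.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT        # ImageNet-pretrained weights
model = resnet50(weights=weights).eval()  # inference mode, no training needed
preprocess = weights.transforms()         # matching resize/normalize pipeline

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)    # shape: (1, 3, H, W)

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

# Print the five most likely object categories and their probabilities.
top5 = probabilities.topk(5)
for prob, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {prob.item():.1%}")
```

The point is not the specific library but how little effort the capability now requires: what was a research milestone a decade ago is now an import statement.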
Technologists and economists steeped in these domains tend not to dispute that technology is encroaching in this way. What they do say is that it’s nothing to worry about, because humans will just move up the value chain: bots will do the grunt work and we’ll be left with the jobs that require emoting and creating. We’ll be promoted from the brain to the heart. They point to the ballooning elderly population and its ample need for nurses and caregivers. They point to life coaches and yoga teachers, Etsy artisans and podcasters. Technology will enable us to focus on what we do best as a species.
This is all fairly reasonable and seemingly hopeful. These areas are in fact harder to automate and will at a minimum take longer to develop. Humans often prefer to interface with other humans; there’s a reason massage chairs haven’t replaced masseuses. But trying to reach full employment through this shrinking island of remnant capacities is a tall order. The global economy needs to employ some 3 billion people to count as fully employed. Even if technology never fully succeeds in these limited domains, there’s no reason to be confident that full employment is possible under these circumstances. You may be thinking: “Wait, we had the same limited imagination when everyone was a farmer!” It’s true that people were incredulous that there could be sufficient alternatives to farming in the early 1900s. But again, that concern was rooted in a misplaced focus on jobs.
The dilemma is even greater than it may appear. The workforce doesn’t have to be wholly displaced to cause many of the worst consequences — the United States would be in big trouble at a mere 20% unemployment. On top of this, in order to sustain consumer demand, income levels must be preserved. Warren Buffett does not buy 10,000 sofas. The economy cannot be based on billionaires selling each other islands. Our economy requires everyone to consume a high volume of lower-priced goods. Increasingly efficient technology will put downward pressure on wages in addition to threatening jobs outright. If mass consumption craters, everyone suffers.
I believe the tendency to dismiss technological unemployment out of hand stems from what I’ll call the Copernican bias: believing a proposition chiefly because it’s species-serving, exaggerating the power and importance of humans. Of course the namesake example is the assumption that the sun must orbit us, not the other way around. Our initial hypothesis is warped to accommodate the desires of our ego. The same psychological tendency is operating on us when determining our place in production. Our implicit belief is that we must be the dominant species of worker in the universe — the apex laborer.
Following the pattern of egocentric illusions that came before, this belief is plainly false. Humans are not the permanent linchpin of production. We’re just one laborer in the possibility space of laborers. The economic phraseology dividing production into “capital” and “labor” further obscures this truth. We don’t like to group ourselves in with lifeless machinery, just as we don’t like to see ourselves as another member of the animal kingdom. We are “people”; we are “laborers.” But in the end, machines do labor, and these constructs do not capture a fundamental difference. We might even go so far as to think of living organisms as “natural technology.” And as we’ve learned from the endless iPhone iterations, no technology is the final, apex technology. Technology is generational, incrementally improved and sometimes thrown out altogether, as happened with horses — another natural technology whose best days are behind it.
All of this does not necessarily mean that technological unemployment is inevitable. But it is certainly consistent with the laws of physics, and there’s good reason to think it will in fact happen, and soon. In the absence of data, I suggest that the default hypothesis should be flipped: the logic behind technological unemployment is stronger than the status quo position of mainstream economics. Our current system, which distributes income and purpose largely through jobs, is incompatible with that reality. As any reader of history knows, radical societal transformations frequently result in a period of immense turmoil. Economic depression, riots, elites fleeing to their bunkers and New Zealand hideaways, demagogues, and even war could await us. This is a no-win situation, giving everyone, poor and rich, an incentive to avoid it, regardless of ideology. If this phenomenon comes to pass, many questions — both philosophical and practical — will have to be answered, and we will need time to answer them responsibly. As we sail along into the uncharted future, a lack of precedent should offer little solace. After the election of Donald J. Trump, it should be easier for all of us to believe the old saying — there’s a first time for everything.
It’s probably too late for you to see this, Mr Wilson, but it is a very interesting article you’ve written here. Having seen Robin Hanson’s “Age of Em” idea, I’ve been trying to think of what we might do if even “emotional labour” were automated. You could hold aptitude competitions, seek to better yourself, I guess.
Nonetheless, the need to provide for ourselves, to work, would be virtually nonexistent, if I understand this correctly. In such an environment, people who want to just consume will be perfectly adapted to the prevailing conditions, but everyone else might consider suicide. Maybe that’s crude, but I can’t see how we’d cope. The biggest morale boosts I’ve ever had have come from work. For me, living off glorified welfare (including UBI) could be mind-rotting, even fatal.
*shrugs*
Thanks for your article. It seems Andrew Yang’s 2020 presidential platform regarding UBI (Universal Basic Income) is a promising answer to job-usurping AI technology, which is progressing at a fast pace.
Nice article. I would be curious to know your thoughts on the ‘bullshit jobs’ thesis proposed by David Graeber. If he is correct, we would keep inventing useless jobs for people to do. I’m not entirely sold on it, but the book was interesting nonetheless.