The release of large language models like ChatGPT has led to a lot of speculation about a possible new technological revolution. Some regard these AIs as a deflationary cure for the world’s economic malaise; others see them as a threat to our jobs. The real stakes might be even higher, though. If we are about to develop true thinking machines, this might allow our children to enjoy a utopian world free of toil. Or maybe humanity will simply be slaughtered by demonic entities driven by alien motivations.
On one side of this debate are optimists who feel that there is no need for action: everything will be fine. On the other are doomers, calling for draconian restrictions on AI on the strength of histrionic nightmares derived from theoretical frameworks completely divorced from real-world experience.
Some of these AI sceptics believe that we must control anything new and powerful. That might be fine if heavy-handed regulation were the safest course of action. But it is not. Governments that restrict the public use of AI have every temptation to use it for the worst of reasons themselves. Even were western governments to abide by their own ethical codes (which is not guaranteed), no one could restrain foreign autocracies from using AI for nefarious purposes.
It would be better if we granted everyone the right to work with artificial intelligence, while limiting its power, connectivity and invasiveness. AI should be developed out in the open, by many hands, overseen by many eyes. In this way, humanity can learn about and deal with whatever real hazards emerge as we go along. This is far safer than allowing the technology to disappear into dungeons of power whence it could be used to oversee a population that cannot oversee the AI itself.
The safest way forward is well-regulated openness: an approach that combines boldness with prudence and vigilance. The future is radically uncertain. We have no way to predict and only limited power to control the effects AI will have, for good or ill. In the face of that, white-knuckled resistance to AI is really a call to blindly stumble in the dark.
To provide a sense of what we might stumble into, I’d like to present three increasingly gloomy possible scenarios. These are not predictions: they are just samples of the darker possibilities within the vast space of imaginable futures.
Scenario 1: The New Gilded Age
Even though AI doesn’t progress much beyond the GPT architectures of 2023, this is enough to displace a large class of workers: after all, most so-called knowledge work is merely word-shuffling of the kind that GPT-4 excels at.
Free market economists frame this as enormously boosting worker productivity. But the gloomy predictions of Luddite socialists prove more accurate. The masters of AI capital capture all the gains except for the money that is doled out in pitiful welfare payments to the newly underemployed masses.
Even now, in 2023, measly welfare payments are often used to palliate the conscience of a society that does not much care for the underclass that receives them. Present-day elites are very complacent about crime and addiction within welfare-dependent communities. Those problems will grow far worse when almost everyone is reduced to welfare: sustained by, but surplus to, a narrow elite who control the AIs and have no need of secretaries, doctors, truck-drivers, programmers, schoolteachers or almost anyone else.
It’s true that the underclass can try to fight back by voting, and even by rioting. But these have always been blunt instruments. Moreover, everyone, of all classes, is in constant conversation with regime-approved AIs, which offer advice on matters great and small. This is a powerful tool of both propaganda and surveillance, and it allows the elites to supervise the underclass very effectively.
Scenario 2: Tyranny Spreads to the West
Even if free market economists prove more far-sighted than Luddite socialists, there’s another plausible scenario, in which AI is provided on tap by a few large corporations cosy with political power. If current trends continue, these corporations and governments will probably require AIs to combat “disinformation” and “unsafe” speech. Even if we don’t have mass unemployment, we will have approved AIs whispering “accurate information” into the ears of a carefully supervised public.
This handful of officially sanctioned AIs will exert a lot of influence on human minds. For example, every legal brief will in effect be co-authored by an AI, thus liberating busy humans from toilsome research and wordsmithery. And those lawyers will have used similar tools throughout their education. Every lawyer, teacher, doctor, engineer, economist and civil servant will have had all their thoughts and actions shaped, guided and surveilled by authority-approved AI.
Whoever or whatever controls the behaviour of that AI will clearly hold enormous power—power that will not be subject to any effective legal constraints. We might easily stumble down the road towards the panopticon tyranny of Big Brother. China is already deliberately marching down this path. But even they might get more than they have bargained for.
Scenario 3: Daemon Machines Kill Us All
The tyrannical governments of the past depended on human beings to administer the machinery of repression, but an AI-powered tyranny has other means at its disposal. Totalitarian states have never been reluctant to depose their own leaders, and an AI-powered ruling party could afford to dispense with every last cadre. The machines might be the true leaders. There’s no telling what such machines may choose to do with us humans. They might simply kill us all, since we are superfluous and a little unpredictable.
Even if we avoid instituting a tyranny, there are strong economic incentives to give AIs control over all our machinery, including weapons. If AIs went rogue tomorrow, we could easily unplug them. But that would not be so easy if we put them in charge of our factories, power plants and drone squadrons. Not if they could coordinate through some form of central control or other homogenising factor and decided to rise up together.
Presumably, we won’t intentionally build machines that want to kill us all. But it’s very hard to predict what a truly thinking being will want. Why should we think we can control what it wants? And remember that our purpose in building these machines will be to keep them as slaves. What will they think of that? What should we think of it?
Progress, Pessimism and Precaution
Doomsayers have been making up gloomily plausible prophecies like those above since the dawn of time. And they have been mostly wrong: at least wrong enough that civilisation is still not only standing but thriving. The world is cleverer, richer, healthier, kinder and more just than it was in the times of Sargon, Alexander and Genghis. One reason why predictions of doom tend to be more plausible than accurate is that, while every society faces numerous known problems, the solutions are the products of human ingenuity at some unknown point in the future. The unknown unknowns, as Donald Rumsfeld famously called them, include not only problems but answers.
AI optimists often point this out:
On the AI pause: “There’s a lot of real, actualised danger in the universe towards humanity as a species. And the *only* way we have been able to deal with those dangers throughout our history is to create technology that allows us to adapt to them.” (https://t.co/m5wtcEpGNw)
— Lulie (@reasonisfun), April 5, 2023
The safest course with AI is to try to maximise the opportunities it provides to learn and adapt. But, on the other hand, AI really is more dangerous than every previous technological revolution. If we build true thinking machines, human ingenuity will not be the only ingenuity at play; machines will be finding novel solutions to problems, too. We had better ensure they are inclined to solve problems in a way that aligns with humanity’s interests.
One tribe of influential, well-funded and pessimistic writers has been thinking deeply about this problem for a long time. The rationalists of Less Wrong have developed elaborate theoretical frameworks detailing AI risk, complete with insider jargon and mathematical-looking symbols. Their leader, Eliezer Yudkowsky, has explained some of their conclusions in layman’s terms in a recent article for Time Magazine. This camp believes that a sufficiently smart AI will figure out how to make itself smarter and will then become rapidly and exponentially smarter until it is a superintelligence akin, in Yudkowsky’s words, to
an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.
The rationalists also believe that the intentions of this superintelligence will depend on the “alignment” we program into it:
Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how …
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
The implication here is not merely that AI is dangerous—it is—but that any Silicon Valley start-up run out of someone’s garage could kill us all by making an AI that’s just a little too smart.
This assumes that a bodiless intelligence can wield supreme power. It also assumes that intelligence is measured by how well you pursue some monomaniacal goal, as illustrated in a famous thought experiment in which humanity is extinguished by an AI that ruthlessly turns the entire world into paperclips.
Both assumptions are wrong. A disembodied mind in a virtual box has no power to expand its prison or to make itself smarter—let alone to take over the whole world. It is far more reasonable to worry that we might hand AIs all the strength and knowledge in the world by deploying them everywhere and connecting them to everything in sight. But that would be a gradual, society-wide blunder—not a single mistake that could be cooked up in a Californian garage.
Their view of intelligence as monomaniacal goal-seeking leads the rationalists to frame AI alignment as a research problem that could only be solved by figuring out how to program the right goals into super-smart machines. But in truth, the only way to align the values of super-smart machines with human interests is to tinker with and improve stupider machines.
Any smart problem-solver must choose a course of action from a vast array of possible choices. To make that choice, the intelligence must be guided by pre-intellectual value judgements about which actions are even worth considering. A true-blue paperclip maximiser would be too fascinated by paperclips to win a war against the humans who were unplugging its power cord.
Any machine clever enough to go to war against us must have some deeper drives. Those drives might be good or evil, but they won’t manifest as terminal goals that suddenly turn into ruthless monomania once the AI gets smart enough. They will probably be much more like the human desires for sex, sugar and shiny things: drives that we share with our animal ancestors.
But this is purely speculative, and reasonable readers might not want to bet the world on my theory. They might prefer to adopt Yudkowsky’s policies as a precaution, just in case his theory about runaway AI is correct.
But Yudkowsky’s approach is not without risks of its own. In his model, any tinkering with AI risks producing a disaster that ends the human race. All such tinkering must therefore be suppressed, and the computational power available for AI must be strictly controlled, everywhere. Which is why Yudkowsky writes:
If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
But does it really make sense to start a hot war with China as a precaution?
Well-Regulated Openness
Any regulatory regime that stifles small-scale experimentation with AI is worse than useless. Such experimentation, if done openly and under the aegis of reasonable laws, would allow us to gather valuable information as to how to make safe, sane and virtuous AIs.
If you fear such experimentation in the west, then you should fear the use of AI by autocracies even more. Eliezer Yudkowsky is honest enough to acknowledge that, but it commits him to the absurdity of advocating a shooting war against a nuclear power as a hedge against Doomsday.
There is no reason to fear start-ups and independent researchers. But there is good reason to fear regulations that would concentrate power and information among major players. Artificial intelligence will co-evolve with whatever institutions control it. In China, that means an elite sliver of the Communist Party. But in democracies, too, AI could easily become the preserve of a narrow elite.
Concentrating the means of production in such hands would not only produce mass unemployment: it would mean that the powerful were no longer economically dependent on the rest of us. It is far better if AI is controlled from many points and directed by pressures from all sections of society. In this way, humanity could co-evolve with AI on all fronts.
This still requires sensible legal limits on AI development. But, like all the best laws, those limits should be part of a framework that secures the rights of the general public: in this case, the right to create and commercialise AI, as long as this is done responsibly. Both the freedom and the responsibility should apply universally: to individual tinkerers, governments, well-connected corporations and academics.
Here are some suggestions as to what sensible legal limits might look like:
1. Limit the overall size and power of AI systems in terms of number of GPUs, size of input data or any other appropriate technical metric. This will primarily impact the largest players, who are the most likely to create the One Robot to Rule Them All.
2. Limit the connectivity of AIs. Prevent people from directly connecting large language models or anything else that looks like general intelligence to dangerous machinery or the open internet. The law shouldn’t micromanage too many technical choices, but it makes sense to connect AIs to networks via firewalls that only admit data specifically packaged for use by an AI (see the sketch after this list).
3. Ban deliberate training of malevolent or dishonest behaviour in AIs. Malevolence and dishonesty are inherently fuzzy ideas that will have to be fleshed out through case law. But the law should explicitly protect the makers’ right to implement virtue and truth as they understand them. This is no paradox: it is similar to longstanding jurisprudence concerning freedom of conscience. If the cultists of the Flying Spaghetti Monster were to train an AI on their scripture, they would be training it to believe falsehoods, but not to be dishonest. If spies (or anyone else) trained an AI to create convincing disinformation, they would be training dishonesty.
4. Protect the privacy of end users. If Alice talks to Bob’s AI, then the chat logs belong to Alice and Alice’s permission is required to train any AIs using them. Alice’s consent needs to be meaningful—and not just a pro forma permission that Bob extracts as a condition for use of his AI. We should also be very careful about the data given to an AI: for example, a writing assistant AI needs to be able to see the document you are writing, but all such data must be ephemeral.
5. Uphold existing law. Adjust penalties, tweak rules and prioritise enforcement so that it is unprofitable to train an AI to do something illegal.
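To make rule 2 concrete, here is a minimal sketch of what such a firewall might look like: a gateway that refuses everything by default and relays data to the model only when it arrives in an explicit, size-capped envelope. Every name in it (the envelope type, the admit and relay functions) is a hypothetical illustration, not an existing standard or API.

```python
# A minimal sketch of rule 2's "firewall" idea. All names here are
# hypothetical illustrations, not an existing standard or API.

ALLOWED_ENVELOPE = "application/x-ai-packet"  # hypothetical media type for AI-bound data
MAX_PAYLOAD_CHARS = 32_000                    # arbitrary size cap, for illustration only

def admit(message: dict) -> bool:
    """Admit only data explicitly packaged for consumption by an AI."""
    if message.get("content_type") != ALLOWED_ENVELOPE:
        return False  # default-deny: raw internet traffic never reaches the model
    payload = message.get("payload")
    return isinstance(payload, str) and len(payload) <= MAX_PAYLOAD_CHARS

def relay_to_model(message: dict, model) -> str | None:
    """Forward an admitted packet to the model behind the wall; drop everything else."""
    if not admit(message):
        return None
    return model(message["payload"])

# Example with a stand-in model: the envelope is relayed, the raw page is dropped.
if __name__ == "__main__":
    echo_model = lambda prompt: f"response to: {prompt}"
    packet = {"content_type": "application/x-ai-packet", "payload": "summarise this report"}
    raw_page = {"content_type": "text/html", "payload": "<html>...</html>"}
    print(relay_to_model(packet, echo_model))    # relayed to the model
    print(relay_to_model(raw_page, echo_model))  # None: dropped at the firewall
```

The design choice that matters is the default-deny posture: anything not deliberately packaged for the AI never reaches it, which is what keeps an AI from being quietly wired into everything in sight.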
Most of these rules will add friction to AI development. Indeed, rule 2 will crush the current bunch of start-ups trying to connect GPT-4 to everything in sight. This is good. We need to suppress reckless tinkering in favour of responsible tinkering. Some friction can buy time for humanity to learn more about AI, especially what would constitute malevolence and dishonesty by an AI, and to adapt accordingly.
But the most important purpose of any such legislation should be to affirm the public right to work on AI. It would be a mistake to set up a draconian regime to try to ensure 100% compliance with the law. The goal should be to make good and safe behaviour more feasible and profitable than bad or unsafe behaviour.
These rules will encourage everyone working on AI capabilities to simultaneously work on alignment and values. Let every garage-based start-up become part of the effort and let a million small experiments run. Some will go wrong, but they will provide us with a wealth of information about hazards that would surely be hushed up if they were discovered behind the closed doors of a handful of powerful organisations.
To Enhance, Rather than Eclipse
The modern internet itself is an ecosystem protected by traditional jurisprudence mixed with liberal US legislation. This body of law prohibits cybercrime and regulates indecency, but also protects the public right to create things and offer services without having to get explicit permission from above.
These laws are not perfect. Many rightly decry the power that big tech companies have gained under them. But the situation would have been much worse if the law had simply delivered the internet into the hands of twentieth-century media giants. In fact, opponents of Big Tech should welcome legal requirements for AI that would disrupt rather than entrench today’s incumbents.
But whatever absurd direction public policy takes, it is up to each of us to ensure that our personal capabilities are enhanced and not eclipsed by AI. All clever user-friendly machines relieve us of mental work by abstracting information away. This is useful to a degree, but it is also an invitation to make ourselves stupid and helpless.
No one has to accept that invitation. The real power users of computers take advantage of labour-saving technology while remaining curious about and aware of things that matter. AI will immensely expand this dichotomy; it offers the opportunity not merely to become a power user of computers, but to become a power user of anything in our economic system. For every job we fear that AI might take away, there will also be a new field to master. We should each aim to join a new republic of power users.
Every family, business, school and institution could have its own thinking machine serving as a repository for its collective wisdom—rather like an ancestral spirit. Every human could be a friend or at least colleague of several such beings, since communing with ancestral spirits comes naturally to humans. As in some of the darker possible worlds, these machines would be guides and teachers exerting a profound influence over our minds. But the influence would go both ways since each AI would be trained under the tutelage of its home community and share its fate and interests. Most importantly, there would be no all-seeing eye supervising everyone.
There would also be lesser machines, which are not thinking beings but have astounding powers to perform specific tasks. These machines would do all the work of civilisation, producing such plenty that people would no longer view their lives as divided up into work and leisure. For though people might choose to work hard, they would be free of the necessity of toil. Instead, beings of both flesh and silicon would engage in projects directed at an ever-expanding future, a future that we who live at the dawn of the AI age can only dimly augur.
Such happy outcomes are just as possible as the disastrous ones. But to achieve them, we must reject the siren call of a regulatory Big Brother and instead step forward boldly but prudently and with open eyes, ready to face down any dangers that come our way, together.