According to a recent National Bureau of Economic Research report by Daron Acemoglu and Pascual Restrepo, 50–70% of changes to wage structures in the US over the last four decades can be attributed to the automation of routine tasks. Numerical control machines, industrial robots and specialized software have been replacing both blue-collar and clerical workers.
Like something out of science fiction, artificial intelligence (AI) has been infiltrating all sorts of fields: medicine, accounting, transportation, food services. AI can even generate human-like text. The autoregressive language model GPT-3 can produce news articles that human evaluators cannot differentiate from human writing.
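To make "autoregressive" concrete, here is a minimal sketch of how such a model writes text one token at a time, each prediction conditioned on everything generated so far. It assumes the Hugging Face transformers library and uses the openly available GPT-2 as a stand-in for GPT-3 (which is only reachable through OpenAI's API); the prompt and sampling settings are purely illustrative.

```python
# Minimal sketch of autoregressive text generation (GPT-2 as a stand-in for GPT-3).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Recent research on bullshitting suggests that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 60 new tokens; each one is predicted from the tokens before it.
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```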
In mid-2020, Paul Yacoubian, co-founder of CopyAI, sent me a video demonstrating the power of GPT-3. He had fed the thesis of the academic paper “Bullshit Makes the Art Grow Profounder,” on which I am a co-author, into the AI, which produced an impressive short article on bullshitting (citations and all). The model wasn’t simply collecting and stringing together sentences from the internet. Instead, within seconds, it was generating new creative content. It seemed silly—and humbling—that this paper had taken our research team months to write and edit.
In November 2022, OpenAI released ChatGPT. This language model interacts with users conversationally, retaining information from earlier exchanges and providing follow-up responses. I have personally been using this resource to accelerate the process of generating study notes on the anatomy of the human brain. But with each new question I put to the AI, I find myself becoming increasingly anxious that my own field of study may become redundant.
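For readers curious about what retaining information means in practice, the sketch below shows the usual pattern: the full message history is sent back to the model with every request, so a follow-up question is answered in context. This is a minimal illustration, assuming the openai Python package (v1 client), an API key in the environment, and purely illustrative prompts and model name.

```python
# Sketch of a two-turn conversation: the model sees the whole history each time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "system", "content": "You are a concise neuroanatomy tutor."},
    {"role": "user", "content": "Summarise the main functions of the hippocampus."},
]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up is answered in context only because the earlier exchange is resent with it.
history.append({"role": "user", "content": "How does that differ from the amygdala?"})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(second.choices[0].message.content)
```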
The term artificial intelligence was coined in 1955 by John McCarthy, then a professor of mathematics at Dartmouth College. McCarthy described AI as "the science and engineering of making intelligent machines." Today, AI refers both to the intelligent machines that are its goal and to the science and technology used to develop them.
In a wide range of fields from heart surgery to code writing, AI will one day outperform humans—and probably soon. But human workers will not be entirely redundant. As Jamie Merisotis has pointed out, there are human interactions that, by definition, AI cannot replace because they require qualities such as empathy, compassionate communication and ethics. Some fields will always require the human touch.
In addition, the public perception of AI—how people feel towards it—may be an important determinant of its prevalence in our day-to-day lives.
In one study, researchers gauged perceptions of AI among two groups of participants: members of the UK general public who attended “a public national science museum event” and students from the Science and Engineering Faculty at Manchester Metropolitan University. While 52% of the university students would trust that a car was safe to drive, based on a digital MOT (the UK’s annual vehicle safety test) alone, only 37% of the general public echoed this sentiment. A similar trend was observed in the two groups’ perceptions of the risks of relying on a medical diagnosis produced by an AI system, and of being falsely identified as a hacker by AI. However, both groups viewed the act of giving AI control over cybersecurity (e.g., keeping devices safe from hackers and deleting malicious emails) as entailing only a medium level of risk. Both groups viewed the risk as greater when the consequences of bad decision-making were both personal and serious (e.g., when they involved life-or-death scenarios).
The researchers found that the general public felt uneasy about being unable to keep up with the latest developments in AI research. They were also more concerned than the students that they might be unable to understand the decision-making processes these systems use. The differences the researchers observed between the two groups might be attributable to the fact that science students tend to be better informed about AI than the general public.
A 2018 Pew Research Center study explored Americans' attitudes towards algorithmic decision-making in finance and in video job interviews. Two-thirds of Americans disliked the idea of computer algorithms making important decisions. Their concerns included violation of privacy, unfairness, the removal of the human element from important decisions and the inability of algorithms to capture human nuance.
But attitudes toward AI vary across demographic groups. Older people prefer to receive their news from humans and are less likely to agree that algorithms are bias-free. Older people also attribute greater risk to automated decision-making and find it less useful. Women also find algorithmic decision-making far less useful and attribute slightly more risk to it than men do.
In another study, the researchers explored perceptions about AI decision-making in various contexts. They found that people are generally ambivalent as to its usefulness and fairness, and have concerns about its potential risks. However, when they examined people’s perceptions of AI fairness, utility and risk in specific contexts—such as media, public health and justice—automated decision-making was perceived to be on par with—and sometimes even better than—expert human decision-making.
When they looked at people’s perceptions of AI in specific scenarios (as opposed to exploring their general attitudes), the researchers encountered positive attitudes. In high-impact decisions, automated decision-making was favoured over the use of human experts. In domains such as justice and health, human experts were perceived to be less fair and less useful than AI. In high-impact scenarios, automated decision-making was perceived to have a lower risk than entrusting the same decisions to human experts. When it came to low-impact decision-making, there were no differences in perceived risk, fairness and utility between human experts and AI, with the one exception of justice: an area in which human experts were perceived to be slightly more useful than AI.
So, research on public perceptions of AI has provided mixed results. It seems that attitudes vary depending on the individual and the context. Many researchers are currently studying which factors predict trust in AI, and how to enhance positive perceptions of this technology. Perhaps one day this research could be applied to the project of inspiring more favourable public attitudes toward AI.
Many people have opinions about the rapid advancement of AI, including the Pope. At a 2019 conference, he warned:
If technological advancement became the cause of increasingly evident inequalities, it would not be true and real progress. If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.
Indeed, the evolution of AI presents many ethical dilemmas: we should be alert to the threats of widespread unemployment, surveillance capitalism, data harvesting and autonomous weapons, as well as to challenges to academic integrity. But perhaps the most ambitious AI venture is transhumanism: the physiological, intellectual and psychological enhancement of human beings, which could allow us to transcend our human nature altogether. As Pamela McCorduck writes in her book Machines Who Think, perhaps "AI began with an ancient wish to forge the gods."
Decades of research have highlighted the limitations of human decision-making. Unlike artificial intelligence, we tend to make satisfactory decisions rather than perfectly rational ones. Rather than holding out for the optimal choice, we settle for an alternative that meets our minimum needs: we pursue what is enough rather than what is ideal. I find this endearing. But is this a timeless and unwavering feature of the human condition? I don't know. Advances in AI may help us find out.