Last month, Microsoft licensed GPT-3, a powerful artificial intelligence model built by OpenAI, a company co-founded by Elon Musk. The move has focused renewed attention on the technology’s creative potential. GPT-3 has already been used by amateur developers to produce poems, parodies, guitar riffs and even functional computer code. It seems we are marching toward an AI-powered cultural renaissance. But the same tool can also feed a propaganda machine. GPT-3 could seriously threaten democracy by enabling bad actors to prey on human psychological weaknesses. We cannot take the laissez-faire attitude we have seen with social media regulation: this groundbreaking technology demands thorough examination before it becomes part of our social fabric and our democracy.
Just Sophisticated Enough to Be Dangerous
GPT-3 has, for the first time, enabled computers to write texts that are “interactive, informational and influential.” It can produce text in styles and tones indistinguishable from human writing, and the results can be quite engaging. A post that reached the top spot on Hacker News came from a fake blog built with GPT-3; readers had no idea of the blog’s computer-generated origins. The Guardian published an opinion piece explicitly written by the AI, which was shared over fifty thousand times in two days. Given how cheap and convenient such content is to produce, it probably won’t take long for AI-generated text to dominate the internet.
While the recent OpenAI release was greeted with public excitement, the Middlebury Institute’s Center on Terrorism, Extremism and Counterterrorism warned that, left unregulated, the “successful and efficient weaponization” of this type of technology is “likely.” The inevitable information war preceding the 2020 election has made GPT-3 feel like a Pandora’s box.
Dario Amodei, VP of research at OpenAI, has acknowledged that such advanced models may be used to “generate misleading articles,” “impersonate others online” and “automate the production of abusive or fake content.” For this reason, OpenAI has released the tool as a centrally managed service, so that it can cut off malicious actors as needed. Over the longer term, however, it is unlikely that we can stop GPT-3 or similar technology from falling into the wrong hands, leaving us exposed to cybercrime, financial loss and threats to national security.
Ironically, it seems that GPT-3 itself agrees. In a widely shared Guardian article, the AI writes: “I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”
We Are More Gullible Than We Think
AI-generated content is dangerous because humans are far more susceptible to influence than we realize. A 1977 study revealed that repeated exposure to information increases our tendency to believe it. The researchers presented a group of college students with a mixture of true and false statements on three occasions, several weeks apart. The students’ confidence in a statement’s truthfulness rose significantly with each exposure, but only for statements that appeared multiple times. A subsequent study found that repeated exposure makes a piece of information seem more familiar, and thus more believable, even to people who previously believed the opposite. Research has also shown that after repeated exposure, people tend to forget where the information came from, so the credibility of the source has little impact on their ultimate beliefs. If we see something enough times, we will believe it. We are that gullible.
A Machine to Make People Believe Anything
It is hardly surprising to see this human vulnerability exploited by unethical groups. Political parties and advertising agencies have developed a threefold recipe for influence: target audiences, compelling content and effective distribution. First, they define the target audience and study its behavior to discover its habits, interests and triggers. Then comes the development of content: the more unusual, emotive and inflammatory, the better. Finally, distribution means doing whatever it takes to ensure repeated exposure: get on the platforms your audience uses and exploit their networks. Capture their eyeballs, and maybe their friends’ eyeballs as well.
This recipe dates back at least to ancient Rome, when Octavian turned the people against Mark Antony by strategically dispersing inflammatory claims about his relationship with the foreign queen Cleopatra. Messengers spread the allegations by word of mouth, and the news traveled like wildfire. People came to believe that Antony would turn Rome into a monarchy, and Octavian eventually took control as the emperor Augustus.
These tactics were also in play during Brexit, the US 2016 presidential election and the recent US midterms.
Old Tactics Supercharged by Social Media
Social media has automated effective content distribution: a tweet can reach millions of people in a split second. The attention-seeking dynamics of social media add another layer: through emotive language, disinformation gains a significant advantage over truth. A 2018 study, which examined data from millions of users spanning nearly the entire life of Twitter, found that fake news, especially fake political news, spread farther, faster and more broadly than true news. The researchers concluded that “lies spread faster than truth”: the truth takes six times as long to reach the same number of people.
The Weak Link Re-Engineered
Disinformation campaigns used to rely on humans to produce article-length content, and the high labor cost, long production windows and language barriers made this step difficult.
Now that credible content can be generated by a computer, the entire process can be automated. Millions of articles can be written effortlessly, and the social media machine will handle the rest. This flood of content will overwhelm the weak misinformation-flagging measures now under development, and our vulnerable sense of truth will continue to be challenged.
Act Now or Never
The government should take immediate action, similar to California’s law requiring automated bots to identify themselves, to promote transparency around AI-generated content.
We also need to take action as individuals to avoid spreading false information to our friends and families.
Before you hit the share button, ask yourself: How did I receive this piece of information? Does it come from a credible source? Have I read the whole thing? Is the language inflammatory? Would its claims survive a fact-check?
Ask those questions now—before it’s too late.