An opinion that corn-dealers are starvers of the poor, or that private property is robbery, ought to be unmolested when simply circulated through the press, but may justly incur punishment when delivered orally to an excited mob assembled before the house of a corn-dealer, or when handed about among the same mob in the form of a placard.
―John Stuart Mill
Parler has been censored by Apple and Google; Facebook has shut down Ugandan government officials’ accounts; Twitter has removed 70,000 accounts allegedly linked to QAnon; President Donald Trump has been suspended from Twitter, Facebook and YouTube. These are among the paradigmatic examples of digital oligarchs―from Amazon, Google, Twitter, Apple, Facebook and YouTube―wielding their enormous power to control what is discussed in the online public sphere and political communication in general. In some cases, the decision to permanently suspend an account rests with the vote of a single person in Silicon Valley. That even government officials are at the mercy of Big Tech entrepreneurs raises two questions: what does freedom of speech mean, and who should regulate online speech?
Political theorist Teresa Bejan has outlined two competing conceptions of free speech in Ancient Greece. The first, isegoria, refers to equal social or political rights to public speech. In Athens, the social aspect of this meant that everyone―including slaves, metics and the poor―had an equal right to express themselves in social gatherings; and, in the political sense, it meant equal rights to address the assembly. Steps were taken to ensure that everyone participated in public life: for example, the poorest citizens had to serve as jurors or were paid to attend the assembly. The second conception, parrhesia, is a right to say anything one pleases, whenever one pleases: it is a licence to offend.
Disagreements over free speech are intimately connected to the discrepancies between these two notions. Consider Cambridge University’s decision to rescind Jordan Peterson’s visiting fellowship in 2019. Peterson’s opposition to anti-bias training and to the legalisation of self-chosen gender identities is intolerable to some, who consider it non-inclusive and a denial of the equal rights of certain groups. The University of Cambridge spokesperson noted that Cambridge “is an inclusive environment and we expect all our staff and visitors to uphold our principles. There is no place here for anyone who cannot.” This is an appeal to isegoria: the argument is that racism and transphobia infringe certain people’s equal rights to public speech, and we must therefore censor and no-platform those who express such views. But for Peterson and his supporters, no-platforming or censoring a public speaker because of his controversial opinions is an affront to free speech, since, in a democracy, we should be able to air our views in public no matter how offensive or distressing they may be. Free speech, in this context, is a licence to offend. This is an appeal to parrhesia.
We can detect the two rival Greek conceptions of free speech in virtually all disagreements about freedom of speech today. For example, should people have licence to offend devout Muslims by publishing cartoons of Mohammed (the parrhesia view) or does this infringe on the right of devout Muslims to be fully included in society (the isegoria view)?
These competing notions have made it difficult for many to distinguish between free speech and hate speech.
Twitter was right to permanently suspend President Trump’s account. Twitter has the right to ban users who breach its stated rules and Trump has used social media to stoke hatred and incite violence—for example, in his tweets demonising immigrants, Mexicans and Muslims. Trump’s continued refusal to concede electoral defeat and his encouragement of the Capitol Hill rioters are sufficient grounds to ban him. However, Trump’s supporters argue that this is censorship and that freedom of speech should include the liberty to call out what they perceive as a fraudulent election.
But it is important to distinguish between free speech and hate speech. Hate speech is the abuse of free speech to incite violence against a person or group of people. Trump’s rants were clearly incitement under J. S. Mill’s definition (see epigraph).
But what does Mill’s distinction imply: that all opinions are actions with potentially unintended effects, or that context determines whether an opinion is harmless or dangerous? Free speech clearly can do harm. It can easily slip into hate speech, and this is anti-democratic, since concern for the welfare of others should be a requisite of civic participation. Free speech can be used to further good causes or to dehumanise. It can be a knife in the hand of a surgeon—or in the hand of a murderer.
Both those who defend equal rights to speech and those who endorse the licence to offend can be guilty of using free speech to stoke hatred and foment violence online, especially on social media. We cannot rely on Big Tech entrepreneurs to regulate online speech, not only because they are usually primarily concerned with profit, but also because their responses to different individuals and groups are inconsistent.
Anti-racist groups have employed Twitter and Facebook to stoke hatred against white people and to encourage violence and looting in the aftermath of the murder of George Floyd. The gruesome killing of Samuel Paty was possible in large part because some French Muslims campaigned for his murder on Twitter. Facebook has been utilised by Myanmar’s military personnel to incite the murders and rapes of Rohingya Muslims. In India, communal violence has skyrocketed, partly thanks to false information spread on WhatsApp groups by Hindu nationalists. Two days before the Christchurch massacre, Brenton Tarrant posted his Great Replacement manifesto on Twitter. Anti-refugee Facebook posts by the far-right Alternative for Germany party have encouraged attacks against migrants and refugees.
As these examples demonstrate, Big Tech has largely failed to regulate online hate speech by both left- and right-wing extremists. Social media oligarchs make money by selling targeted advertising. In some cases they have succumbed to pressure from authoritarian states in order to gain access to national markets, thereby helping those regimes stifle dissent. As Zachary Laub notes,
Social media platforms rely on a combination of artificial intelligence, user reporting, and staff known as content moderators to enforce their rules regarding appropriate content. Moderators, however, are burdened by the sheer volume of content and the trauma that comes from sifting through disturbing posts, and social media companies don’t evenly devote resources across the many markets they serve.
We cannot leave the regulation of online hate speech to Big Tech companies, which have proved unwilling or unable to prevent it. We need international laws to prevent online discrimination. Without them, we risk new atrocities. But first we must reconcile the age-old distinction between parrhesia and isegoria. Speech, as Aristotle reminds us, makes us political animals, but we must regulate it if we wish to flourish. Speech is both our blessing and our burden.