Facebook is a private company and can therefore enforce its rules without asking anyone for permission. Policy changes do enrage some users, but, overall, Facebook has managed to keep growing and to increase its value. Nevertheless, being a private company doesn’t mean you shouldn’t listen to the public (in a more meaningful way than sporadically asking people to click accept after a policy change). Facebook provides a public service and has more users than most countries have people, yet it is largely unaccountable for its actions.
Normally, Facebook answers only to its investors and to the companies that flood the network (and Instagram, the social media platform it owns) with ads. That needs to change. Facebook is indeed a private company, but it provides a vital public service and, as such, must be accountable to both governments and its users.
Facebook has recently banned some far-right users from the platform as a way of tackling fake news and extremism, but if Facebook is a public square in which people can engage in debate, the decision seems problematic. In a world that too often depends on online communication, preventing someone from using such communication amounts to censorship: it forbids that person from speaking in a public forum, to anyone willing to listen. We are giving too much power to a single company, which can prevent people from exercising their freedom of speech, a power it can turn against anyone at any time.
I don’t believe in unrestricted freedom of speech, and I think people should sometimes be banned from social platforms. But Facebook is currently a globally important—some would say the most important—forum for discussion, yet the company is completely free to change its rules at will and ban anyone it wishes, without any accountability.
Unlike in a court of law, on Facebook decisions are made by anonymous, unknown strangers, without any clear appeal process. A friend recently had his Facebook account deleted for breaking the rules, after a campaign by haters to have him banned. His complaints went unheard and the reasons for this ban were never made clear. It’s not in Facebook’s interest to leave room for appeal. Its rules express only the censor’s fickle pleasure.
In the West, we have been horrified to learn that China is using artificial intelligence to monitor its citizens—but is Facebook’s behavior so very different? Sure, we choose to join that social platform—but is it really a choice? Not joining means being excluded from a major part of modern social life. In a sense, the choice is between social ostracism and joining a social platform that scans your life for every little detail and too often shares it with its partners without your consent (remember Cambridge Analytica). We are gently nudged towards joining Facebook and then we relinquish our rights over our data and our online lives.
On Facebook, complaints are rarely heard and the complaints process itself is tortuous and unclear. We don’t know who makes the decisions, how those people are chosen or what rules they follow, or even whether the rules are applied uniformly. Users are often suspended without the right of appeal, without even knowing who complained about their accounts or why. The similarity to an authoritarian state is no coincidence.
My issue is not that extremists are being kicked off Facebook, but that they are being punished on the basis of rules no one voted for, without the right to defend themselves. There are no judges or lawyers involved. What is happening to them could happen to anyone. Too often, people are suspended for posting something that one of Facebook’s censors thinks is pornographic, offensive or violent, yet videos of beheadings usually escape the same censors.
There are countless cases of LGBT people who have been suspended for using expressions that have been reclaimed by the community and are commonly used affectionately among them—but which are considered hate speech by Facebook, no matter the context.
Even in cases of genuine hate speech, throwing unwanted actors out of public spaces can lead them to become more radicalised, as they retreat to less moderated spaces, where hate speech can proliferate unsupervised.
We find ourselves at a digital crossroads. The existing means of verifying and combating hate speech are ineffective and often penalise the innocent. In cases of genuine bigotry, we resort to banishment and censorship, rather than more constructive forms of conflict resolution.
Facebook is not alone in the way it handles these issues. Recently, YouTube and Google announced that they will also delete and ban any content they consider extremist. At first glance, this may not seem like a big deal. After all, who wants Neo-Nazi sermons freely available to all? Who wants to promote anti-vaccination videos, which pose a clear threat to public health? The problem is that YouTube won’t stop there. Videos containing offences against sexual orientation will also be banned. What does that even mean? To some, commenting on the participation of trans athletes in female sports is an offence against sexual orientation.
Will freedom of opinion be protected? And, if so, how? Companies already struggle to live up to their own standards—how can they assure users that they won’t just censor in bulk, so as to avoid future problems?
Take the BDS (Boycott, Divestment and Sanctions) movement against the Israeli state, which is too often described as anti-Semitic, despite the fact that it enjoys the support of many Jews. What if Facebook subscribes to the idea that BDS is hate speech and starts to censor every critic of the Israeli regime? Why should everyone be subjected to such a decision?
These bannings also help to increase the sense of victimhood felt by broad sectors of the Right. When openly progressive companies get to define hate speech, they risk creating an anti-conservative bias.
Recent research by Harvard’s Berkman Klein Center for Internet and Society has highlighted a fundamental flaw in the YouTube algorithm, which helped promote a pedophile network here in Brazil. YouTube’s automated system recommends videos based on what it learns about viewers’ tastes from their clicks: the algorithm is designed to suggest, among its further-viewing recommendations, content similar to what the viewer is already watching. Pedophiles exploited this to create a network of recommendations. The same algorithms also help spread videos made by conspiracy theorists and even those containing the kind of hate speech that YouTube now says it will fight.
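A minimal sketch of that feedback loop may make the mechanism clearer. The catalogue, similarity scores and the recommend function below are entirely hypothetical (YouTube’s real system is far more complex and its internals are not public); the point is only that a recommender which always suggests the unseen items most similar to what a viewer has just watched will, by construction, keep that viewer inside whatever cluster of content they started in.

```python
# Toy illustration of a similarity-based recommender. All names and data are
# hypothetical; this is NOT YouTube's actual algorithm. It only shows how a
# "recommend more of the same" loop keeps a viewer inside one content cluster.

from collections import Counter

# Hypothetical catalogue: each video lists the videos most similar to it,
# standing in for whatever similarity signal a real system learns from clicks.
SIMILAR_TO = {
    "home_video_A": ["home_video_B", "home_video_C"],
    "home_video_B": ["home_video_A", "home_video_C"],
    "home_video_C": ["home_video_A", "home_video_B"],
    "cooking_1": ["cooking_2"],
    "cooking_2": ["cooking_1"],
}

def recommend(history):
    """Recommend the unseen video most similar to the recently watched ones."""
    scores = Counter()
    for video in history[-3:]:                  # only recent views count
        for similar in SIMILAR_TO.get(video, []):
            if similar not in history:
                scores[similar] += 1
    return scores.most_common(1)[0][0] if scores else None

# Simulate a viewer who starts on one "home video": every recommendation
# stays inside the same cluster, because nothing ever pulls them out of it.
history = ["home_video_A"]
for _ in range(2):
    nxt = recommend(history)
    if nxt is None:
        break
    history.append(nxt)
print(history)   # ['home_video_A', 'home_video_B', 'home_video_C']
```

Nothing in such a loop distinguishes harmless clustering from harmful clustering: it simply amplifies whatever viewers already engage with, which is how the same mechanics can serve both family videos and a network of abusers.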
Large, monopolistic networks, such as Facebook, act like dictatorial states, denying rights to their citizens. We must not accept this. Infowars can be banned. Milo can be banned. Paul Joseph Watson can be banned. Perhaps they even should be. The problem is the lack of transparency and accountability. We need clear policies; we need to know the rules; we need to have a say in the decisions that affect our online lives.