It has become popular lately to call for the breakup of so-called Big Tech. Such calls have come from Senator and Presidential candidate Elizabeth Warren, Facebook co-founder Chris Hughes, the House Antitrust Subcommittee and the Justice Department itself. There is a decent case to be made for such a break-up, but crudely applying antitrust law, as Warren and Hughes suggest, will not work. Warren and Hughes grudgingly acknowledge that the big tech firms don’t fail the standard antitrust test of abusing their market position to gouge consumers on price. Instead, they argue that these firms’ market shares give consumers little choice but to sign up to more stealthily abusive practices. “It is not actually free,” Hughes tells us, “and it certainly isn’t harmless.” But both seem to believe that Facebook, Google and others succumb to the temptation to inflict such harm solely because they are big. Hence, the solution is to make them smaller. It doesn’t appear to have occurred to either of them that they are big because they inflict such harm.
Facebook and Google are not Standard Oil and AT&T. They operate business models whose network effects tend towards monopoly, due to the continuous redeployment of increasing returns to scale. Users pay not with money but with data, which Facebook and Google then turn into productive capital that creates products for another group entirely. The quality of the service to the users—the unknowing and hence unrewarded capital providers—scales quadratically with the size of the network and, since the services are free in monetary terms, any serious attempt to compete would require monumentally more capital than could ever generate a worthwhile return. The proper regulatory approach is not to cut off the heads of these hydras one at a time, but to acknowledge that these are fundamentally new economic entities.
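The quadratic-scaling claim is essentially Metcalfe’s law: a network of n users has on the order of n² possible pairwise connections. A toy calculation (the user counts below are purely illustrative, not company figures) shows why a challenger with even a respectable user base faces a crushing value gap:

```python
# Toy illustration of Metcalfe's law: a network's value grows with the
# number of possible pairwise connections, n * (n - 1) / 2, i.e.
# quadratically in the number of users n.

def network_value(users: int) -> int:
    """Number of possible pairwise connections among `users` nodes."""
    return users * (users - 1) // 2

incumbent = network_value(2_000_000_000)   # an incumbent-scale network
challenger = network_value(20_000_000)     # a challenger with 1% of the users

# The challenger has 1% of the users but only ~0.01% of the pairwise
# connections, which is why "just build a competitor" rarely works.
print(incumbent // challenger)  # → 10000
```

One percent of the users buys one ten-thousandth of the network value, before spending a penny on servers or engineers.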
Artificial intelligence makes this all the more imperative. By AI, I mean the honing of proprietary algorithms on enormous complexes of unwittingly generated data to identify patterns no human could—identifications that will be re-applied to dynamic pricing decisions and content filtering in order to make what will surely be called efficiency gains and improvements to the user experience. This would all be fine and dandy—as opposed to highly ethically suspect—if the contributors of the data had any idea of their own involvement, either in the contribution itself or in the eventual gain in efficiency. What is really happening here is that information that previously only existed transiently and socially will soon be turned into a kind of productive capital that will only have value in massive aggregations. This is why those who generate the data are happy to do so for free, for it is of no monetary value to them, and it is why the only people who will derive any productive value from it will be the already very well capitalized.
This is an unflattering, but perfectly accurate, description of the business models of Facebook and Google, who stalk you wherever you go on the web, wherever you bring your smartphone, and wherever you interact in any way with one of their trusted partners, all in an effort to manipulate your sensory environment and slip in as many ads as possible. This is so effective that they buy data from outside their platforms to supplement the potency of their manipulations. They are the largest surveillance organizations in the history of the world. Entirely the creation of human beings, the Internet seems to have slowly morphed into something deeply anti-human. And yet the more commonplace view is that it is a kind of sacred ideal of social organization, whose progress only a philistine would consider slowing.
Warren and Hughes both succumb to this way of thinking. Clearly desperate not to be cast aside as luddites, they caveat their agendas with blind veneration of technology in the abstract—technology without humans, which no intelligent human could possibly oppose. Warren claims that her real motivation is to “ensure that the next generation of technology innovation is as vibrant as the last.” Hughes claims that, “even after a breakup, Facebook would be a hugely profitable business with billions to invest in new technologies—and a more competitive market would only encourage those investments.” If the problem isn’t capitalism itself then it is the insufficient working of capitalism due to corrupted markets. The problem is never technology, for technology is a jealous god. It’s as if they had grown several bacteria in a petri dish, returned to find that one had exhausted the available resources and starved the rest to death, and concluded that the problem was this particular unfortunate outcome, rather than the nature of bacteria. Best to cut them all down to size and try again.
What if we built the Internet badly? What if Facebook and Google are not perversions of the high-bandwidth digital milieu, but are its logical endpoint? What if we wasted our opportunity for digital ecological diversity and instead optimized for growth rates, only later realizing that carnivorous bacteria can grow awfully fast?
Let’s take a step back for a moment and imagine an alternate model of consumer services over the Internet. Imagine it is not the case that the likes of Facebook, Google, Twitter, etc. operate enormous servers that authenticate our identities on our behalf, servers with which we interact only as clients who volunteer the data necessary to run the applications for free. Imagine instead that we ran our own servers, hosting our own data, to which these services connected through APIs. Twitter could show your photos to your followers, and Facebook could show your private messages to your friends, but only because you allowed them to—and you could remove this access with the click of a button. Furthermore, they couldn’t ban you, any more than email can ban you. Nor could they give access to your data to anybody other than the identities you specify, any more than email can secretly send your emails to somebody other than the desired recipient. This model of the Internet would treat everybody as roughly equal nodes, which, bizarrely, is how it is usually disingenuously described today. The Jedi mind trick here is that all nodes are roughly equal in terms of relaying encrypted data, but not in terms of controlling it. When we look at the far more important factor of control of information, rather than temporary possession in transit, the Internet looks a lot more like television than most of us care to admit: a few central hubs broadcast everything to everyone. Even more weirdly, on the Internet, the everything is mostly data about you that you gave these hubs for free.
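The inverted model can be sketched in a few lines. Everything below is hypothetical (class and service names included); the point is only the shape of the arrangement: your server holds the data, and the services hold nothing but revocable grants.

```python
# A minimal sketch of the inverted model: the user's own server stores the
# data, and services like Twitter or Facebook are mere API clients whose
# access can be revoked at any time. All names here are illustrative.

class PersonalServer:
    def __init__(self):
        self.data = {}      # e.g. {"photos": [...], "messages": [...]}
        self.grants = {}    # service name -> set of data keys it may read

    def grant(self, service: str, *keys: str):
        self.grants.setdefault(service, set()).update(keys)

    def revoke(self, service: str):
        """The 'click of a button': the service loses all access at once."""
        self.grants.pop(service, None)

    def read(self, service: str, key: str):
        if key not in self.grants.get(service, set()):
            raise PermissionError(f"{service} has no grant for {key!r}")
        return self.data.get(key, [])

me = PersonalServer()
me.data["photos"] = ["beach.jpg"]
me.grant("twitter", "photos")
print(me.read("twitter", "photos"))  # → ['beach.jpg']
me.revoke("twitter")                 # access gone; the data never left home
```

Note what the service cannot do in this arrangement: it cannot ban you, because it never held your data, and it cannot share your data onward, because it only ever reads what you have granted.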
It is important to understand why the fantasy outlined above didn’t come to pass. We need to grapple with the fact that the Internet’s being free and open has the inevitable consequence that value, scarcity, identity and consensus—to whatever extent these are desirable qualities—can only exist within centralised applications. We also need to appreciate the difference between an application and a protocol. This will allow us to probe what can be done peer-to-peer and what requires corporate intermediation, and why.
The Internet is free to use, open and simple. If you want to ping packets around the network layer, nothing can stop you. Ping away. Data is not scarce, after all. This enables many of the wonderful features of the Internet we are now all used to. But it also represents a trade-off, because, if something is free, it is difficult if not impossible to discern the kind of meaningful information that one might glean from a price in a market. The willingness to pay a price indicates a sincere belief and an honest commitment. Absent a price, the costs of insincere or dishonest behaviour will simply be dispersed throughout the network, rather than borne by the perpetrator.
Given there is no network-native scarce asset by which we could impose a monetary cost, could we perhaps instead utilise identities, such that dishonest actors can be identified and punished? Sadly, we cannot do this, either—at least not at the network layer. As Lawrence Lessig writes in Code 2.0, describing the minimalism in the Internet’s design, “the core is kept as simple as possible. Thus if authentication about who is using the network is necessary, that functionality should be performed by an application connected to the network, not by the network itself. Or if content needs to be encrypted, that functionality should be performed by an application connected to the network, not by the network itself.”
Because pings are free at the network layer and there are no identities, running your own server that welcomes API calls from the entire world would be intractable for almost everybody. Taken to its extreme, this is rather like living in a city of silent men in identical suits and masks, walking around conducting business, with nobody certain that they have interacted with anybody else before. This would almost certainly not be a safe place. A dishonest actor could either choke the bandwidth of, or, more perilously, expose weak security in, just about any personal server in the world, and then disappear back into a crowd of faceless men without a trace. Only the extremely well resourced could prevent this, which is in fact exactly what happened in the less fantastical reality that has developed. The network effects that Facebook and Google enjoy exist because such services can only be reliably provided at the scale that follows bacteria-like growth by first and foremost centralising and monetising user data. Furthermore, they retain user lock-in by providing applications for identity that port across other web services, many of which are simply sucked into or replicated within their own walled gardens for simplicity’s sake. If you try to move any of the content outside the walled garden, say to combine with services elsewhere on the web, even if the users give you permission to do so with content that is allegedly theirs, Facebook will sue you out of existence. This is not to say that Facebook is perfect at network security, but it is probably better than you are, and it is definitely better than you and a billion others each running one one-billionth of Facebook.
One major reason for this is that Facebook is running a special kind of application. Unlike Linux, Apache, MySQL and PHP—open source code on which most of the web, including Facebook, is built—Facebook’s application must appear to be a single instance of the same program, running simultaneously for every user. It is really more like the web itself, or email, in that every user is necessarily running the same version. By protocol we might very loosely and nontechnically mean something like the rules by which we all agree to interpret transmitted data for some particular purpose. Protocols need consensus: coordinated agreement by all parties in a network to acknowledge the same truths and do the same things. While the great benefit of open source code is the ability to copy, tinker, and resubmit, letting a thousand flowers bloom, Facebook is a kind of intermediated pseudo-protocol that provides the required consensus unilaterally.
There are open source protocols, to be sure. In fact, the Internet Protocol suite (encapsulating TCP/IP, HTTP, SMTP, and so on) is all open source. However, to return to Lessig, these are minimalistic and push complexity out of the network itself and into applications. None are expected to add new features with any regularity whatsoever. They are explicitly intended to be stable building blocks for further applications, and so necessarily tolerate network congestion as a trade-off in order to remain free and open. This minimalism aids consensus formation.
Notice that for a potentially complex pseudo-protocol, management by a centralised private party elegantly solves many of the problems raised thus far: identity can be centrally issued and authenticated. The complexity of the application can be arbitrarily high without incurring trade-offs in consensus, as users are merely clients. The application can be updated arbitrarily often and quickly for the same reason. Congestion by dishonest actors can be punished based on their issued identities. Although by no means guaranteed from the outset, this will all be extremely difficult to unseat once popular, given the network effects implicit in a protocol, or pseudo-protocol. This provides an incentive not only to try, but to devote significant resources to trying, given the potentially asymmetric payoff of future economic exploitation of the user base of the pseudo-protocol that simply doesn’t exist for an open source alternative. Facebook is inevitable.
To return to the question of what can be done peer-to-peer and what requires corporate intermediation: whatever makes consensus either easily achieved or not required, whatever does not require digital identity, whatever does not require digital value, and whatever does not suffer from the absence of economic incentives due to a fundamental lack of scarcity can be peer-to-peer. Anything else may be possible, but will be very difficult, and will likely be rapidly outspent and defeated by a private, centralised competitor.
Value, scarcity, consensus and identity are closely related. Where scarcity exists, value exists; where value exists, markets exist, and market prices are a kind of consensus. What’s more, they are no mere arbitrary vote, but a consensus reflecting sincere and honest belief. Identity also requires consensus: it is no good if Alice and Bob disagree about who exactly Carol is. If the means of establishing identity are not scarce, then anybody can plausibly claim to be anybody else and the necessary consensus fails. But if a scarce and nonfungible asset can be used as an identifier, consensus follows immediately.
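The final step of that argument can be made concrete. Below is a toy sketch, not any real blockchain’s API: the “scarce and nonfungible asset” is an entry in a shared append-only registry that can be claimed exactly once. Because everyone consults the same registry, disagreement about who Carol is becomes impossible.

```python
# Toy sketch: scarce, nonfungible identifiers yield consensus on identity.
# The Registry stands in for a shared ledger (e.g. a blockchain); all names
# and structures here are illustrative.

import hashlib

class Registry:
    """A shared ledger of identifier claims."""
    def __init__(self):
        self.owners = {}  # identifier -> hash of the owner's public key

    def claim(self, identifier: str, pubkey: bytes) -> bool:
        if identifier in self.owners:   # scarcity: no double-claims
            return False
        self.owners[identifier] = hashlib.sha256(pubkey).hexdigest()
        return True

    def whois(self, identifier: str):
        return self.owners.get(identifier)

ledger = Registry()
ledger.claim("carol", b"carol-public-key")    # accepted: first claim wins
ledger.claim("carol", b"mallory-public-key")  # rejected: already taken

# Alice and Bob consult the same ledger, so they cannot disagree about
# who "carol" is; consensus follows from the scarcity of the identifier.
assert ledger.whois("carol") == hashlib.sha256(b"carol-public-key").hexdigest()
```

Mallory’s attempted impersonation fails not because anyone adjudicated it, but because the identifier simply cannot be claimed twice.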
The foundational trade-off of the Internet was to cast value, scarcity, consensus and identity aside in favour of freedom and openness. Where value, scarcity, consensus and identity might have proved useful, private, centralised pseudo-protocol managers have lodged themselves within the ostensibly peer-to-peer network as increasingly unavoidable gatekeepers. The merits of this trade-off are looking more and more suspect, as the theoretical freedom and openness that paved the way for the gatekeepers seem now to be causing the erosion of real openness and real freedom. We do not need total scarcity, total consensus and comprehensive identification, but we do need balance. Thankfully, we can probably get it.
What I describe as a trade-off is really an ahistorical dramatization. Prior to the release of the Bitcoin codebase in January 2009, there was no known way to make data scarce. But today this is very well understood. This is not an advertisement for Bitcoin—and certainly not for the swathe of quasi-fraudulent so-called cryptocurrencies that have followed in its wake—but an honest assessment of the profound technical achievement that Bitcoin represents. The Bitcoin blockchain is the first ever instance of completely distributed trustless consensus. What’s more, it is an open source protocol. It can be built on top of by developers looking to harness digital scarcity, digital value, digital consensus or digital identity, as can the few other viable (real) blockchains that have furthered this blueprint in interesting ways.
There are countless options to choose from, but I will briefly cover two projects that combine aspects of blockchains and open source as technical means to worthwhile ends. Urbit and Gab have little obvious connection with the speculative mania commonly associated with cryptocurrency, but still have the kind of lofty aspirations to which the phrase “rebuilding the Internet” does justice.
Urbit aims to allow all users to have a peer-to-peer network computer and associated identity, from which they can securely run any and all online computation. While undoubtedly a fairly ambitious application—an entire operating system and network, akin to Linux and a suite of networking protocols, written from scratch—Urbit’s identities are rooted in the scarcity of well-known blockchain Ethereum, and hence provide centralised authorisation without a central authority. Much like an Ethereum or Bitcoin address, once owned, they cannot be controlled by anybody except their owner. Users utilise their unforgeable identities to access a personal server running the Urbit operating system and communicate within a network of likewise uniquely identified servers. This can be hosted wherever the user likes and can be accessed in any number of ways, provided the user has an Internet connection. Participants in the network are discouraged from being dishonest or insincere by the price and permanence of their scarce identity.
As the project’s website explains, “if IP addresses were cryptographic property, the Internet could have funded its own development with its own address space. With clear ownership, IP addresses would develop clear reputations. Abusers would lose real money in reputation cost and the internet would be a friendly network.” I won’t go into any more detail as the project truly is a mammoth reimagining of how the Internet ought to work, and is not quite at the stage of full roll-out yet. If you can wrap your head around the idea that a perfectly abstract, deterministic computer—a frozen, pure function operating over all the network data it has ever received (concise enough to fit on a T-shirt) that upgrades itself and maintains a log of every event it has ever produced—ought to exist, then you’ll have great fun with Urbit. Curious and more technical readers are encouraged to poke around the website or the GitHub. But all readers should be aware of what the Urbit community is proving can be built.
Gab is a more down-to-earth service, a simple browser and social media site. It is interesting less as revolutionary technology and more as revolutionary corporate ethos. Despite being de-platformed by virtually every major tech gatekeeper (and hence, among other things, barred from receiving payments) Gab has shifted its funding towards Bitcoin, and has powered on, proving that anything Silicon Valley can open source, Gab can open source more! When Brave released the Brave browser, enabling micropayments in the (somewhat) proprietary Basic Attention Token, Gab forked the open source code to the browser, stripped out BAT, and plugged in Bitcoin in its place. When Keybase integrated lesser-known cryptocurrency Stellar with its open source encrypted chat app, Gab immediately teased that it would fork this too, rip out Stellar, replace it with Bitcoin, and plug it into Gab’s DM system. Finally, Gab recently forked open source social media tool Mastodon and switched the backend of the core Gab social networking site to the open source alternative. Gab users can now self-host and communicate via open source protocol ActivityPub, easing the costs of doing so by building in the option to transfer value in—you guessed it—Bitcoin. This is all a little unavoidably technical but, in plain English, it will allow users to run Gab’s service on their own in such a way that nobody—not even Gab itself—can stop the users’ Gab followers from accessing their content. Gab is open sourcing itself. Try to imagine Facebook doing this.
Many involved with the Mastodon project have objected to the widespread perception of Gab as welcoming to white supremacists and have moved to block Gab’s domains. This is not an unwarranted smear: Gab has been correctly identified as hosting openly racist and sometimes criminally violent subgroups of users, and it is entirely understandable not to want to be associated with Gab. But it is also a bit of a non sequitur: Gab doesn’t ban white supremacists because Gab doesn’t ban anybody—just as email doesn’t ban anybody, either. So, it ought to come as no surprise that users who have been banned from other online platforms have flocked to Gab. Gab accepts the kinds of racism Twitter bans and the kinds it doesn’t ban. Besides, why shouldn’t Mastodon block Gab, if they desire to? Gab doesn’t mind. True freedom, online or off, means being able to choose not to listen to people if you don’t want to. But it does not mean being able to choose which people others cannot listen to, if they do want to. So, unlike Twitter or Facebook, the Mastodon community can’t do anything to stop Gab, because the tools Gab is using are free and open source. As Vice puts it, “there’s no functional way for Mastodon to shut down Gab.” Both the social media tool and the Dissenter browser were initially banned by Google, but the new open source version has since shot to number one in the Google Play store. Google issued Gab a nonsensical ultimatum on the basis of objectionable user-generated content that Gab must remove. But the whole point is that even Gab can’t remove it. More importantly, neither can Google.
Were these interlocking visions to come to fruition, we could revisit the fantasy described above. Several clearly desirable features immediately present themselves. For example, the issue of gatekeepers who exist for technical reasons assigning themselves political authority would evaporate. Were Facebook or Twitter run on a distributed network of servers as mere protocols for data exchange, porting the so-called social graph would be a trivial endeavour. The data indicating the personal connections around which your social media experience is built would exist on your server, and on each of your connections’ servers, and would be marked as such in order to be called by the Facebook or Twitter protocol in the first place. A challenger social network would simply tap into the same data, given your permission.
This might seem terribly abstract, but in fact it is exactly how the likes of Telegram and Signal work today with respect to your phone number. This in turn invites a comparison to a concept called Mobile Number Portability (MNP). This means in essence that you have a legal right to retain your mobile phone number when you switch carriers, and has been in force in most of Europe since the early 2000s, in Australia since 2001, in the US since 2003, in most of Asia since the late 2000s, and more recently in a handful of other jurisdictions. What is interesting about MNP is that it is technically extremely complex and costly to implement, and furthermore is clearly not at all in the interests of carriers. Without this legal mandate, carriers would have profound network effects allowing them to either build applications such as Telegram and Signal to further lock users in, or rent-seek from those who do. And yet the discourse in this arena could hardly be more different to that surrounding Facebook and Twitter (probably because the Internet is not involved). There is no “why don’t competitors just build a competing service?”, nor “these companies put in the development and operational costs, so why shouldn’t they have the chance to profit?”. There is only “this is obviously fair to consumers, so do it.” End of discussion.
To the extent that politicians or regulators feel the need or pressure to do something about big tech, this is probably the only sensible course of action. Arbitrarily large fines from the FTC, seemingly celebrated by Wall Street, will likely seem like an unhelpful quick fix before too long. Hard cases make bad law. We could and probably should mandate portability of personal data under the auspices of providing fairness to consumers and encouraging competition among web services. But it isn’t clear to me that any other form of proposed regulation would treat the disease, rather than everybody’s personal favourite symptoms. As seems to have happened with GDPR, a poorly conceived regulatory sweep will probably have the unintended consequences of further entrenching the monopolists. My preference would be to obviate regulation altogether by catalysing consumer self-sovereignty. Rearchitecting the Internet is a rather utopian goal, but the open source ethos is alive and well at practically every layer of development, up to and including upstart competing applications.
So here’s my plea: stop using big tech and venture into the wild.
As with all things online, this will involve a trade-off. I will not pretend that open source alternative applications such as Gab, Minds, DuckDuckGo, Telegram and Brave are yet as slick, powerful or robust as the Silicon Valley titans they seek to ultimately supplant. Nor is the underlying infrastructure of Urbit, ActivityPub or Bitcoin. The biggest problem of all is really a vicious circle: Facebook and Google have profound network effects, and open source alternatives, so far, do not. But network effects work in reverse too. We tend not to notice because this isn’t as sexy. If a critical mass of users switches away from Google or Facebook, their collapse will be surprisingly quick. This is a very dramatic potential outcome, and I suspect it is more likely that, at a certain rate of user emigration, these companies, and others, will adapt their policies to be more free and open, so as to better compete in this new environment.
What exactly will they be competing with? The networking may be on Urbit or Gab, the identities may be on Ethereum, the value exchange may be in Bitcoin, and the applications may be Minds, Telegram and the rest. But they may not be. The competition between specific services will be superficial. What matters is the competition between systems and ideas. Urbit, Bitcoin and Minds can be copied and adapted by competitors. This is a feature, not a bug. It keeps them honest in a way that Facebook and Google would never deign to be right now—but may have to soon. The system that wins will encourage collaborative creation of culture and capital. It will be antifragile. It will be open. It will be decentralised. Best of all, much of it is already here, and its network effects are growing by the day.
We should, however, be cautious of an overly utopian vision. The problems I claim may soon be solved by the Internet were largely also caused by the Internet. It is an engineering project with trade-offs, not an ethereal force of pure social progress. It can and should be changed to suit human beings, rather than the more popular and more celebrated—but frankly dystopian—inverse. It exists in the form it does because of the understanding of engineering, economics and law that were prevalent during its construction. But times change, understandings change, and the construction of systems can change too. We have had a serious false start, but we may now be moving towards a truly free and open Internet.