Two and a half years ago, I carried out my social media purge. The decision took some time to build up to, and the act itself was difficult. I spent two days in a state of ambivalence—determining whom I wanted to stay connected with, deciding what content I didn't want to lose, and overcoming my rationalizations. It wasn't an easy decision, as there were reasons to keep my accounts, but, exhausted by the feedback loop of narcissism, self-consciousness and extrinsic validation, frustrated by the prevalence of overly emotional reactions to overly shallow content, and disturbed by the continuous force-feeding of dopamine platters with tracking chips sprinkled on top, I knew that deletion was the right decision. I soon learned, however, that disconnecting was not the whole solution.
It's no longer a mystery that Facebook, Google, Amazon, Microsoft and their subsidiaries fuel themselves by influencing—or, arguably, controlling—our attention. We're now in the process of figuring out what to do about it, at both the systemic and individual levels. Systemic changes to the attention economy are necessary and are slowly getting underway, but they are only part of the solution. Much of the public conversation surrounding the attention crisis focuses on systemic issues, and, while those conversations need to be had, it's just as important that we talk about how to manage our attention as individuals, given our continuously transforming digital existence. Cal Newport recently launched this conversation into the mainstream with his new book Digital Minimalism, which details a philosophy we can all use to take control of our digital lives. I've been practicing my own version of this philosophy for over two years now, and it has become clear to me that attention is fragile, limited, variable in quality and easily exploited when it's not regularly attended to. In a world of utopian marketing ploys, airport iPad stations and pedestrians who don't look up, the importance of mapping out our relationships with the digital world is clear. This is a process we can all take part in right now—by understanding what the attention economy is doing to us and being intentional about how we live our digital lives.
“Technology doesn’t fix itself when it doesn’t do what it’s supposed to do,” wrote Robert Pirsig, “and neither does the human mind. Diagnostic and intervention is required, and the more complicated the problem, the harder it will be to fix, but the more important it is to solve.”
The Threat of Behavior Modification
Information giants intentionally and effectively modify behavior on an individual basis in order to addict us to their products. They do this by feeding everything they learn about their users into advanced intelligent systems, which sift through copious amounts of data to exploit our vulnerabilities and maximize our usage. They profit from selling our data to advertisers, which incentivizes them to extract as much of our information as possible. This cycle snowballs: the more of our information they hijack, the more strategies they have to addict us; and the more strategies they have to addict us, the more of our information they gain access to and the more money they make.
These attention-harvesting platforms are built with such sophistication that, even if we're completely aware of how their addictive schemas affect us psychologically, we're still likely to fall prey to them, because they know exactly how to exploit our primitive nature. We are driven by how we feel, even if we don't know how we feel. Impulsive desires arise from the unconscious and shape what we pay attention to, and attention engineers exploit these desires by influencing us beyond our conscious comprehension. This is true across all attention-harvesting platforms, but social media is particularly dangerous because it exploits our social nature. Our thoughts and behavior are significantly influenced by what others think of us, and information giants exploit our need for social acceptance by way of intermittent reinforcement, the most effective way to cultivate an addiction. Drugs made of substances are not the same as drugs made of people. Johann Hari has famously declared that "the opposite of addiction is connection," but, if we become addicted to connection, our demands on one another will grow more and more extreme to keep up with our rising tolerance for social gratification, to the point of mass social neurosis—especially when we are synthetically programmed to want it more than we inherently would. Taking a selfie at the furthest edge of a steep cliff in order to capture the grandest view, for example, can lead one to justify the risk of falling off. Risk-taking is a great thing when it's fueled by a goal worth achieving, but not when it's merely a means of satisfying an addiction.
Social relationships are not drugs to be administered or consumed, but social media conditions us to frame them that way. Information giants know that addicts are hooked not only on the reward itself, but also on its absence—and this is the key to their program. If you don't know what reward you're going to get or when you're going to get it, the reward releases more dopamine. This is why Tristan Harris compares smartphones to slot machines: both weaponize intermittent reinforcement. Every time we post on social media, we anchor at least part of our wellbeing to the potential rewards or punishments we may receive from others, which can create a surge of anxiety that would not exist had the post not been made. Research has shown that ambivalent relationships are far more stressful than negative ones: it's better to know where you stand with someone than not to know, even if where you stand is bad. This is why the waiting period that precedes the arrival of likes, comments and shares is so addictive: it launches the user into a state of ambivalence, which is intermittent reinforcement's favorite state. Addiction to connection + intermittent reinforcement = existential instability.
Disconnecting from the machine disconnected me from a lot of people, which, for a while, made me feel more isolated than free. The withdrawal I experienced from cutting off a significant source of connection was accentuated by the negativity bias, and it took some time to get over. The isolation didn't feel good, but it would have been worse had it been shaped by behavior modification algorithms, which exploit the negativity bias to keep us hooked on attention-harvesting platforms. This is why fake news, clickbait articles and online mobbing have become so prevalent on social media: the most extreme and negative content generates the most emotional resonance, which in turn attracts the most attention. As a consequence, social media usage pushes us towards developing outrageously exaggerated online personas, priming us to accept deception, polarization and dehumanization as normal ways of living. Social media is a manipulative machine: the algorithms that fuel it not only cultivate radical and inhumane personas, but also group similar personas together, analyze the behavior of both the group and the individual user in order to display individually tailored content that maximizes engagement, and cultivate an echo chamber of personas that pathologically reinforce one another. This grouping de-emphasizes individual value: it rewards individuals for aligning with what their group already perceives to be true, gluing similar perceptions together regardless of how valid they are or who articulates them. This leads to a dogmatic online war between alternative perspectives that we don't even know are genuine. The algorithms don't know either, nor do they care. Social media hinders individual meaning and the true communities that are created out of that meaning. By dehumanizing our interactions with each other, it trains us to be units in a mass rather than individuals in a group. It, paradoxically, disconnects us by connecting us.
While the aforementioned control mechanisms have reached the public sphere of concern, the general public is not doing enough about them. This is primarily because the very products that influence or control our behavior are the ones we depend on to live normal everyday lives (by modern standards). We are generally willing to let our attention be directed if we believe it's doing us more good than harm, but the problem is that the good is blatantly clear while the harm is perniciously hidden. The products in which these control mechanisms operate continue to masquerade as simple, utilitarian, joyful, aesthetically pleasing and world-unifying—the brand strategy of our age. This is the new feel that is built into every shiny app, every gentrified building and every fake smile. This utopian scheme obscures the dystopian threat information giants pose to society: behavior modification can be used to override our autonomy. In the words of Yuval Noah Harari, the attention economy "threatens to know us better than we know ourselves." Natural selection is not going to determine our future: humans are hacking and modifying natural principles like never before. As our ability to transcend our biological nature increases, so does our responsibility to shape humanity in a way that is in our best interests.
Big tech fuels itself on the vast amount of personal information it extracts from us, the overwhelming majority of it milked under the radar, and uses that information to control our attention spans, and, arguably, our lives in general, as best it can. The word information here transcends its literal definition: it consists of everything unique to us as individuals that we agree to make public via our devices—how we talk, how we act, how we feel, where we are, where we were, where we're going, and every other documentable mechanism of existence that is packaged into a product, sold to advertisers and fed into complex algorithms that threaten to undermine our autonomy. Autonomy was not meant to be automated, and those intelligent systems are not trained to care about our values. Enough dystopian novels have been written for us to imagine what kind of future this may lead to, but there is much less guidance out there about how we should handle the situation right now, as individuals, given the ubiquity of attention-harvesting technology.
Lifestyle or Tool?
For all of social media's flaws, opting out of it can put us in a difficult position. Many experience what Jaron Lanier calls universal cognitive blackmail—a term used to describe the widespread obligation to market and advertise ourselves using behavior modification technology. Almost half of the world's marketing and advertising spending is allocated to behavior modification technology, and many people and businesses rely on it to gain a following, maintain a public presence and avoid being overshadowed by their competition. A while after I deleted my Facebook account, I had to create a new profile to maintain access to a public page I used to promote a music festival I was organizing. Given the threat posed by universal cognitive blackmail, using Facebook to promote the festival was pragmatically justifiable. I was putting a lot of my own money on the line to make it happen, so I needed to use every resource I had to lower the chances of incurring a loss. Deleting my personal Facebook account was a bad enough marketing decision on its own; I was confident that deleting the festival's Facebook page would have resulted in a financial loss.
On a deeper level, Lanier’s definition of universal cognitive blackmail can be broadened to describe the social pressure modern individuals face to use attention-harvesting platforms as a necessity for existence. Social media is not a necessity for existence, but we live in a world that treats it as such. “Social media is only necessary because it exists,” as Newport has said, and it’s important for us to recognize that. Using it may be required in some cases, and it can even enhance certain aspects of our lives, but it’s toxic by default. This requires us to take control of how we use it. We need to separate its default pseudo-lifestyle framework from its tool-like benefits in order to gain autonomy over our digital lives.
The threat of behavior modification explains why using attention-harvesting platforms is not in our best interests, but it's more important to determine how we should or shouldn't use them. We need practical, value-centered reasons to make the necessary changes to our digital lives if we're to be motivated to make them. The products that harvest our attention are so deeply entrenched in modern culture that most people will continue to use them, so it's important that we use them autonomously and intentionally. On its website, the Center for Humane Technology lists many useful and easy-to-implement hacks for gaining autonomy over our digital lives, and Newport lays out many effective methods in Digital Minimalism: delete social media from your phone; unfollow everyone and everything that de-energizes you; limit your posting to a fixed schedule; bookmark pages to bypass the news feed; get your news from only a few reliable sources; turn your smartphone into a dumb phone; leave your phone at home or in the car; don't click like; and so on. These are all examples of what I call technical intentions: strategies for changing how we use digital technology. However, changing the way we use technology won't, by itself, change the way we think or feel about it. Technology is not an object: it's a faculty. The way a person views technology is no different from who they are, insofar as they know who they are or how they view technology. Modern culture conditions us to see technology in a dualistic, material sense, which is why we tend to think that either we control it or it controls us. If we see ourselves as no different from the technology we use, however, our intentions will become much clearer than they are from a dualistic perspective, because we won't see technology as something other than ourselves. Taking on this monistic perspective will enable us to create our own digital values, which align with exactly who we are.
Any strategy that involves changing the way one thinks and feels about technology is what I call a philosophical intention. Technical intentions are only complements to philosophical intentions, because we can’t change the way we act unless we change the way we think and feel. Below, I outline three philosophical intentions you can use to help formulate your digital values. This is neither an exhaustive list nor a universal formula. It’s a suggestive template that can be filled in, added to, subtracted from or modified in whatever way suits you. Digital value-creation is an individual endeavor, and different things will work for different people. What’s important is to separate how you want to live your digital life from how others want you to live it.
1) Identify What Gives Your Life Meaning and Don’t Share It on Social Media.
What creates meaning in your life? Make the commitment to separate that from your online presence. If you feel compelled to share what gives your life meaning on social media, ask yourself this: does sharing this aspect of my life make it more or less meaningful? You will only find out if you choose not to post about it, or perhaps not to document it at all. Documentation is a vague term, so how you define it will depend on your context.
Just as information giants use intermittent reinforcement to exploit our uncertainty about the rewards or punishments we will receive in order to maximize our usage, you can use intermittent reinforcement on yourself to explore the dissociation of your real and digital lives. In other words, just live your life and see what happens. The transition out of the digital will likely make rewards more rewarding and punishments more punitive, at least until you adjust, but when you do, it’ll be worth it. After making the transition myself, the most common rewards I experienced were freedom, presence and meaning, while the most common punishment was isolation. However, as I maintained my digitally minimal lifestyle, focusing on the meaningful aspects of my life without sharing them on social media became more and more rewarding and eventually became my default. I also found that abstaining from an online presence reduced my interest in documenting my life at all. I still do in certain situations, but I don’t feel the constant itch to do so that I did when I was active on social media. My justification is this: the more in the moment I am when engaged in something, the more meaningful it is. If documenting the experience will make it less meaningful, I won’t do it.
As for the isolation: having cut off my access to 24/7 sociality and the rewards and punishments that came with it, I experienced withdrawal symptoms from the reinforcement schedule I had grown used to on social media. Once those symptoms cleared, however, I was free of the constant compulsion to share the meaningful aspects of my life and seek validation from whoever was willing to give it. I became much more selective about whom I shared things with, which made sharing them much more meaningful.
I recommend you abstain from posting for at least a month and analyze how you feel afterwards in comparison to how you felt before your abstinence. For the bold, try abstaining from documenting your life for the same time period. If a month is too long, then start with a week, or even a day, and work up from there. Once you get into the habit of not posting about the meaning in your life, and perhaps not documenting your life at all, it’s likely that your life will become more meaningful, as your life’s meaning will not be mediated by an addictive urge to document it or garner x number of likes, comments and shares. In the words of Jordan Peterson: “Meaning gratifies all impulses, now and forever. That’s why we can detect it.”
2) Make S.M.A.R.T. Commitments to Abstinence.
Making changes to your digital life can be a rewarding endeavor, but the decision should be specific, measurable, actionable, realistic and time-bound. Here is an example of a goal of this sort: I will not use any of my social media accounts for three months, so that I can commit the two hours per day I usually spend on social media to writing and recording my next album. Making your goals S.M.A.R.T. gives them clear criteria and immunizes them against rationalization.
2a) Specific. You need to have a specific intention for your commitment to abstinence. There needs to be a reason you are making the commitment that informs the specificity of your goal. I want to quit social media is not specific enough. A specific reason might be: I want to quit social media because it is lowering my attention span, which I want to increase so I can focus more deeply on the hobbies that make my life meaningful. Even that may not be specific enough, but the point is that being specific in your reasoning will allow you to clearly define what you want to get out of the abstinence.
2b) Measurable. Newport’s method, which he outlines in Deep Work, is a very effective way to measure progress: every day, log all the hours in which you are in an undistracted state of deep work, and, at the end of each week, measure your progress or lack of progress by comparison with the prior week. Newport’s method can be more broadly applied to the attainment of any goal. For our purposes, I will modify it to the following: log all the hours in which you actively work towards accomplishing the actionable goal that you set out to accomplish when deciding to abstain from digital technology. I recommend using the bullet journal system for logging. It’s an easy system to learn, you can customize it to your lifestyle and it takes you off the screen. Every month, I use my bullet journal to set temporal goals for each category of life I engage in, keep track of the hours I spend on each category, and plan my days and weeks so I will hit those goals. If I spend too much or too little time on a certain category, I will make note of what I need to do more or less of in the following month and state my reasoning for it.
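For those who prefer a digital complement to the bullet journal, the same weekly tally can be scripted in a few lines. The sketch below is purely illustrative: it assumes a made-up log format of dated entries (date, life category, hours of undistracted work) and simply totals the hours per category for each week and compares them with the previous logged week, which is all the measurement method above asks for.

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical log: one entry per work session,
# (ISO date, life category, hours of undistracted work).
log = [
    ("2019-04-01", "album", 2.0),
    ("2019-04-03", "album", 1.5),
    ("2019-04-08", "album", 3.0),
    ("2019-04-09", "reading", 1.0),
]

def week_of(iso_day: str) -> date:
    """Return the Monday of the week containing the given day."""
    d = date.fromisoformat(iso_day)
    return d - timedelta(days=d.weekday())

# Tally hours per (week, category).
totals = defaultdict(float)
for day, category, hours in log:
    totals[(week_of(day), category)] += hours

# Print each week's total next to the previous logged week's,
# so progress (or lack of it) is visible at a glance.
for category in sorted({c for _, c in totals}):
    weeks = sorted(w for w, c in totals if c == category)
    previous = None
    for w in weeks:
        current = totals[(w, category)]
        change = "" if previous is None else f" ({current - previous:+.1f}h vs. prior week)"
        print(f"{category}, week of {w}: {current:.1f}h{change}")
        previous = current
```

Whether you do this on paper or on a screen matters far less than doing it consistently; the point is to see, week by week, whether the hours you reclaim from social media are actually going where you intended.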
2c) Actionable. Once you define the specific intention behind your commitment to abstinence (in our example, increasing your attention span so you can focus more deeply on the hobbies that make your life more meaningful), you are then tasked with defining what those hobbies are and what needs to be done to accomplish your goals in relation to them. If writing music makes your life more meaningful, then writing and recording an album is an actionable goal that reflects that meaning. This actionable goal can then be broken down into sub-goals, such as rent a practice space, buy the equipment I need and pick a daily time to work. Those sub-goals can be broken down further still, until you know exactly what you need to do.
2d) Realistic. Due to universal cognitive blackmail, many people may feel stuck when they get to the realistic component. Consider organizations that face an economic threat if they choose not to use social media. If these platforms really are required for your organization to thrive and quitting them is not an option, I recommend using a post-scheduling service such as Later, which lets you schedule all your posts for a given time period in one go, reducing the time you need to spend on these platforms and making your organization more efficient. I discovered Later last year, when I was contracted as a marketing assistant to increase an art gallery's social media presence. To avoid attention fragmentation, I used Later to schedule many posts at a time: rather than working in quick bursts of 15–30 minutes per day, I would dedicate a few hours to a single post-scheduling session every week or two. Realistic practices like this are more complicated for individuals, however, as individuals tend to face existential threats from not using these platforms, such as social isolation and opportunity costs, which are generally more subjective and harder to measure than economic threats. Existential threats have to be analyzed on the individual level, and that analysis requires the measurement of subjective value, which I explain in the following intention.
2e) Time-bound. The time period in which the goal is attained can be contingent on finite or indefinite circumstances, or can be bounded in itself with no contingency. Jaron Lanier refuses to use any social media platform until they adopt a business model that is in the best interests of humanity: an indefinite contingency, because we don't know when that is going to happen. A college student may decide not to use social media until finals week is over: a finite contingency, because there is a definite end point. An anxious and depressed social media addict may decide to deactivate their accounts cold turkey for one month: a bound with no contingency at all. Whatever the case, goals that are bound by time are more deliberate than goals that are not.
Making S.M.A.R.T. commitments to abstinence will make you more resilient to impulsivity, distraction, and rationalization, and more laser-focused on accomplishing your goals, because you will have created a meaningful line of sight on which to focus your attention.
3) Measure Subjective Value.
Resuming my use of Facebook after the initial purge was a strenuous and ambivalent decision that felt internally contradictory, given how adamant I had been about removing social media from my life. Recognizing that internal opposition was necessary, however. It made me realize that, when I quit, I had identified with too many of the pathological aspects of these platforms without giving their virtues proper consideration. While I was in the process of quitting, those virtues were relentlessly reminding me not to, but I hadn't thought through what a digitally minimal lifestyle would look like, so I decided that quitting was the only solution. The better I adapted to this lifestyle, however, the more I understood that we don't need to stop using digital technology to avoid being controlled by it: we just need to determine how we want to be integrated with it.
Next time you aimlessly pick up one of your devices, ask yourself: how useful is this thing, given my particular circumstances? The answer varies from individual to individual. I suggest applying Newport's method of using the 80/20 rule to answer the question in the subjective manner it requires. Which aspects of your digital integration fall within the 20% of causes that produce 80% of your desired outcomes? In Deep Work, Newport uses an example that is relevant to all of us: do your interactions with friends on social media account for 80% of your social success? It may not surprise you if the answer is no, but it's easy to assume the answer is yes if you never ask the question. On the other hand, maybe social media does improve your life. If you're a photographer with an Instagram page, Instagram may well be one of the key ingredients of your professional success: there are countless photographers who take great photos, but far fewer who know how to market themselves, and Instagram can be a crucial tool for that.
The 80/20 method can be applied to any aspect of life, but it’s a very useful method for determining the utility of your digital integration, because it allows you to quantitatively measure how much value something provides to you as an individual.
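If you want to make that measurement concrete, one simple approach is to score each element of your social and digital life by the subjective value it provides and then see how few of those elements account for the bulk of the total. The sketch below is only an illustration of that idea: the activities and scores are invented, and the 80% threshold is the conventional Pareto cut-off, not a magic number.

```python
# A rough way to ask the 80/20 question of your own digital life:
# assign each activity a subjective value score (any consistent scale),
# rank them, and see how few activities account for 80% of the total.
# The activities and scores below are purely illustrative.
activities = {
    "in-person time with close friends": 40,
    "long phone calls with family": 20,
    "photography portfolio on Instagram": 15,
    "group chat with old classmates": 10,
    "scrolling the news feed": 5,
    "arguing in comment sections": 2,
}

total = sum(activities.values())
cumulative = 0.0
vital_few = []

# Walk through the activities from most to least valuable, stopping
# once the running total crosses 80% of all the value you listed.
for name, value in sorted(activities.items(), key=lambda kv: kv[1], reverse=True):
    if cumulative / total >= 0.8:
        break
    vital_few.append(name)
    cumulative += value

print(f"{len(vital_few)} of {len(activities)} activities produce "
      f"{100 * cumulative / total:.0f}% of the value:")
for name in vital_few:
    print(" -", name)
```

If your feed-scrolling lands outside the vital few, that's a strong hint about where your attention is being spent rather than invested.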
Moving Forward
We have an ethical responsibility to take control of our digital lives, because the attention economy is not subordinate to ethics. We cannot let information giants define how we live: we need to create those definitions ourselves and make the attention economy subordinate to us. By creating our own digital values and being intentional about how we live our digital lives, we will not only maximize our autonomy, direct our attention towards what matters and make the world a better place, but also provide a more accurate picture of how the digital world should be shaped going forward. We live in a time of transition, and it's important that we act as a true democracy by each making our values and intentions clear, so that those who have the power to make systemic changes to the digital world know that we are not going to play the attention economy's current game, and that they will need to change it to align with the game we want to play. Every time we use our devices, we need to know what we're actually doing, for the sake of ourselves and everyone else.