Learning Objectives

  • How data are the basis for much economic activity
  • What is at stake with the widespread collection of personal data

Key Messages

  • There are ethical considerations when working with or alongside business models that rely on personal data
  • There is a social imperative to be aware of how our individual actions online are part of the larger social context

What is Surveillance Capitalism?

Contribute to this book!

The Web, Publishing, and Ourselves is a new open textbook that critically explores the relationship between technology and publishing, as well as the many ways in which technologies are shaping our personal lives.

Notice something that’s missing? Want to recommend additional reading materials? Please let us know by participating in our open review process. We appreciate your feedback, ideas, and suggestions for how to improve this continually evolving resource.

In the last chapter, we discussed the major business models used online—the marketplace, subscription, membership, and freemium models, along with advertising and platform cooperativism—but of these, none have had more profound effects than those stemming from the collection, sale, and use of our personal data. With the collection of personal data, it has become possible—and indeed profitable—to learn very detailed patterns of behaviour (not just of what we buy, but of who we are, what we do, and what we think) and use them to change and shape our preferences. The result has been a reorientation of our entire economy into what has been termed “surveillance capitalism.”

It has become standard to accept terms of service whenever we use a website, app, online service, or digital device. In clicking through these terms (too often without reading them), we agree to the collection of details about what we do, often beyond what is strictly necessary for the service in question, and often far beyond what the average user realizes. We all understand that an app like Google Maps needs to know our location to give us directions, but many might not expect that it will track everywhere we go. Even fewer would suspect that many other apps, like Facebook, do the same thing, even when their functionality does not require them to know where we are (except at specific times when a user ‘checks in’). Similarly, voice assistants meant to help us, like Siri and Alexa, constantly listen to and analyze our conversations (again, even while not in use). On the web, various types of trackers and some browser plug-ins follow us around as we browse and are then able to trade data about where we spend our time online.
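The cross-site tracking described above can be sketched in miniature: a “tracking pixel” served from a single third-party domain receives a request—carrying the same cookie ID—from every page that embeds it, letting the tracker stitch unrelated visits into one profile. The Python sketch below is purely illustrative; the site names, cookie ID, and function names are invented, not taken from any real tracker.

```python
# Illustrative sketch (not a real tracker): each page embeds a "pixel"
# request to the same third-party domain, and the request carries the same
# cookie ID no matter which site the visitor is on.
from collections import defaultdict

profiles = defaultdict(list)  # cookie_id -> list of pages visited

def pixel_request(cookie_id, referrer):
    """Simulate the tracker's endpoint logging one pixel hit."""
    profiles[cookie_id].append(referrer)

# The same visitor (cookie "abc123") browsing unrelated sites:
pixel_request("abc123", "news-site.example/politics")
pixel_request("abc123", "shop.example/baby-products")
pixel_request("abc123", "health-forum.example/sleep-apnea")

# The tracker now holds a cross-site browsing profile:
print(profiles["abc123"])
```

The key point the sketch makes concrete is that no single site hands over a full profile; the profile emerges simply because every site embeds the same third party.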

In this new surveillance-based economy, our personal data is the resource being mined—but unlike the natural resources to which data collection is often compared, our data supply is infinite. According to Silverman, “[s]urveillance capitalism depends on the constant gathering of ‘behavioural surplus,’ or the data exhaust that we produce as part of the normal course of web browsing, app use, and digital consumption. All of it is potentially revealing, allowing companies to make sophisticated inferences about who we are, what we want and what we’re likely to do.” Every action we take is considered “data” that can be analyzed and fed back into the economy, meaning that there is no end to what can be extracted. In itself, data has no intrinsic use value or exchange value; rather, the value of personal data is relative to how socioeconomic and political structures enable it to be used to acquire wealth or consolidate political power. In the political sphere, profiling data can be integral to election campaigns, while in the business sphere, consumer data fuels the modern billion-dollar advertising industry and its ability to create targeted ads so precise that they seem to know exactly when and where to appear.

Targeted ads are the most well-known use of personal data. By now, it is quite common to search for a product or walk into a store, only to have that product show up in our online ads later on. While some find targeted ads preferable to irrelevant ones, these judgements should be made with the understanding that effective targeted advertising requires more than identifying the right product: it can require knowing intimate details about individuals. For example, apps used to track women’s cycles are used to sell everything from chocolate bars to hygiene products, make-up, and baby accessories. While that may seem innocent enough, what happens when we consider that “all data is health data”? What does it mean when that same data and detailed knowledge about who we are is used to determine our premiums for health insurance? Whether or not we qualify for a government program? Our eligibility for a job? Are all uses of the data collected for our convenience justifiable?

Why is Data Collection Problematic?

“We rushed to the internet expecting empowerment, the democratization of knowledge, and help with real problems, but surveillance capitalism really was just too lucrative to resist. This economic logic has now spread beyond the tech companies to new surveillance-based ecosystems in virtually every economic sector, from insurance to automobiles to health, education, finance, to every product described as “smart” and every service described as “personalized.” By now it’s very difficult to participate effectively in society without interfacing with these same channels that are supply chains for surveillance capitalism’s data flows. For example, ProPublica recently reported that breathing machines purchased by people with sleep apnea are secretly sending usage data to health insurers, where the information can be used to justify reduced insurance payments.”—Shoshana Zuboff

What is so worrisome about surveillance capitalism? In the simplest terms, surveillance capitalism undermines personal autonomy and democracy. More specifically, surveillance capitalism is problematic because it is:

  1. A Threat to Global Politics—It doesn’t take a deep analysis to see how data has been used for political gain. Take, for example, the role of Facebook in the 2016 US election, where Russia is accused of backing over 3,000 ads, many of which violated US federal law. The distribution of targeted political propaganda through social media platforms continues to be problematic and is already underway for the next US election. Furthermore, there is a risk of the private world of politics becoming public. According to Landwehr, Borning, and Wulf, “[p]articularly due to the potentially unlimited lifetime of the data and lack of transparency with regard to what personal data was gathered and how it was used in profiling, people in public offices or running for them will have to permanently fear that unpleasant private matters from their past could be dug up. In addition, the pressures toward more and more provocative content drives extremism and social fragmentation.”
  2. Used to Manipulate—For this business model to work effectively, it must capture users’ attention long enough to show them advertising. This is done through the manipulation of human psychology, such as by creating platforms that induce anxiety and compulsive behaviour through self-comparison, and by creating purposely addictive software. The result is a world where the IT sector (and the few billionaires behind it) can wield extreme power and control over people and real-world situations.
  3. A Threat to Individual Privacy—When we sign up for these services, we often do so willingly, and in the process provide information about ourselves. But what we typically do not know is where our personal data goes and how it is used, and there are no mechanisms to ensure transparency in data usage. As users, we are left uninformed, with no options to opt out of our data being sold. There are legal restrictions that apply to personal data collection outside of the web, but these do not apply to IT companies, so what happens to our data is anyone’s guess. If we look back historically at how IT companies have used personal data, we can see that it can be worryingly problematic. At the extreme end of the spectrum, in the 1930s, IBM’s census-tabulation technology was sold to the Nazis to help them identify Jewish and other victims of the Shoah. A less extreme, but more widespread, version is how this type of personal data is used daily against marginalized groups in ways that threaten their civil liberties. For far too long, BIPOC, LGBTQ2, women, religious minorities, persons with disabilities, and those who are generally marginalized by society have been made vulnerable through big data. According to the Civil Rights Coalition, big data collection has targeted marginalized groups in harmful ways, including targeted voter suppression and misinformation aimed at African American communities, predatory money lending, discriminatory government surveillance, and housing discrimination.
  4. Generating Artificial Need—While creating artificial need is a feature not of surveillance capitalism specifically but of capitalism itself, surveillance capitalism reinforces feelings of need through effective, tailored, and manipulative advertisements. As Landwehr et al. state, “the generation of artificial needs is valuable to the advertisers, which can serve these needs while having a destructive influence on the individual as well as society. On the individual level the users pay twice for these artificially generated needs, while true needs remain unsatisfied: once with the money of their purchases; and the second time with the time they spend due to the provocative content and in psychological traps.” At a social level, Landwehr et al. claim that the gathering of personal data creates bubbles which further fragment groups of people, making global action and social movements difficult, and further supporting an unsustainable and environmentally devastating level of mass production and consumerism.
  5. A Centralization of Power (Threats to Democracy)—Those who have access to large amounts of user data (such as Facebook, Amazon, and Google) gain their power through predicting and manipulating the ways users behave. These large conglomerates make billions of dollars annually off of user data and, in turn, buy up more and more smaller companies, furthering their reach and the power they hold. As Zuboff says, “Instrumentarian power challenges democracy. Big Other knows everything, while its operations remain hidden, eliminating our right to resist. This undermines human autonomy and self-determination, without which democracy cannot survive. Instrumentarian power creates unprecedented asymmetries of knowledge, once associated with pre-modern times. Big Other’s knowledge is about us, but it is not used for us. Big Other knows everything about us, while we know almost nothing about it. This imbalance of power is not illegal, because we do not yet have laws to control it, but it is fundamentally anti-democratic”.

“While the connection between individual datum and actual human beings can appear quite abstract, the scope, scale, and complexity of many forms of big data create a rich ecosystem in which human participants and their communities are deeply embedded and susceptible to harm.”— Ten simple rules for responsible big data research, Zook et al.

The Content Industry and Data Privacy

Why does data privacy matter to us as content creators and publishers? Simply put, data privacy matters to us because we are at the forefront of data collection. We are the gatherers, users, and beneficiaries of other people’s personal information—from selling ads to targeting book sales and creating user profiles. Yet, as discussed above, data collection can be problematic. It threatens global democracy, challenges individual freedom, and disproportionately affects marginalized groups. As discussed in the previous chapter on business models, one must be aware of the implications (pro or con) that arise from pursuing a particular business model. Because this model is so problematic, how, as content creators, should we be using it? How do we find a balance between using user data to our benefit and ensuring that we are operating in an ethical manner? When is it right to collect and utilize user data?

Big Data Ethics

To answer these questions, we need to understand that data collection does not always have to challenge personal and social freedoms. While we may protect our own rights through individual data-privacy practices, these individual actions can only do so much. Instead, we can work towards a system of data collection that is in line with “big data ethics.” Big data ethics is the “systemising, defending, and recommending concepts of right and wrong” behaviour around the use of personal data. It is primarily concerned with six principles:

  1. Ownership—ensuring individuals own their own data
  2. Transaction Transparency—providing insights into the data that is being used
  3. Consent—providing informed and explicitly expressed consent of what, how, and why data moves
  4. Privacy—making sure an equitable effort is made to protect personal privacy
  5. Currency—ensuring individuals are aware of any financial transactions made as a result of their data
  6. Openness—making aggregated data sets openly available for others to use

While all of these guiding principles are fundamental to creating and maintaining an ethical data framework within publishing, Pentland has argued that nothing is as challenging and important as data ownership. In his “New Deal on Data,” Pentland outlines the three classic tenets of property ownership: the rights of possession, use, and disposal. Although these are typically applied to property as we traditionally understand it, they can easily be applied to personal data. Within this framework, individuals would have the right to possess their own data and remove it at any point, providing them agency in a business model that has for too long been predicated on exploitation.

New and Emerging Regulations

In January 2020, California’s groundbreaking privacy law, the California Consumer Privacy Act (CCPA), came into effect. Under this law, Californians are able to:

  1. Demand that companies disclose what information is collected about them (and provide them access to that information)
  2. Have their data deleted upon request, and prohibit companies from selling their data
  3. Receive equal access and treatment when exercising these rights (non-discrimination protection)

This applies only to companies that “have an annual gross revenue in excess of $25m, derive 50% or more of their annual revenue from selling consumers’ personal information, or annually buy, receive, sell, or share the personal information of more than 50,000 consumers, households, or devices for commercial purposes.” (The Guardian)
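The quoted thresholds amount to a simple “any one of three” test, which the following Python sketch makes explicit. The function and parameter names are illustrative, not drawn from the statute itself.

```python
# Illustrative sketch of the CCPA applicability thresholds quoted above;
# names are invented for this example, not statutory terms.
def ccpa_applies(annual_revenue, data_revenue_share, consumers_handled):
    """Return True if any one of the three thresholds is met."""
    return (
        annual_revenue > 25_000_000       # gross revenue over $25m
        or data_revenue_share >= 0.50     # 50%+ of revenue from selling personal info
        or consumers_handled > 50_000     # data on 50,000+ consumers/households/devices
    )

print(ccpa_applies(30_000_000, 0.10, 1_000))  # True: revenue threshold met
print(ccpa_applies(5_000_000, 0.10, 1_000))   # False: no threshold met
```

Note that the conditions are joined by `or`: a small company that meets none of the three is outside the law’s scope, which is one of the criticisms raised about its coverage.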

Q. What do you think this act means for user privacy and data collection? What, if any, precedents will this act set?


  • Do you think the government should run a campaign to improve people’s awareness of data privacy? Should data privacy be incorporated into school curricula? How? Are there any existing examples?
  • What uses of data are you comfortable with, and how would you like data privacy to be regulated?





The Web, Publishing, and Ourselves Copyright © 2020 by Juan Pablo Alperin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
