
Surveillance capitalism and its discontents

Social psychologist Shoshana Zuboff’s book on surveillance capitalism reveals the perils and menace of the digital age. I’ve now read all 535 pages, or at least it was mostly read to me in urgent tones as an audiobook at 1.5x speed.

The content was so useful to me as I contemplate the implications of the smart city that I bought the paperback version as well — so I could dip into that to check my progress through the tome, annotate key passages and search out page numbers for passages I would like to quote. I’ll start by covering some of the book’s concerns in bland terms, outside the book’s framework of crisis and alarm.

Apps that improve themselves

The innovations evident in the digital age include the obvious: ubiquitous communications, online retail, managing money, the formation of online sociability, responsive computers, wearables, the prospect of driverless cars and the trappings of the smart city. Enabled by the digital age, I can purchase a book online and have it read to me while I walk to work, wash the dishes or sweat at the gym.

A smartphone app that enables me to buy a book or reserve a seat on a train requires that my device transmits data to some processing unit outside of my device. There, the data is processed and stored while the service I require is delivered back to me, instantly. As well as delivering this service, it’s common now for apps to make recommendations for similar, related or supplementary products and services, as if in anticipating my next move or desires, before I even know what I want.
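That round trip — send data offsite, receive the service plus anticipatory recommendations — can be sketched in a few lines. This is a toy illustration, not any vendor's actual API; the names (`PurchaseEvent`, `handle_purchase`, the co-purchase table) are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PurchaseEvent:
    """Data my device transmits to the remote processing unit."""
    user_id: str
    item: str  # e.g. a book title

# A toy co-purchase table standing in for the server's stored history.
RELATED = {
    "surveillance capitalism": [
        "the attention merchants",
        "weapons of math destruction",
    ],
}

def handle_purchase(event: PurchaseEvent) -> dict:
    """Deliver the requested service and, as a side effect,
    recommend related items drawn from accumulated data."""
    recommendations = RELATED.get(event.item, [])
    return {"status": "confirmed", "recommendations": recommendations}

response = handle_purchase(PurchaseEvent("user-42", "surveillance capitalism"))
print(response["status"])            # confirmed
print(response["recommendations"])   # two related titles
```

The point of the sketch is the side effect: the transaction completes either way, but the event data persists on the server, where it feeds the recommendations offered to me and to others.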

This offsite processing also serves to improve the performance of the app over generations of updates. The purchasing, navigation, translation, image search and speech recognition functions get better over time. Whatever version I am running, an app (and its servers) may also be designed to improve its performance as it amasses data about my online behaviours.

Improvement derives from other users as well. That is how these processes improve: from large volumes of data about inputs, outputs and responses gathered across many users.

System improvement operates in various dimensions. There’s human-mediated improvement. Think of the feedback provided by reviewers on TripAdvisor. The hotelier or restaurant owner sees the reviews and makes adjustments to their service. Designers and developers of apps respond similarly to that kind of feedback.

But some improvements are incremental and are automated. Not all improvements benefit all users all of the time. The improvements may not be something I want in particular. Better targeted advertising, for example, is not high on my personal wish list.

Improvements for some

Targeted ads bring the issue of improvement to the fore. Once the algorithms are designed to extract and process my data, targeted advertising needs no further mediation by human developers or designers. Google led innovation in targeted advertising.

In its earliest incarnations, search words I typed into a browser would summon custom ads from a database at Google. Now the ad delivery system gets better at deciding what ads, inducements or persuasions to present with each successive interaction. That’s thanks to sophisticated user profiling, derived not from my immediate search and browsing activity, but from a history of search and browsing.

Google’s algorithms not only examine my current search terms, but see them in relation to previous searches I have made, and can correlate them with input from my interactions with other systems run by Google and its partners: Maps, image search, translation, Gmail, a Fitbit, Google Assistant, the Android OS, etc.
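The shift from matching single search terms to ranking ads against an accumulated history can be illustrated with a minimal sketch. This is not Google's actual system — the profile here is just a bag of words built from past queries, and the ad data is invented — but it shows how history, rather than the immediate query, drives the selection.

```python
from collections import Counter

def build_profile(search_history: list[str]) -> Counter:
    """Accumulate term frequencies across a user's past searches."""
    profile = Counter()
    for query in search_history:
        profile.update(query.lower().split())
    return profile

def rank_ads(profile: Counter, ads: list[dict]) -> list[dict]:
    """Rank candidate ads by the summed profile weight of their keywords."""
    def score(ad: dict) -> int:
        return sum(profile[kw] for kw in ad["keywords"])
    return sorted(ads, key=score, reverse=True)

history = ["hiking boots edinburgh", "waterproof hiking jacket", "train to glasgow"]
ads = [
    {"name": "outdoor gear sale", "keywords": ["hiking", "jacket", "boots"]},
    {"name": "city hotel deal", "keywords": ["hotel", "glasgow"]},
]
best = rank_ads(build_profile(history), ads)[0]
print(best["name"])  # outdoor gear sale
```

Each new search updates the profile, so the ranking "improves" (from the advertiser's point of view) with every interaction — the self-improving loop described above, reduced to its simplest form.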

Facebook provides similar “improved” capabilities as I feed it my news, images, comments, likes, re-posts and friends.

Even though we sign conditions of service agreements, we users are not entirely aware of the data we provide or what benefits, if any, it is bringing to us. Data extraneous to the service being delivered is siphoned from us: keystrokes, clicks, what we spend our money on, how much of it we have, what we like, prefer and loathe, our attention, and even our politics.

Much of this can be inferred from our online behaviour. You may never declare online that you wanted Britain to remain in the EU, but that preference can be derived with a high probability from the kind of friends you have, what pictures you upload or like, and other data that feeds your profiling.

Contentless profiling

The algorithms don’t need to examine the content of our messaging to work out our profiles. Zuboff cites research that shows how the frequency of our responses to particular online stimuli, such as liking items on Facebook, can reveal information that people would assume to be private, such as “sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender” (273).
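The research Zuboff cites used regression models trained over millions of likes; the mechanism can be caricatured in a few lines. The weights below are entirely invented for illustration — the point is only that a weighted sum over content-free signals (which pages someone liked), squashed through a logistic function, yields a probability estimate for a trait the person never declared.

```python
import math

# Invented weights: each like-item contributes to the log-odds of some
# undeclared trait. Real models learn such weights from training data.
LIKE_WEIGHTS = {
    "political satire page": 0.8,
    "hunting gear page": -0.9,
    "folk music page": 0.3,
}

def trait_probability(likes: list[str]) -> float:
    """Logistic model: sum the weights of observed likes, squash to [0, 1]."""
    z = sum(LIKE_WEIGHTS.get(item, 0.0) for item in likes)
    return 1 / (1 + math.exp(-z))

p = trait_probability(["political satire page", "folk music page"])
print(round(p, 2))  # 0.75
```

Note that no message content is inspected: the inputs are bare facts of engagement, which is why the inference is "contentless" yet still revealing.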

This deep inferential surveillance probes who we are, in private and in public. It can be used to predict the behaviour of people in the categories to which we belong, and of us as individuals. The predictions can be close enough for advertisers, campaigners, and the state to incorporate features into apps, websites and social media feeds designed to influence our actions — to buy products, vote in certain ways, drive, navigate or modify how we manage our bodies and our health.

Behavioural surplus

This cycle of data flows, inferred information, and behaviour change provides the touch points for Zuboff’s critique of the digital age. I’ll inflect what follows to reveal her manner of critique.

Information is a commodity, and includes all those incidental keystrokes and micro-operations we make every day. They can be bundled and sold off to others. Zuboff’s strong point is that this surplus information is taken without our consent or knowledge and for the benefit of corporate others. Many of us are not even sure whether this information drain is happening, to what extent, which laws are being followed or broken, or how it affects us. She asserts that this commodified behavioural surplus “is about us, but it is not for us” (186) (her italics).

Without yielding up this information we would probably be excluded from the services on which we’ve come to depend. For Zuboff, our clicks, likes, comments, sensor feeds, locational coordinates and other signals of our behaviours online, offline and as we move around the Internet of Things, constitute this behavioural surplus. In Zuboff’s words:

“we are the objects from which raw materials are extracted and expropriated for Google’s prediction factories. Predictions about our behavior are Google’s products, and they are sold to its actual customers but not to us. We are the means to others’ ends” (94).

We consumers probably think such incidental bits of information are of little value compared with the services we access. But they can be harvested, mined, and processed to yield something of value — to corporations, but not to us.

Capitalism’s third wave

She calls the overall hegemonic system “surveillance capitalism” and situates it within the trajectory of capitalism’s (and modernity’s) development (and critique) since the nineteenth century.

The identification of surveillance capitalism’s culpability is a development on the Big Brother scenario in which the state monitors what we do. She observes that the state that comes closest to this digital hegemony is China, amplified no doubt through its putative introduction of the social credit system that maintains a point score of a citizen’s good and bad behaviour.

Outside of such all-pervasive government systems, for Zuboff the surveillance operations experienced in the UK, Australia, the Americas and elsewhere are driven by private enterprise — Google, Facebook, Amazon, Apple, Microsoft and ISPs. Of course, eager to increase their markets, these enterprises abet, exploit and interact with business practices in China as well.

Raising the alarm

Zuboff delivers her story in strident tones as if there’s a conspiracy in play. She says of the large digital corporations and their partners and dependents:

“They used declarations to take without asking. They camouflaged their purpose with illegible machine operations, moved at extreme velocities, sheltered secretive corporate practices, mastered rhetorical misdirection, taught helplessness, purposefully misappropriated cultural signs and symbols associated with the themes of the second modernity — empowerment, participation, voice, individualization, collaboration — and baldly appealed to the frustrations of second-modernity individuals thwarted in the collision between psychological yearning and institutional indifference” (192).

Much of Zuboff’s argument follows the line of the critical theorist. She identifies leaders in large-scale exploitation rackets, but it is really capitalism itself that holds the reins. Zuckerberg (Facebook) and Schmidt (Google) are amongst the array of agents that Zuboff subjects to informed scrutiny. Though they are direct beneficiaries, they are also minor components in the capitalist machine.

Everything is a resource

Surveillance capitalism reduces everything to the status of a resource. I recognise some of Heidegger’s critique of technological thinking in Zuboff’s analysis, probably via Heidegger’s protégé Hannah Arendt, whom Zuboff references.

“world, self, and body are reduced to the permanent status of objects as they disappear into the bloodstream of a titanic new conception of markets. His washing machine, her car’s accelerator, and your intestinal flora are collapsed into a single dimension of equivalency as information assets that can be disaggregated, reconstituted, indexed, browsed, manipulated, analyzed, reaggregated, predicted, productized, bought, and sold: anywhere, anytime” (211).

Zuboff couples that with behaviour modification via nudging, herding and conditioning. She reports that a developer she interviewed said:

“When people use our app, we can capture their behaviors and identify good and bad [ones]. Then we develop ‘treatments’ or ‘data pellets’ that select good behaviors. We can test how actionable our cues are for them and how profitable certain behaviors are for us” (296).

This represents a “darkening of the digital dream” (7) formulated initially to improve people’s lives — enabling choice about what to share or keep private.

Surveillance exceptionalism

She also characterises this development of capitalism as “exceptionalism”: the belief that the normal rules of behaviour don’t apply to one’s own group or circumstance, which is an exception to the rest and so needs to be considered or treated differently.

We see this faux humility amongst the professions or in academic discourse: e.g. “people have to understand that teaching architecture is fundamentally different to other kinds of education.” The OED provides a helpful definition that reveals the political dimension to exceptionalism:

“The theory that the peaceful capitalism of the United States constitutes an exception to the general economic laws governing national historical development, and esp. to the Marxist law of the inevitability of violent class warfare” (OED).

Zuboff identifies the role of surveillance exceptionalism. Because of the threats posed by global terrorism, heightened by the 11 September 2001 attacks in New York and elsewhere, the state felt the need to break privacy norms by introducing and increasing the surveillance of its citizens. The need for security trumps privacy considerations, constituting the grounds for an exception.

“The predictive insights thus acquired would constitute a world-historic competitive advantage in a new marketplace where low-risk bets about the behavior of individuals are valued, bought, and sold” (81).

In my next post I’ll review what Zuboff identifies as the conditions that have encouraged the rise of surveillance capitalism.

Reference

  • Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

About Richard Coyne

The cultural, social and spatial implications of computers and pervasive digital media spark my interest ... enjoy architecture, writing, designing, philosophy, coding and media mashups.
