2018 BCLT Privacy Lecture

11th Annual BCLT Privacy Lecture
by

Professor Shoshana Zuboff
“Privacy Must Fall”
The World According to Surveillance Capitalism

NOVEMBER 8, 2018
BANCROFT HOTEL
2680 BANCROFT WAY
BERKELEY, CA 94704

PROGRAM
3:00 – 3:30 P.M. – REGISTRATION
3:30 – 5:30 P.M. – LECTURE
5:30 – 6:30 P.M. – RECEPTION

PLEASE REGISTER HERE: https://berkeleylaw.wufoo.com/forms/w1xy56wz1t5z2ct/

Professor Shoshana Zuboff, author of The Age of Surveillance Capitalism (forthcoming 2018), joined the Harvard Business School faculty in 1981. One of the first tenured women at the school, she was the Charles Edward Wilson Professor of Business Administration. Her career has been devoted to the study of the rise of the digital, its individual, organizational, and social consequences, and its relationship to the history and future of capitalism.

SHOSHANA ZUBOFF’S THEORY OF SURVEILLANCE CAPITALISM
What is the problem? Not technology… not a corporation… The problem is surveillance capitalism, a new logic of capital accumulation that founds a burgeoning surveillance-based economic order. Surveillance capitalism originates in the unilateral claim to human experience as the source of free raw material for its hidden commercial practices, including the extraction of behavioral data, the fabrication of those data into predictions of human behavior, and the sale of those prediction products in new behavioral futures markets. Surveillance capitalism is a born-digital market form governed by novel and even startling economic imperatives that produce unprecedented asymmetries of knowledge and power. The stark new social inequalities that characterize this market project enable new forms of economic and social domination, while challenging human autonomy, elemental and established human rights, including the right to privacy, and the most basic precepts of a democratic society.

How should we frame this problem? The legal framework (privacy law) and the economic framework (antitrust) are each necessary but insufficient. The higher-order framework is social: the division of learning in society.

With the rise of industrialism in the late nineteenth century, the division of labor became the axial principle of economic and social order. The shift toward information economies means that the division of labor is subordinate to the division of learning as the axial principle of social ordering in the twenty-first century. Three essential questions shape the division of learning in society: Who knows? Who decides? Who decides who decides?

In the year 2000, many foresaw the Internet as a powerful force for the democratization of knowledge and therefore of a more just division of learning. Since then, surveillance capitalism’s abrupt rise to dominance alters this trajectory. Its ever-expanding global architectures of behavioral extraction and datafication continuously widen its advantages in the accumulation of proprietary stores of knowledge and the attendant instrumentarian power to act on that knowledge. Right now it is primarily the surveillance capitalists and their market operations who know, who decide who knows, and who decide who decides.

A century ago Durkheim warned of the dangers of a pathological division of labor in which the principles of organic solidarity were subverted by the private forces of industrial capital. Today the rise of surveillance capitalism represents the unauthorized privatization of the division of learning in society, which becomes the canvas for a bold, extractive, and still poorly understood market project. The result is a pathological division of learning in society that originates in and produces unprecedented inequalities of knowledge and power. This pathology thrives without legitimate political authorization. In the absence of meaningful democratic controls, surveillance capitalism’s economic imperatives dictate that privacy must fall––collateral damage of this hidden coup from above.

What is to be done? Surveillance capitalism and its discontents must be named in order to be tamed. This is our new collective challenge. Ultimately our fate and that of generations to follow depend upon asserting democratic control over the essential questions of the division of learning in society. Who knows? Who decides? Who decides who decides? How will we answer?

Commentary by:

Professor Maria Brincker, Assistant Professor of Philosophy, University of Massachusetts, Boston.
Maria Brincker is an Assistant Professor of Philosophy at the University of Massachusetts, Boston. She has published on a range of topics having to do with concrete aspects of our embodiment and our social and technological embeddedness, and how these dynamically shape our brains, minds and agency. She holds a PhD from the CUNY Graduate Center and was previously an Arts and Neuroscience Fellow at the Italian Academy at Columbia University.

Professor Brett Frischmann, Charles Widger Endowed University Professor in Law, Business and Economics, Villanova University Law School
Professor Frischmann is a renowned scholar in intellectual property and Internet law. Before coming to Villanova, he was director of the Cardozo Intellectual Property and Information Law Program (2011-2016) and the Microsoft Visiting Professor of Information and Technology Policy at Princeton University. Professor Frischmann’s work has appeared in leading scholarly publications, including the Columbia Law Review, the University of Chicago Law Review, and the Review of Law and Economics. He recently published Re-Engineering Humanity (Cambridge University Press 2018) with philosopher Evan Selinger.

Background reading:
Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, 30 Journal of Information Technology (2015) 75–89.

Secrets of Surveillance Capitalism, Frankfurter Allgemeine Zeitung (2016).

Event Co-sponsored by:

Berkeley Information Privacy Law Association (BIPLA) & STS Working Group

Next steps on Surveillance Capitalism: On Friday, November 9th, Professors Zuboff, Brincker, and Frischmann will facilitate a discussion on “How We Fight Surveillance Capitalism,” focusing on the forms of collective action that will interrupt, eliminate, outlaw, or moderate surveillance capitalism’s grip on the future. For details, visit this page.

Privacy Law in 2017 Panel

Attorneys Lindsey Tonsager from Covington & Burling and Rafey Balabanian from Edelson spoke at BIPLA’s Monday lunch talk about the current state of privacy law. The attorneys discussed recent cybersecurity issues and their regulatory implications, drawing on examples from privacy cases they had worked on at their respective firms. Near the end of the talk, Lindsey and Rafey also briefly described what a typical day in their privacy practices looks like.

StingRays and Police Surveillance

On Wednesday, March 8th, BIPLA hosted a talk with Professor Catherine Crump, Brian Hofer, and Steve Trush on the use of StingRays in police surveillance.

After a brief introduction by Mukund Rathi from BIPLA, Steve Trush started the panel off by explaining the capabilities and technical details of StingRays.

Professor Catherine Crump then discussed the constitutional issues, the rights implicated, and the difficulties of challenging the use of StingRays in criminal defense.

Finally, Brian Hofer talked about the achievements and current efforts of the Oakland Privacy Commission to regulate the use of StingRays by the police.


Privacy and Student Analytics: Student Success or Student Surveillance

Data analytics have great potential to help students achieve academic success. However, as often happens with massive data collection, they also raise serious privacy concerns. The privacy implications of using student analytics to support student success were the subject of a panel discussion organized by UC Berkeley to celebrate Data Privacy Day. As Lisa Ho, Campus Privacy Officer, noted, the panel was a place to discuss views and ideas rather than existing policies.

During the panel discussion, Jenn Stringer, Associate CIO for Academic Engagement at UC Berkeley, noted that one of the most important issues when collecting student data is allowing students to decide what information about them may be collected and how that information is used. In this regard, it seems crucial that students be given the option to opt in to the program. However, as noted during the discussion, an opt-in option might not be practically feasible. For example, it is easy to imagine employers starting to require access to a student’s data when making hiring decisions. If that were the case, students who decided not to permit their data to be collected or shared would be seriously disadvantaged on the job market. Thus, students might, as a practical matter, be forced to participate in a data analytics program. When thinking about the application of student analytics, we must ensure that students’ ability to make choices is real and not merely superficial.

Further, it must be considered how performance predictions can be reconciled with a student’s individual needs. As Arjun Singh, co-founder of Gradescope, asked: if, based on previous students’ performance, we can predict that a student will fail certain courses, does that mean we should preclude the student from taking those courses? In a world of predicted student performance, will there be room for experimenting and finding our own way? Amber Norori, a representative of the Student Technology Council, admitted that she failed two courses before she realized that she didn’t like what she was studying. If a student advisor had been able to predict Amber’s failure in those courses, would she have discovered her passion earlier, or would she never have discovered it at all? “Data collection can limit students’ freedom,” Amber concluded.

An additional question was raised as to whether privacy concerns could slow down innovation in the field of student data analytics. As Nils Gilman, Associate Chancellor and Chief of Staff to the Chancellor, pointed out, UC Berkeley’s diverse culture makes it somewhat more difficult to apply analytical solutions. Moreover, the University tries to institutionalize certain rules that private entities do not follow. This obviously imposes some constraints on the process of collecting and analyzing student data. But perhaps there is no need to rush when our privacy is at stake.

When talking about data privacy in the context of student analytics, there are more questions than answers. It is therefore important that students, faculty, and the school administration discuss their ideas and concerns. As all panel members agreed, we need to be sure that the use of data will be beneficial for students and consistent with their expectations. To paraphrase Jenn Stringer: the fact that the school can collect certain information about students does not mean that it should.

We hope you all enjoyed Data Privacy Day!

The Short-lived Adventure of India’s Encryption Policy

During his visit to Silicon Valley in September 2015, Indian Prime Minister Narendra Modi said his government was “giving the highest importance to data privacy and security, intellectual property rights and cyber security”. But a proposed national encryption policy circulated only a few days earlier would have achieved the opposite effect.

The policy was comically short-lived. After its poorly drafted provisions invited ridicule, it was swiftly withdrawn. But the government has promised to return soon with a fresh attempt to regulate encryption. The incident highlights a worrying assault on communications privacy and free speech in India, a concern compounded by the enormous scale of the country’s telecommunications and Internet market.

Even with only around 26 percent of its population online, India already has the world’s second-largest Internet user base, having recently overtaken the United States. The number of Internet users in India is set to grow exponentially, spurred by ambitious governmental schemes to build a ‘Digital India’ and a country-wide fiber-optic backbone. There will be a corresponding increase in the use of the Internet for communication and commerce.

Encryption on the Internet

Encryption protects Internet users from invasions of privacy, theft of data, and other attacks. A cipher algorithm, driven by a secret key, encodes ordinary data (plaintext) into an unintelligible form (ciphertext), which can be decrypted only with the key. The ciphertext can be intercepted, but it remains unintelligible to anyone who does not hold the key.
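To make the plaintext, ciphertext, and key vocabulary concrete, here is a minimal sketch of symmetric encryption in Python. The third-party cryptography package used below is an illustrative assumption; the article names no particular tools.

# Symmetric encryption: one secret key both encrypts and decrypts.
# Uses the Python "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # the secret key; whoever holds it can decrypt
cipher = Fernet(key)

plaintext = b"meet me at the usual place"
ciphertext = cipher.encrypt(plaintext)  # unintelligible without the key
print(ciphertext)                       # safe to intercept; reveals nothing useful

recovered = cipher.decrypt(ciphertext)  # only a key holder can recover the plaintext
assert recovered == plaintext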

There are several methods of encryption. SSL/TLS, a family of encryption protocols, is commonly used by major websites. But while some companies encrypt sensitive data, such as passwords and financial information, in transit across the Internet, much of the data at rest on their servers is unencrypted. Email providers, for instance, regularly store plaintext messages on their servers. As a result, governments simply demand and receive backdoor access to information directly from the companies that provide these services.
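As a rough illustration of encryption in transit, the sketch below opens a TLS connection using only Python's standard library; the host name is arbitrary. The point is that the bytes on the wire are encrypted, even though the server at the other end still handles, and may store, the plaintext.

# TLS protects data in transit; the receiving server still sees the plaintext.
import socket
import ssl

context = ssl.create_default_context()       # verifies the server's certificate

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # Everything sent through tls_sock is encrypted on the wire.
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.version())            # e.g. 'TLSv1.3'
        print(tls_sock.recv(200))            # decrypted response bytes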

On the other hand, proper end-to-end encryption – full encryption from the sender to recipient, where the service provider simply passes on the ciphertext without storing it, and deletes the metadata – will defeat backdoors and protect privacy, but may not be profitable. End-to-end encryption alarms the surveillance establishment, which is why British Prime Minister David Cameron wants to ban it, and many in the US government want Silicon Valley companies to stop using it.

Communications privacy

Instead of relying on a company to secure communications, the surest way to achieve end-to-end encryption is for the sender to encrypt the message before it leaves her computer. Since only the sender and intended recipient have the key, even if the data is intercepted in transit or obtained through a backdoor, only the ciphertext will be visible.

For almost all of human history, encryption relied on a single shared key; that is, both the sender and the recipient used a pre-determined key. But, like all secrets, a key becomes less secure the more people who know it. From the 1970s onwards, revolutionary advances in cryptography enabled the generation of a pair of dissimilar keys, one public and one private, which are uniquely and mathematically linked. This is asymmetric, or public key, cryptography, in which the private key remains an exclusive secret. It offers the strongest protection for communications privacy because it returns autonomy to the individual and is immune to backdoors.
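Here is a minimal sketch of public key encryption, again assuming the Python cryptography package: the recipient generates the linked key pair, publishes the public key, and never shares the private key; the sender encrypts before the message leaves her machine, so intermediaries only ever see ciphertext. The key size and padding choices are illustrative, not drawn from the article.

# Asymmetric (public key) encryption with RSA.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a mathematically linked key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()        # safe to publish widely

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the recipient's *public* key before the message
# leaves her computer; anyone intercepting it sees only ciphertext.
message = b"for your eyes only"
ciphertext = public_key.encrypt(message, oaep)

# Only the recipient's private key, which never travels, can decrypt.
assert private_key.decrypt(ciphertext, oaep) == message

In practice, systems typically combine the two approaches: a freshly generated symmetric key encrypts the message itself, and the recipient's public key encrypts that symmetric key.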

For those using public key encryption, Edward Snowden’s revelation that the NSA had cracked several encryption protocols, including SSL/TLS, was worrying. Brute-force decryption, the use of supercomputers to mathematically attack keys, calls the integrity of public key encryption into question. But because the difficulty of brute-force code-breaking grows exponentially with key length, generating longer keys should, notionally, thwart the NSA for now.
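Some back-of-the-envelope arithmetic shows why key length matters: every additional bit doubles the brute-force search space. The guessing rate assumed below, one trillion keys per second, is purely hypothetical.

# Exponential growth of the brute-force search space with key length.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
guesses_per_second = 10**12              # hypothetical attacker speed

for bits in (40, 128, 256):
    keyspace = 2 ** bits                 # number of possible keys
    years = keyspace / guesses_per_second / SECONDS_PER_YEAR
    print(f"{bits:>3}-bit key: {keyspace:.2e} possible keys, "
          f"about {years:.2e} years to exhaust")

# A 40-bit key space is exhausted in roughly a second at this rate;
# a 128-bit key space would take on the order of 1e19 years.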

The crypto-wars in India

Where does India’s withdrawn encryption policy lie in this landscape of encryption and surveillance? It is difficult to say: the policy was so badly drafted that simply understanding it was a challenge. It could have been a ham-handed response to commercial end-to-end encryption, which major providers such as Apple and WhatsApp are adopting in response to consumer demand. But curiously, that does not appear to be the case, because the government later exempted WhatsApp and other “mass use encryption products”.

The Indian establishment has a history of battling commercial encryption. From 2008, it wrestled with BlackBerry for backdoor access to the company’s encrypted communications, coming close to banning the service; the standoff dissipated only once the company lost its market share. There have been similar attempts to force Voice over Internet Protocol providers, including Skype and Google, to fall in line. And a new push is underway to regulate over-the-top content providers, including US companies.

The policy could represent a new phase in India’s crypto-wars. The government, emboldened by the sheer scale of the country’s market, might press an unyielding demand for communications backdoors. The policy made no bones about this desire: it sought to bind communications companies through mandatory contracts, regulate key sizes and algorithms, compel the surrender of encryption products including “working copies” of software (the key generation mechanism), and more.

The motives of regulation

The policy’s deeply intrusive provisions manifest a long-standing effort of the Indian state to dominate communications technology unimpeded by privacy concerns. From wiretaps to Internet metadata, intrusive surveillance is not judicially warranted, does not require the demonstration of probable cause, suffers no external oversight, and is secret. These shortcomings are enabling the creation of a sophisticated surveillance state that sits ill with India’s constitutional values.

Those values are being steadily besieged. India’s Supreme Court is entertaining a surge of clamorous litigation seeking to check an increasingly intrusive state. Only a few months ago, the Attorney-General – the government’s foremost lawyer – argued in court that Indians did not have a right to privacy, relying on case law from the 1950s that permitted invasive surveillance. Encryption, which can inexpensively lock the state out of private communications, alarms the Indian government, which is why it has skirmished with commercially available encryption in the past.

On the other hand, the conflict over encryption is fueled by inconsistent laws. Telecom licensing regulations restrict Internet Service Providers to 40-bit symmetric keys, a primitively low standard; stronger encryption requires permission and, presumably, surrender of the shared key to the government. Securities trading on the Internet requires 128-bit SSL/TLS encryption, while the country’s central bank is pushing for end-to-end encryption for mobile banking. Seen in this light, the policy could simply be an attempt to rationalize an uneven field.

Encryption and freedom

Perhaps the government was trying to restrict individuals’ use of public key encryption and Internet anonymization services, such as Tor or I2P. India’s telecommunications minister stated: “The purport of this encryption policy relates only to those who encrypt.” This was not particularly illuminating. If the government wants to pre-empt terrorism – a legitimate duty – this approach is flawed: regardless of the law’s command, arguably no terrorist will disclose her key to the government. Besides, since India’s few users of anonymization services are in any case targeted for special monitoring, it would be more productive for the surveillance establishment to maintain the status quo.

This leaves legitimate encrypters – businesses, journalists, whistleblowers, and innocent privacy enthusiasts. For this group, impediments to encryption interfere with their ability to communicate freely. There is a close link between encryption and the freedom of speech and expression, a fact acknowledged by David Kaye, Special Rapporteur of the UN Human Rights Council, of which India is a member. Kaye notes: “Encryption and anonymity are especially useful for the development and sharing of opinions, which often occur through online correspondence such as e-mail, text messaging, and other online interactions.”

This is because encryption affords privacy, and privacy promotes free speech – a relationship reiterated by Frank La Rue, a previous UN Special Rapporteur, in 2013. Surveillance, on the other hand, has a “chilling effect” on speech. Justice Subba Rao’s famous dissent in the Indian Supreme Court presciently connected privacy and free speech in 1962:

The act of surveillance is certainly a restriction on the [freedom of speech]. It cannot be suggested that the said freedom…will sustain only the mechanics of speech and expression. An illustration will make our point clear. A visitor, whether a wife, son or friend, is allowed to be received by a prisoner in the presence of a guard. The prisoner can speak with the visitor; but, can it be suggested that he is fully enjoying the said freedom? It is impossible for him to express his real and intimate thoughts to the visitor as fully as he would like. To extend the analogy to the present case is to treat the man under surveillance as a prisoner within the confines of our country and the authorities enforcing surveillance as guards. So understood, it must be held that the petitioner’s freedom under [the] Constitution is also infringed.

Kharak Singh v. State of Uttar Pradesh (1964) 1 SCR 332, pr. 30.

Perhaps the policy expressed the government’s discomfort at individual encrypters escaping surveillance, like free agents evading the state’s control. How should the law respond to this problem? Daniel Solove says the security of the state need not compromise individual privacy. On the other hand, as Ronald Dworkin influentially maintained, the freedoms of the individual precede the interests of the state.

Security and trade interests

However, even when assessed from the perspective of India’s security imperatives, the policy would have had harmful consequences. It required users of encryption, including businesses and consumers, to store plaintext versions of their communications for ninety days and to surrender them to the government on demand. This outrageously ill-conceived provision would have created real ‘honeypots’ (originally, decoy servers set up to lure hackers) of unencrypted data, ripe for theft. Note that India does not have a data breach law.

The policy’s demand that encryption companies register their products and hand working copies of their software and encryption mechanisms to the Indian government would have flown in the face of trade secrecy and intellectual property protection. The policy’s hurried withdrawal was a public relations exercise on the eve of Prime Minister Modi’s visit to Silicon Valley. It was successful: Modi encountered no criticism of his government’s visceral opposition to privacy, even though the policy would have severely disrupted the business practices of US communications providers operating in India.

Encryption invites a convergence of state interests as well: both countries want to control it. Last month’s joint statement from the US-India Strategic and Commercial Dialogue pledges “further cooperation on internet and cyber issues”. This innocuous claim masks a robust information-gathering and -sharing regime. There is no guarantee against India sharing encryption mechanisms or intercepted communications.

The government has promised to return with a reworked proposal. It would be in India’s interest for this to be preceded by a broad-based national discussion on encryption and its link to free speech, privacy, security, and commerce.