Interview for City of Asylum, Pittsburgh (USA, 18 Dec. 2015)

One of the most remarkable parts of my 2015 lecture tour in the USA was my stay at City of Asylum, a sanctuary for exiled and persecuted writers in Pittsburgh. Before my talk, I had the opportunity to discuss privacy, platforms, and mass surveillance with Caitlyn Christensen. The result of our conversation is now published in the online magazine Sampsonia Way, in the series The Writer’s Block.

The Writer’s Block Transcripts: A Q&A with Antonio Casilli

In the wake of Edward Snowden’s 2013 revelation of global surveillance programs run by the NSA, a global debate has emerged about mass surveillance, individuals’ rights to privacy, and national security. Despite claims that privacy is becoming nonexistent, digital researcher Antonio Casilli maintains that Internet users and civil society organizations are engaged in a culture war with the digital industry and government agencies over the issues of confidentiality, anonymity, and secrecy – a war that they could very well win.

Antonio Casilli is associate professor of Digital Humanities at Telecom ParisTech (Telecommunication College of the Paris Institute of Technology) and the author or co-author of five books on subjects that include digital labor, the Internet and social structures, and communicational violence. In October of 2015 he came to City of Asylum to present his research on online privacy and the impact of the Internet on the private sphere.

In the book Against the Hypothesis of the End of Privacy (Springer, 2014), co-authored with Yasaman Sarabi and Paola Tubaro, he argues for the resiliency of online privacy and examines the impact the Internet is having on our lives. Before he presented his research, Sampsonia Way interviewed Antonio Casilli about his findings and what they mean for digital consumers.

What is the Internet doing to our private sphere?

It is redefining the way we understand privacy. One of the main points of my research is countering the common-sense notion that there is no more privacy on the Internet.

The most important part of my and my colleagues’ work on these topics is to help users become actors in their own privacy and to gain control over their data, so that they are not merely the puppets of big platforms and of data brokers, the big companies that sell this data on the market for advertising and, in some cases, for political repression. There is today a growing market for surveillance and the restriction of civil liberties. In this market, the parties involved are, of course, companies from the private sector on one side and governments on the other. We have to take this into account so that we can help users develop strategies to counter this kind of situation and to become aware of these risks to their privacy. What matters most is to show them that privacy is not dead. This is not a lost battle. On the contrary, it’s an ongoing cultural and political war, and we have to create the conditions to empower civil society in this war.

There is a prominent narrative that people are not concerned with privacy, particularly the younger generations of Internet users who are accused of “oversharing” on social media. Do you think that people really are not concerned with privacy?

On the contrary, people are extremely concerned with their privacy. Concern for privacy has become far more common than in the past. In the pre-Internet era, privacy was something that only celebrities or politicians were entitled to, because they were the people assumed to have the most social capital. They were also, of course, the richest in terms of financial capital. With the Internet, what has happened is a democratization of this concern for privacy, and of course more and more people are interested.

Think about what is happening in Europe with the Right to be Forgotten, which is of course a very controversial issue. A year and a half ago, a contested decision by the European Court of Justice implemented the right to be forgotten. After that, (each week) 250,000 people asked Google to be de-referenced, meaning not featured in its search results. This tells us about the kinds of concerns we have in some parts of the globe.

It is unequally distributed, of course. There are some countries that are more concerned with these issues and others that are under a certain ideological cover that pushes people not to be attentive to their privacy. Then again, we have lines that separate people when it comes to privacy: their expectations of privacy fracture along race, gender, age, and class lines. All of these are important factors in deciding who is more or less attentive to his or her privacy. Think about how privacy can be construed differently by somebody who comes from a working-class background and who is under constant surveillance from his or her employer and from government officials, and about the way such people are aware of and deal with this situation of constant surveillance. They have to quickly develop tactics and strategies to counter it.

They internalize these tactics and eventually develop alternative identities or pseudonymous and anonymous ways of interacting. The kinds of precautions they have to take in order to have a free exchange of information and opinions in a completely controlled and constrained environment are characteristic of the Internet today.

You mentioned two strategies: anonymous identities and pseudonyms. What are some other ways that people control their information on the Internet?

There are many, many ways, of course. There are advanced tools like cryptography, but these are mostly for people who are more proficient in terms of their computer skills.

On the other side, you have a democratization of these kinds of tools. Think about Tor, software that encrypts and anonymizes communication: it is pretty easy to use and relatively effective when it comes to protecting anonymity. In some other cases there are information tactics, like creating spaces where you can interact anonymously. And of course, the most common tactic consists in using pseudonyms to disguise your identity. Despite its pervasiveness, this is far from being an effective tool for protecting free speech.
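
As a concrete illustration of how accessible a tool like Tor has become, here is a minimal sketch, not a definitive setup, that routes web requests through a locally running Tor daemon from Python. It assumes Tor is installed and listening on its default SOCKS port 9050, and that the requests library is installed with SOCKS support (pip install requests[socks]).

```python
import requests

# Assumes a local Tor daemon exposing its default SOCKS proxy on 127.0.0.1:9050.
# "socks5h" (rather than "socks5") makes DNS resolution also go through Tor,
# so lookups do not leak to the local network.
TOR_PROXY = "socks5h://127.0.0.1:9050"

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# The request now exits through the Tor network, hiding the client's IP
# address from the destination server.
response = session.get("https://check.torproject.org/")
print("Using Tor" if "Congratulations" in response.text else "Not using Tor")
```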

There are some other informal activities, like obfuscation or simple data evasion. Obfuscation means you create a lot of noise around the information you are sharing. You don’t share one picture, but 1,000 pictures. Only one of them is relevant, the one you actually want to share. The rest is noise, there to create some kind of barrier between you and the people who might censor you or track what you are saying online.
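
A rough sketch of this noise-based strategy, under the assumption that the decoy items are plausible enough to blend in with the real one (existing obfuscation tools, such as the TrackMeNot browser extension that issues decoy search queries, are far more careful about this):

```python
import random

def obfuscate(real_item, decoy_pool, noise_count=999):
    """Hide one meaningful item inside a batch of randomly chosen decoys.

    Illustrative only: the protection depends entirely on the decoys being
    indistinguishable from the real item to an observer.
    """
    batch = random.sample(decoy_pool, k=min(noise_count, len(decoy_pool)))
    batch.append(real_item)
    random.shuffle(batch)
    return batch

# Hypothetical example: one relevant photo buried among generic decoys.
decoys = [f"stock_photo_{i}.jpg" for i in range(2000)]
upload_batch = obfuscate("relevant_photo.jpg", decoys)
print(len(upload_batch), "items shared; only one of them matters.")
```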

Finally we have data evasion, which means that we have to come up with ways of not leaving traces of our activities and choices online, like footprints. Think about people who do not want to be the target of personalized advertising and the way they can use pre-paid cards or computers in public spaces to buy things, so that their customer profile is not completely personalized and they do not get bombarded by ads for the same kind of product or the same kind of offers, which are, in some cases, extremely dangerous in the long term. They can result in redlining, a bad credit score, or refusal of access to services like health care, housing, and so on.

Yes, everybody wants to be targeted by one ad eventually, once or twice, who knows. But nobody wants to become the constant target of advertising attention, to say nothing of the kinds of interactions or data crossings that might take place between the companies that sell you products and insurance companies or banks, and the way they create reputation rankings for people. These can follow you for the rest of your life. The decision you make today to leave your Facebook page accessible to anybody can eventually turn into very invasive commercial profiling, which is also insurance profiling and credit-score profiling.

Eventually, these things can determine individuals’ lives for a number of years. In 30 years you will be haunted by the decision you made to share with people the pictures of all the pizzas you ate. Because if you take a picture of the pizza you eat today, and then the next day, and then the next day, and then in 30 years you develop a cardiovascular disease, an insurance company might say to you, “Well, okay, you did develop this disease, but I won’t pay you because you’ve been eating these pizzas for 30 years and so you took this health risk.” This is the kind of situation that many people face today, and many more people will face in the future if they do not develop effective strategies to protect their privacy, according to the context and according to the platform that they are on.

What are the revolutionary possibilities of developing strategies to protect yourself?

There are many possibilities that might be associated with social justice, social change, and political innovation. Some of those have resulted in revolutionary changes in many countries. Think about what happened from 2010 to 2013, mainly in the Middle East and North Africa, and what failed to happen in some places in Europe, such as Spain and Greece (movements loosely associated with the indignados). During the Arab Spring, protesters quickly had to develop strategies that took into account the surveillance that governments were implementing and enforcing on the population, as well as the need for a free exchange of information to coordinate and, in some cases, to develop political directions.

And of course, to avoid repression. In these countries, the main activity was basically hacking and cryptography, and then the development of alternative communication platforms. In some cases, and I’m thinking about Egypt, for instance, the situation became so critical that Mubarak, the former dictator, decided to switch off the Internet altogether. That was the moment when the dynamic of political contention became truly revolutionary, and the government was eventually toppled. Not long after, Mubarak was forced to step down.

How did they do it? Well, they did it, first of all, because not everything was a “Facebook revolution.” That is something that needs to be stressed. Facebook actually had a very limited role in it as a platform. Social media platforms like Facebook were more focused on raising international public awareness about what was going on in Egypt, but political activists in the country hardly used them to coordinate protests. Only around 31 percent of the people living in Egypt had access to the Internet at the time, and only a portion of those were actually communicating on Facebook. Alternative movements and groups of hackers around the world were helping those activists communicate on alternative lines. They put in place dial-up lines, or alternatives to the Internet itself. In some cases you actually have to create your own infrastructure. In other cases you have to develop your own codes of communication. Sometimes the language you speak, or the coded language you use while you communicate online, can actually be used to protect your privacy, because you are creating your own space for free speech by restricting your audience through the very language that you use.

I’m interested in what you said about the Egyptian Spring becoming a revolution when the government shut down the Internet. Is this a pattern that you see unfolding elsewhere?

I see that there is a link between the restriction of communication and the explosiveness of the situation that you might face. Part of my research is about riots. Riots are not always revolutionary riots; sometimes they are extremely reactionary in nature, sometimes they are actually associated with social change. What happened in England in 2011, for instance, was reminiscent of riots associated with movements for social change and social justice. This is because of the places where those English riots took place, in disenfranchised and impoverished neighborhoods, and because they did not emerge in a vacuum. They were associated with other movements and other protests that took place in the months before.

When civil violence exploded in the UK during the summer of 2011, the government recommended censoring social media so that the rioters could be stopped from coordinating and restrained from looting. We did an evaluation of these censorship policies, and my colleague Paola Tubaro and I came up with a clear pattern indicating that if you restrict the use of social media, the situation becomes explosive. You have mass violence. This is basically because if you restrict the vision that individuals have of the situation around them, if they don’t know what’s happening in another neighborhood or in another town or in another region, they lose awareness and become more ready to take risks and to use violence in their political protest. This is something that can be observed in some cases. I don’t know and I cannot say if it is a general tendency, but there are instances and examples indicating that restriction of the Internet and censorship can actually make the situation extremely volatile.
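
The mechanism described here, people deciding whether to join unrest based on the limited slice of the situation they can see, lends itself to a toy agent-based sketch. The following is purely hypothetical and not the model used in the actual study: each simulated agent compares its grievance against an arrest risk estimated only from the peers and police presence visible within its “vision,” so narrowing that vision changes how the crowd behaves.

```python
import math
import random

def run_unrest_model(vision, n_agents=1000, n_cops=80, steps=50,
                     legitimacy=0.6, threshold=0.1, k=2.3, seed=42):
    """Toy sketch: an agent turns 'active' when its grievance outweighs the
    arrest risk it estimates from the small sample of the population it can
    see. All parameters and formulas are illustrative guesses."""
    rng = random.Random(seed)
    hardship = [rng.random() for _ in range(n_agents)]
    risk_aversion = [rng.random() for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(steps):
        for i in range(n_agents):
            grievance = hardship[i] * (1.0 - legitimacy)
            # The agent only observes `vision` randomly chosen peers: a narrow
            # vision gives a noisier picture of how many others are already
            # active and of how likely arrest is.
            sample = rng.sample(range(n_agents), k=min(vision, n_agents))
            visible_active = sum(active[j] for j in sample) + 1  # include self
            visible_cops = n_cops * len(sample) / n_agents       # expected cops in view
            arrest_prob = 1.0 - math.exp(-k * visible_cops / visible_active)
            active[i] = (grievance - risk_aversion[i] * arrest_prob) > threshold
    return sum(active)

# Compare a well-informed population with one whose "vision" is restricted.
print("wide vision  :", run_unrest_model(vision=200))
print("narrow vision:", run_unrest_model(vision=5))
```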

What is the most important aspect of your research for the average social media user to understand?

From a statistical point of view, the average social media user does not exist. From the point of view of social science, we now face a situation which is completely different from the one demographers, sociologists, or anthropologists faced 20 or 30 years ago, when they were conducting big censuses. When it comes to the Internet and digital platforms in general, the average does not exist, because everything points to personalization and to the micro-segmentation of small parts of the population. It’s so multidimensional that the average user simply does not exist. This is a big methodological caveat that I had to introduce.

When you think about the users of mainstream digital platforms like Facebook or Airbnb, they are basically always subject to some kind of injunction to be fair, to be honest, to be transparent in their interactions. This is a moral discourse that comes with the platforms we now use, whether they are social media sharing platforms or on-demand platforms. The design of such services is predicated on the assumption that users engage in frank and open participation, and this means being transparent and being honest. Of course, beyond this moral discourse there is an ideological discourse that serves the type of business model these social media and digital platforms want to implement. This is where privacy factors in: it is something that digital platforms are interested in removing from the equation. They do not want people to be attentive to their privacy or to care about their personal information, so that perfect and frictionless sharing can take place; and frictionless sharing means data extraction. It means that personal data are taken and exploited from a financial point of view, but also from an informational point of view.

This kind of moral and ideological discourse that we face today pushes every user, to a different degree, to overshare. And again, what I am saying is not a normative discourse. I’m not saying that they should not do it, but that the conditions today are extremely risky for the mainstream user. Even in the terms of service of any digital platform there is no clear indication of the final use of those data. Who is going to buy them? Who is going to use them? Who is going to cross-reference them with other data? Something which is completely anodyne and mundane if I share it with my friends becomes extremely dangerous if I share it with my banker, with my physician, whatever.

Again, the fact is that these platforms operate by adopting this kind of moral discourse: “You have to be open, you have to be transparent.” They are basically trying to implement a system of frictionless data extraction. Of course, it is profitable to them. This is the part of social media that the mainstream user usually does not take into account. This is also due to the kind of dishonesty and lies that social media CEOs constantly convey: “Okay, your data will not be looked at.” Think about what happened to Snapchat a couple of years ago, when the Federal Trade Commission (FTC) eventually noticed that what they were promising was simply not true. The commercial promise of that platform was that your pictures would be erased after a few seconds and that only the people you chose as viewers would see them. On the contrary, what the Federal Trade Commission found was that Snapchat’s contents and all the metadata were exploited to an extent unimaginable to its users.

How are social media platforms able to establish control over their users through the illusion of privacy settings?

The point is that today’s platforms, because they are so encroached upon by advertising, are extremely costly in terms of data. Meaning that they ask you for a lot of data just to function normally, nothing special. Think about Facebook’s real-name policy. Why do I have to use my civil identity, the name that is actually written on my ID card? Basically because governments and companies want to trace you across platforms. They want to know if the person who is chatting on Facebook is the same person whose credit card is used for some kind of dating service, or whether that same person has a certain social security number. They want to cross-reference all this information. This is why they always come up with data extraction ruses. Everything on platforms such as Facebook can be construed as a glorified commercial questionnaire. Everything is always about what you like, what you would buy, and eventually what your friends like and what they would buy. Do they share your interest in obscure music or in certain special movies, and so on? If you bought this, would they buy that?

It’s also related to political control; I don’t want to stress only the commercial part of it. Some of this data is sold to governments, and I’m thinking about your government and mine. They buy this data, and they actually publish the price they pay for it. If they want to buy your emails, they have to pay a certain amount of money. In some cases (and this is where the situation becomes extremely disturbing and dangerous from a political point of view) they do not pay. This is usually when a surveillance scandal breaks, like the one originated by Edward Snowden in 2013, when Google eventually discovered that some US authorities were just pulling its data like it was an open bar. They were getting into its servers and taking the data without undergoing any kind of control or audit of what they were doing, and so on and so on. Ultimately, leaks like the ones associated with Chelsea Manning, Julian Assange, or Edward Snowden are important because they show the extent of this military-industrial complex of surveillance in which social media, digital platforms, advertising, and government agencies are involved. They are all in bed together. And the bed is called mass surveillance.

How are the issues of privacy that we face now different from those we faced in the past?

I think that today we are facing a change in the paradigm of privacy. What our grandparents used to call privacy was something they inherited from a legal and judicial tradition that dates back to the 19th century. If you think about the first definitions of privacy, especially in the US, you have to look back to 1890, when two famous lawyers, Louis Brandeis and Samuel Warren, published a seminal article called “The Right to Privacy.” “The Right to Privacy” was based on a model that my colleagues and I define as “privacy as penetration,” meaning something that can be penetrated, that can be invaded by malevolent government agencies, criminals, or anyone who wants to grab the core of sensitive information that surrounds you. In this model, the same information was considered sensitive for everyone: health, sexual orientation, political preferences, religious opinions or beliefs.

What happens today is that we do not have a clear definition of what sensitive information is. This definition changes from one person to another, from one country to another, from one platform to another. If I’m online, I want to share my medical records with my doctor if I’m on a health-oriented platform, but I don’t want to share them with my friends on mainstream social media, not on Twitter for instance. I want to share my religious beliefs or sexual preferences if I am on a dating site, but I don’t want to share them with my professional contacts on LinkedIn.

So the fact of having to constantly negotiate across all of these platforms puts us in a completely different situation. We call this new paradigm of privacy “privacy as negotiation.” It’s a negotiation between any one of us, between you and me, between you and your friends, and so on and so on. Of course, it is also a negotiation with corporate actors and with government bodies, and we have to take into account all these people and all these institutions whenever we decide to publish a picture of our cat playing the piano, for example. We always have to take into account the consequences that this type of post or message online might create. Again, this is a change of privacy as a notion, but also a change in our collective attitudes toward privacy.

This also goes to show that we are not facing the end of privacy. On the contrary, it is a transformation of privacy, such a radical transformation that sometimes we don’t even recognize certain behaviors as privacy behaviors, that is, privacy-preserving, privacy-protecting behaviors. Sometimes, when I publish something under a pseudonym, I am guarding my privacy. These actions now have to be taken into account, even from a legal point of view, and considered privacy-protecting behaviors.

And yet today the law is not up to date. Policymakers have probably yet to understand that people who are communicating anonymously online or exchanging information are not always cybercriminals or dangerous hate-speech partisans. Sometimes they are just people who don’t want to be the constant targets of government surveillance or commercial surveillance.

So Internet users have some autonomy over the status of their privacy in the digital sphere?

We have some autonomy. What happens today is that in this cultural war around privacy, nobody has won yet, especially not the digital platforms. On the contrary, they are in a very bad situation. From a legal point of view there is definitely a counterattack going on. I’m thinking about Europe, because that’s where I live. What is happening there is that a number of judicial decisions over the last few years go in the direction of forcing social media companies, and some governments, to be more respectful of personal data and more accountable for what these platforms do with it.

Think about the European Court of Justice’s recent ruling on Safe Harbor. Of course it was presented in the media as some disastrous decision. But it was actually a way of drawing the attention of lawmakers to what a foreign country (namely the US) does with the data of European citizens. And now investors and owners of social media and digital platforms have to comply with existing privacy legislation if they want to continue doing business in Europe.

Think about the European Right to be Forgotten decision a few years ago. Consider all the class actions that have been launched against big platforms since then. What do they do with our content, with the content we put online? Are they able, and should they be able, to sell our pictures and turn those pictures into some kind of testimonial or endorsement for products? Or should they be able to sell our data to create big databases and then sell those to data brokers, humongous international organizations that aggregate our data internationally and create personalized profiles?

These data brokers are impressive and completely unregulated, so we are now starting to think that regulation is necessary. This has happened in many other sectors in the past. Insurance companies had to be regulated. Energy companies had to be regulated too. Eventually it happens in every industrial sector. The digital platform industry is no different from that point of view.