
Interview for City of Asylum, Pittsburgh (USA, 18 Dec. 2015)

One of the most remarkable parts of my 2015 conference tour in the USA was my stay at City of Asylum, a sanctuary for exiled and persecuted writers in Pittsburgh. Before my conference, I had the opportunity to discuss privacy, platforms, and mass surveillance with Caitlyn Christensen. The result of our conversation is now published in the online magazine Sampsonia Way, in the series The Writer’s Block.

The Writer’s Block Transcripts: A Q&A with Antonio Casilli

In the wake of Edward Snowden’s 2013 revelation of global surveillance programs run by the NSA, a global debate has emerged about mass surveillance, individuals’ rights to privacy, and national security. Despite claims that privacy is becoming nonexistent, digital researcher Antonio Casilli maintains that Internet users and civil society organizations are engaged in a culture war with digital industry and government agencies over the issues of confidentiality, anonymity, and secrecy – a war that they could very well win.

Antonio Casilli is associate professor of Digital Humanities at Telecom ParisTech (Telecommunication College of the Paris Institute of Technology) and the author or co-author of five books on subjects that include digital labor, the Internet and social structures, and communicational violence. In October of 2015 he came to City of Asylum to present his research on online privacy and the impact of the Internet on the private sphere.

In the book Against the Hypothesis of the End of Privacy (Springer, 2014), co-authored with Yasaman Sarabi and Paola Tubaro, he argues for the resiliency of online privacy and examines the impact the Internet is having on our lives. Before he presented his research, Sampsonia Way interviewed Antonio Casilli about his findings and what they mean for digital consumers.

What is the Internet doing to our private sphere?

It is redefining the way we understand privacy. One of the main points of my research is countering the common-sense notion that there is no privacy left on the Internet.

The most important part of my and my colleagues’ work on these topics is helping users become actors of their own privacy and gain control over their data, so that they are not merely the puppets of big platforms and of data brokers, the large companies that sell this data on the market for advertising and, in some cases, for political repression. There is today a growing market for surveillance and the restriction of civil liberties. The parties involved in this market are, of course, companies from the private sector on one side and governments on the other. We have to take this into account so that we can help users develop strategies to counter this situation and become aware of these risks to their privacy. What matters most is to show them that privacy is not dead. This is not a lost battle. On the contrary, it is an ongoing cultural and political war, and we have to create the conditions to empower civil society in this war.

There is a prominent narrative that people are not concerned with privacy, particularly the younger generations of Internet users who are accused of “oversharing” on social media. Do you think that people really are not concerned with privacy?

On the contrary, people are extremely concerned with their privacy. Concern for privacy has become far more widespread than in the past. In the pre-Internet era, privacy was something only celebrities or politicians were entitled to, because they were the people assumed to have the most social capital. Of course, they were also the richest in terms of financial capital. With the Internet, what happens is a democratization of this concern for privacy, and more and more people are interested.

Think about what is happening in Europe with the Right to be Forgotten, which is of course a very controversial issue. A year and a half ago, a controversial decision by the European Court of Justice implemented the right to be forgotten. After that, 250,000 people each week asked Google to be de-referenced, meaning not featured in its search engine. This tells us something about the kinds of concerns we have in some parts of the globe.

It is unequally distributed, of course. Some countries are more concerned with these issues, and others are under a certain ideological cloak that pushes people not to be attentive to their privacy. Then again, there are lines that separate people when it comes to privacy: their expectations of privacy fracture along race, gender, age, and class lines. All of these are important factors in determining who is more or less attentive to his or her privacy. Think about how privacy can be construed differently by somebody who comes from a working-class background and is under constant surveillance from his or her employer and from government officials, and how such people become aware of and deal with this situation of constant surveillance. They have to quickly develop tactics and strategies to counter it.

They interiorize these tactics and eventually develop alternative identities or pseudonymous and anonymous ways of interacting. The kind of precautions they have to take in order to have a free exchange of information and opinions in a completely controlled and constrained environment is characteristic of the Internet today.

You mentioned two strategies: anonymous identities and pseudonyms. What are some other ways that people control their information on the Internet?

There are many, many ways, of course. There are advanced tools like cryptography, but these are mostly for people who are more proficient in terms of computer skills.

On the other side, you have a democratization of these kinds of tools. Think about Tor, software that encrypts communication: it is pretty easy to use and relatively efficient when it comes to protecting anonymity. In some other cases there are informal tactics, like creating spaces where you can interact anonymously. And of course, the most common tactic consists in using pseudonyms to disguise your identity. Despite its pervasiveness, this is far from being an effective tool to protect free speech.

There are some other informal activities, like obfuscation or simple data evasion. Obfuscation means you create a lot of noise around the information you are sharing. You don’t share one picture, but you share 1,000 pictures. Only one is the relevant one that you actually want to share. The rest is noise to create some kind of barrier between you and the people who might censor you or track what you are saying online.
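The obfuscation tactic described here, hiding one meaningful item in a batch of noise so an observer cannot tell which element mattered, can be illustrated with a toy sketch. The filenames, batch size, and function name below are hypothetical illustrations, not anything from the interview:

```python
import random

def obfuscate(real_item, decoys, seed=None):
    """Mix one real item into a batch of decoys and shuffle,
    so its position carries no signal for a tracker."""
    rng = random.Random(seed)
    batch = list(decoys) + [real_item]
    rng.shuffle(batch)  # observer sees 1,000 pictures, not 1
    return batch

# Sharing one real photo hidden among 999 decoys:
batch = obfuscate("real.jpg", [f"noise_{i}.jpg" for i in range(999)])
assert len(batch) == 1000 and batch.count("real.jpg") == 1
```

The point of the sketch is only that the cost of the tactic falls on whoever must sift the noise, not on the person sharing.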

Finally, we have data evasion, which means coming up with ways of not leaving traces of our activities and choices online, like footprints. Think about people who do not want to be the target of personalized advertising and the way they can use pre-paid cards or computers in public spaces to buy things, so that their customer profile is not completely personalized and they do not get bombarded with ads for the same kind of product or the same kind of offers, which are, in some cases, extremely dangerous in the long term. They can result in redlining, a bad credit score, or denial of access to services like health care and housing.

Yes, everybody is willing to be targeted by one ad, once or twice, who knows. But nobody wants to become the constant target of advertising attention, to say nothing of the kinds of interactions or cross-referencing that might take place between the companies that sell you products and insurance companies or banks, and the way they create reputation rankings for people. These can follow you for the rest of your life. The decision you make today to leave your Facebook page accessible to anybody can eventually turn into very invasive commercial profiling that is also insurance profiling and credit-score profiling.

Eventually, these things can determine individuals’ lives for a number of years. In 30 years you will be haunted by the decision you made to share pictures of all the pizzas you ate. Because if you take a picture of the pizza you eat today, and then the next day, and then the next day, and then in 30 years you develop a cardiovascular disease, an insurance company might say to you, “Well, okay, you did develop this disease, but I won’t pay you, because you’ve been eating these pizzas for 30 years, so you took this health risk.” This is the kind of situation that many people face today, and many more will face in the future if they do not develop effective strategies to protect their privacy, according to the context and the platform they are on.

What are the revolutionary possibilities of developing strategies to protect yourself?

There are many possibilities that might be associated with social justice, social change, and political innovation. Some of them have resulted in revolutionary changes in several countries. Think about what happened from 2010 to 2013, mainly in the Middle East and North Africa, and what failed to happen in some places in Europe, such as Spain and Greece (movements loosely associated with the indignados). During the Arab Spring, protesters quickly had to develop strategies that took into account both the surveillance the governments were implementing and enforcing on the population and the need for a free exchange of information to coordinate and, in some cases, to develop political directions.

And of course, to avoid repression. In these countries, the main activity was basically hacking and cryptography, and then the development of alternative communication platforms. In some cases, I’m thinking about Egypt for instance, the situation became so critical that Mubarak, the former dictator, decided to switch off the Internet altogether. That was the moment when the dynamic of political contention became truly revolutionary, and the government was toppled. The day after, Mubarak was forced to flee Egypt.

How did they do it? Well, first of all, this was not simply a Facebook revolution. That is something that needs to be stressed. Facebook actually had a very limited role in it as a platform. Social media platforms like Facebook were more focused on raising international public awareness about what was going on in Egypt, but political activists in the country hardly used them to coordinate protests. Only 31 percent of the people living in Egypt had access to the Internet at the time, and only a portion of those were actually communicating on Facebook. Alternative movements and groups of hackers around the world were helping those activists communicate over alternative lines. They put in place dial-up lines, or alternatives to the Internet itself. In some cases you actually have to create your own infrastructure. In other cases you have to develop your own codes of communication. Sometimes the language you speak, or the coded language you use while you communicate online, can actually protect your privacy, because you are creating your own space for free speech by restricting your audience through the very language you use.

I’m interested in what you said about the Egyptian Spring becoming a revolution when the government shut down the Internet. Is this a pattern that you see unfolding elsewhere?

I see that there is indeed a link between the restriction of communication and how explosive a situation can become. Part of my research is about riots. Riots are not always revolutionary; sometimes they are extremely reactionary in nature, and sometimes they are associated with social change. What happened in England in 2011, for instance, was reminiscent of riots associated with movements for social change and social justice, both because of the places where those English riots took place, in disenfranchised and impoverished neighborhoods, and because they did not emerge in a vacuum: they were associated with other movements and protests that had taken place in the months before. When civil violence exploded in the UK during the summer of 2011, the government recommended censorship of all social media so that the rioters could be stopped from coordinating and restrained from looting. My colleague Paola Tubaro and I evaluated these censorship policies and came up with a clear pattern indicating that if you restrict the use of social media, the situation becomes explosive: you have mass violence. This is basically because if you restrict the vision individuals have of the situation around them, if they don’t know what is happening in another neighborhood, town, or region, they lose awareness and become more ready to take risks and to use violence in their political protest. This can be observed in some cases. I don’t know and cannot say whether it is a general pattern, but there are instances and examples indicating that restriction of the Internet and censorship can make a situation extremely volatile.

What is the most important aspect of your research for the average social media user to understand?

From a statistical point of view, the average social media user does not exist. From the point of view of social science, we now face a situation completely different from the one demographers, sociologists, or anthropologists faced 20 or 30 years ago when they conducted large censuses. When it comes to the Internet and digital platforms in general, the average does not exist, because everything points to personalization and to the micro-segmentation of small parts of the population. It is so multidimensional that the average user simply does not exist. This was a big methodological caveat that I had to introduce.

When you think about the users of mainstream digital platforms like Facebook or Airbnb, they are always subject to some kind of injunction to be fair, to be honest, to be transparent in their interactions. This is a moral discourse that accompanies the platforms we now use, whether they are social media sharing platforms or on-demand platforms. The design of such services is predicated on users complying with frank and open participation, which means being transparent and honest. Of course, beyond this moral discourse there is an ideological discourse that serves the business model these social media and digital platforms want to implement. Privacy factors into this situation: it is something that digital platforms are interested in removing from the equation. They do not want people to be attentive to their privacy or to care about their personal information, so that perfect and frictionless sharing can take place, and frictionless sharing means data extraction. It means that personal data are taken and exploited from a financial point of view but also from an informational point of view.

This kind of moral and ideological discourse actually pushes every user, to a different degree, to overshare. And again, what I am saying is not a normative discourse. I am not saying that they should not do it, but that the conditions today are extremely risky for the mainstream user. Even in the terms of service of any digital platform there is no clear indication of the final use of those data. Who is going to buy them? Who is going to use them? Who is going to cross-reference them with other data? Something which is probably completely anodyne and mundane if I share it with my friends becomes extremely dangerous if I share it with my banker, with my physician, whatever.

Again, the fact is that these platforms operate by adopting this kind of moral discourse: “You have to be open, you have to be transparent.” They are basically trying to implement a system of frictionless data extraction. Of course, it is profitable for them. This is the part of social media that the mainstream user usually does not take into account. This is also due to the kind of dishonesty and lies that social media CEOs constantly convey: “Okay, your data will not be looked at.” Think about what happened to Snapchat a couple of years ago, when the Federal Trade Commission (FTC) eventually noticed that what the company was promising was simply not true. The commercial promise of the platform was that your pictures would be erased after a few seconds and that only the people you chose as viewers would see them. On the contrary, what the Federal Trade Commission found was that Snapchat’s contents and all the metadata were exploited to an extent unimaginable to its users.

How are social media platforms able to establish control over their users through the illusion of privacy settings?

The point is that today’s platforms, because they are so encroached upon by advertising, are extremely data-costly, meaning that they ask you for a lot of data just to function normally, nothing special. Think about Facebook’s real-name policy. Why do I have to use my civil identity, the name that is actually written on my ID card? Basically because governments and companies want to trace you across platforms. They want to know whether the person chatting on Facebook is the same person whose credit card is used for some kind of dating service, or the same person with a certain social security number. They want to cross-reference all this information. This is why they always come up with data-extraction ruses. Everything on platforms such as Facebook can be construed as a glorified commercial questionnaire. Everything is always about what you like, what you would buy, and eventually what your friends like and what they would buy. Do they share your interest in obscure music or in certain special movies, and so on? If you bought this, would they buy that?

It’s also related to political control; I don’t want to stress only the commercial part of it. Some of that data are sold to governments, and I am thinking about your government and mine. They buy this data, and they actually publish the price they pay for it. If they want to buy your emails, they have to pay a certain amount of money. In some cases (and this is where the situation becomes extremely disturbing and dangerous from a political point of view) they do not pay. This is usually when a surveillance scandal breaks, like the one set off by Edward Snowden in 2013, when Google eventually discovered that some US authorities were just pulling its data as if it were an open bar. They were getting into its servers and taking the data without undergoing any kind of control or audit of what they were doing, and so on. Ultimately, leaks like the ones associated with Chelsea Manning, Julian Assange, or Edward Snowden are important because they show the extent of this military-industrial complex of surveillance in which social media, digital platforms, advertising, and government agencies are involved. They are all in bed together. And the bed is called mass surveillance.

How are the issues of privacy that we face now different from those we faced in the past?

I think that today we are facing a change in the paradigm of privacy. What our grandparents used to call privacy was something they inherited from a legal and judicial tradition dating back to the 19th century. If you think about the first definitions of privacy, especially in the US, you have to look back to 1890, when two famous lawyers, Louis Brandeis and Samuel Warren, published the seminal article “The Right to Privacy.” It was based on a model that my colleagues and I call “privacy as penetration”: privacy as something that can be penetrated, invaded by malevolent government agencies, criminals, or anyone who wants to grab the core of sensitive information you keep around you. In this model, the same information was sensitive for everyone: health, sexual orientation, political preferences, religious opinions or beliefs.

What happens today is that we no longer have a clear definition of what sensitive information is. The definition changes from one person to another, from one country to another, from one platform to another. If I am online, I want to share my medical records with my doctor if I am on a health-oriented platform, but I do not want to share them with my friends on mainstream social media, not on Twitter for instance. I want to share my religious beliefs or sexual preferences if I am on a dating site, but I do not want to share them with my professional contacts on LinkedIn.

So the fact of having to constantly negotiate across all of these platforms puts us in a completely different situation. We call this new paradigm of privacy “privacy as negotiation.” It is a negotiation among all of us: between you and me, between you and your friends, and so on. Of course, it is also a negotiation with corporate actors and with government bodies, and we have to take all these people and institutions into account whenever we decide to publish a picture of our cat playing the piano, for example. We always have to take into account the consequences that this type of post or message online might create. Again, this is a change in privacy as a notion, but also a change in our collective attitudes toward privacy.

This also goes to show that we are not facing the end of privacy. On the contrary, it is a transformation of privacy, such a radical transformation that sometimes we do not even recognize certain behaviors as privacy behaviors, that is, privacy-preserving, privacy-protecting behaviors. Sometimes when I publish something under a pseudonym, I am guarding my privacy. These actions now have to be taken into account, even from a legal point of view, and considered privacy-protecting behaviors.

And yet today the law is not up to date. Probably, policymakers have yet to understand that people who are communicating anonymously online or exchanging information are not always cyber criminals, are not always dangerous hate-speech partisans. Sometimes they are just people who don’t want to be the constant targets of government surveillance or commercial surveillance.

So Internet users have some autonomy over the status of their privacy in the digital sphere?

We have some autonomy. What happens today is that in this cultural war around privacy, nobody has won yet, especially not the digital platforms. On the contrary, they are in a very bad situation. From a legal point of view there is definitely a counterattack going on. I am thinking about Europe, because that is where I live. Over the last few years there have been a number of judicial decisions that go in the direction of forcing social media, and some governments, to be more respectful of personal data and more accountable for what these platforms do with data.

Think about the recent European Court of Justice ruling on Safe Harbor. Of course, it was presented in the media as some disastrous decision, but it was actually a way of drawing lawmakers’ attention to what a foreign country (namely the US) does with the data of European citizens. And now investors and owners of social media and digital platforms have to comply with existing privacy legislation if they want to continue doing business in Europe.

Think about the European Right to be Forgotten decision a few years ago, and consider all the class actions that have been launched against big platforms since then. What do they do with the content we put online? Are they able, and should they be able, to sell our pictures and turn them into some kind of testimonials or endorsements for products? Should they be able to sell our data to build big databases and sell those to data brokers, the humongous international organizations that aggregate our data internationally and create personalized profiles?

These data brokers are impressive and completely unregulated, so we are now starting to think that regulation is necessary. This has happened in many other sectors in the past: insurance companies had to be regulated, and energy companies had to be regulated too. Eventually it happens in every industrial sector, and the digital platform industry is no different from that point of view.

[Slides] #ecnEHESS Seminar “How Much Is a Click Worth?” (4 Jan. 2016)

As part of my EHESS seminar Studying Digital Cultures: Theoretical and Empirical Approaches, on January 4, 2016 we had the pleasure of welcoming Geoffrey Delcroix (foresight projects lead, Technology and Innovation Directorate, CNIL), Martin Quinn (Telecom Paristech, VPIP Chair), and Vincent Toubiana (technology expertise department, CNIL) for a session on the monetization of personal data in the cultural industries and through platforms that optimize the purchase and sale of advertising space.


Find the live tweets from the seminar on Twitter: hashtag #ecnEHESS.

Title: “How Much Is a Click Worth? Data, Cultural Industries, and Advertising”.

Speakers: Geoffrey Delcroix (CNIL), Martin Quinn (CVPIP), Vincent Toubiana (CNIL)

Abstract: In the field of cultural content, value creation seems to concentrate around personalization and recommendation. Behind the magic of algorithms, what “value” is really created for the user, and within which business models? What does it mean to read, listen, watch, and play in the age of personalization, algorithms, and big data?
Which business models coexist and hybridize around the role of data (both the content created by users and their “unconscious” traces) in these pioneering sectors? Moreover, and not only in the cultural field, when Internet users consume content online (free or paid), they also create value for publishers (by feeding their algorithms or through targeted advertising). Under certain conditions this value can be observed very precisely, which allows us to better understand the algorithms that govern, for example, ad display. What value does a specific Internet user represent in a specific situation? What individual and collective behaviors do advertisers adopt in order to maximize their profits?

Reports on previous sessions:

Upcoming sessions:

  • February 1, 2016: Yann Moulier-Boutang, “Cognitive Capitalism and Digital Labor”.
  • March 7, 2016: Jérôme Denis (Télécom ParisTech) and Karën Fort (Université Paris-Sorbonne), “Little Hands and Micro-Work”.
  • April 4, 2016: Camille Alloing (Université de Poitiers) and Julien Pierre (Université Stendhal Grenoble 3), “Examining Digital Labor Through the Prism of Emotions”.
  • May 2, 2016: Judith Rochfeld (Paris 1 Panthéon-Sorbonne) and Valérie-Laure Benabou (UVSQ), “Value Sharing in the Age of Platforms”.
  • June 6, 2016: Bruno Vétel (Télécom ParisTech) and Mathieu Cocq (ENS), “Worlds of Work in Video Games”.

In Le Monde: a review of “Qu’est-ce que le digital labor?” (10 Dec. 2015)

In the December 10, 2015 issue of the daily Le Monde, David Larousserie offers an amused and amusing review of our book Qu’est-ce que le digital labor? (INA Éditions, 2015).


When the Internet Is No Longer “Nice”

David Larousserie

Who said intellectual jousting had disappeared? Certainly not two of the most renowned French sociologists of digital practices, as they show in this invigorating essay devoted to an emerging question: digital labor. In other words, the free “work” that users of social networking platforms, online sales sites, and search engines perform by recommending, “liking,” running queries, and interacting, and that companies monetize with advertisers and other actors. The expression emerged in the United States around 2009 in the academic field and has since become an active area of research. Value production, performance measurement, a contractual framework (through the unreadable “terms of service”), calls to order to keep producing (through notifications, alerts, and various invitations): all of this is indeed work, as Antonio Casilli, a sociologist at Télécom ParisTech, describes in the first part of the book. From there, along with others, he examines the mechanisms of exploitation, even alienation, at work here as in any laborious activity. The tone then becomes more critical of the market excesses that capture private life and the commons.
In the second part, Dominique Cardon, a sociologist at Orange Labs, begins by sidestepping, taking an original step back. The notion of digital labor, he argues, is more a posture than a deep analysis. It places itself outside its subjects of study, and therefore above Internet users, in order to reveal to them an alienation they are unaware of. He thus mocks this point of view, while detailing the intellectual and sociological reasons that led to this flood of criticism. “The Internet was nice; it isn’t anymore,” as he ironically sums it up. Of course, today’s networks no longer have anything in common with yesterday’s, what with commodification, large-scale spying, and the domination of a few giants. But faced with this dark assessment, he prefers to emphasize the great diversity of uses and the democratization of expression, which are still very much alive.
At a distance, sweet nothings fly between the two specialists in a third part structured as a dialogue: “aristocrat!”, “liberal!”, “paternalist!”, “incoherent!” The two nevertheless agree on one point. Dominique Cardon regrets the grip of an “economistic” vision on these analyses (alienation, exploitation, value, etc.). So does Antonio Casilli, in a sense, rejecting proposals to pay Internet users for what they exchange on platforms, as some have suggested. He would prefer a “remuneration” that “gives back to the commons what was taken from the commons,” for example in the form of a basic income or a tax on companies tied to the data they exploit. Over the course of their exchanges, a particularly rich vision emerges of the transformations under way around the Internet and its users.

Qu’est-ce que le digital labor?
by Dominique Cardon and Antonio Casilli
INA Éditions, 104 pp., 6 euros