
How are “fake news” manufactured? (France Inter interview, June 8, 2018)

I was Sonia Devillers's guest on L'Instant M on France Inter to discuss the bill on false information currently under debate at the Assemblée Nationale.

The vote on the “law against the manipulation of information” got bogged down at the Assemblée last night. This morning, sociologist and internet specialist Antonio Casilli walks us through the behind-the-scenes fabrication and diffusion of fake news. No adoption at first reading: the majority underestimated the assaults of La France Insoumise, the Front National and Les Républicains, who are crying out that freedom of expression is being killed. “This law is poison to them,” writes L'Obs this morning. Yet in its editorial, Le Monde judges it “deliberately ineffective so that it will not be dangerous.” After Inter's morning show, L'Instant M digs deeper into the subject. No discussion of trust or truth, but a focus on those shadow armies paid to spread false news. We tell the story of how this system is entirely artificial.

The L'Instant M briefs: a sudden, brutal halt yesterday for Buzzfeed France, a completely unexpected decision by its financially troubled American shareholder. The site, a team of fourteen people, had developed a very new and very lively editorial line. At first, funny, anecdotal listicles shared endlessly on social networks. Then, in France as in the United States, a second feed, this one for news, recruiting investigative journalists. Scoops and revelations: the restaurant L'Avenue turning away Arab customers, an investigation into the Front National's candidates in the legislative elections, acts of violence committed by former minister Jean-Michel Baylet against a staff member… Game over.

An open letter telling Google to commit not to weaponize its technology (May 17, 2018)

Following an invitation by Prof. Lilly Irani (UCSD), I was among the first signatories of this ICRAC “Open Letter in Support of Google Employees and Tech Workers”. The letter is a petition in solidarity with the 3100+ Google employees, joined by other technology workers, who have opposed Google’s participation in Project Maven.

Following our joint action, on June 7, 2018 Google released a set of principles to guide its work in AI, in a document titled “Artificial Intelligence at Google: our principles”. Although the company pledges not to develop AI weapons, it does say it will still work with the military.

Open Letter in Support of Google Employees and Tech Workers

Researchers in Support of Google Employees: Google should withdraw from Project Maven and commit to not weaponizing its technology.

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power, and it is only one spotlight on the increasingly high stakes of information technology infrastructures and on the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.
We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.
  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.
  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

__________________________
[1] See statements by Ben Emmerson, UN Special Rapporteur on Counter-Terrorism and Human Rights and by Christof Heyns, UN Special Rapporteur on Extrajudicial, Summary and Arbitrary Executions.

[2] See for example Murphy & Radsan 2009.

[3] See analyses by Reaching Critical Will 2014, and Wilke 2014.

[#ecnEHESS Seminar] Juan Carlos De Martin, “The university in the age of algorithms: escaping the neoliberal nightmare” (June 11, 2018, 5 pm)

The seminar is open to auditors. To register, please fill in the form.

For the last session of this year's seminar Étudier les cultures du numérique, we are pleased to welcome Juan Carlos De Martin, professor at the École polytechnique de Turin and associate researcher at Harvard's Berkman Klein Center. He is co-director of the Nexa Center for Internet & Society and the author of The Digital Public Domain: Foundations for an Open Culture (with Melanie Dulong de Rosnay, OpenBookPublishers, 2012) and Università futura. Tra democrazia e bit (Codice Edizioni, 2017). His talk will be discussed by Francesca Musiani (ISCC-CNRS).

⚠️ The session will take place on Monday, June 11, 2018, from 5 pm to 7:30 pm, Salle 9, EHESS, 105 bd Raspail, 75006 Paris. ⚠️


Title: University and Neoliberalism: What Is To Be Done?

Speaker: Juan Carlos De Martin (École polytechnique de Turin)
Discussant: Francesca Musiani (ISCC-CNRS)

In France, the UK, and Canada, recent social movements have highlighted the ways in which both student life and faculty activity are affected by what is described as the neoliberal university. Precarious jobs, tenure under attack, opaque “algorithmic” managerial logics, ever less funding for curiosity-driven research, increasingly hierarchical governance, students treated as customers, extensive quantification, rankings, “publish or perish”, student debt, and many other increasingly well-studied trends are corrupting the University beyond recognition. Now that we are more and more aware of the situation, the focus of our attention and energies should shift towards praxis, i.e., towards what is to be done. In this lecture, I will argue that we need an idea of the University suitable for our age, a normative model to orient our thoughts and actions; indeed, one of the main weaknesses in dealing with neoliberalism has been the inability to offer credible alternative models. I will then share with the audience a few ideas for possible actions by the academic community.


The presentation and discussion will be held in English.

The GDPR, a first step in the right direction (feature interview, Libération, May 25, 2018)

For the sociologist Antonio Casilli, the GDPR is a first step towards cleaning up the relationship that citizens and companies have built around the data the former provide to the latter.

A sociologist, Antonio Casilli teaches and conducts research at Télécom ParisTech and is an associate researcher at the EHESS. For him, what is at stake in the General Data Protection Regulation (GDPR) is enabling the “data worker” that homo numericus has become to reappropriate a digital social capital that the big platforms had until now confiscated for their own benefit.

What does the European regulation represent in light of the already long-running fight for control over our personal data?

The question of control over our private lives has radically changed in nature in the age of networks. Whereas it used to be an individual right to be “left alone”, this very exclusive vision no longer makes much sense for the billions of permanently connected users eager to share their experiences. Our personal data have, in a sense, become social and collective data, which obviously does not mean that we should turn a blind eye to how they are exploited. Quite the contrary.

How does the GDPR fit into this movement?

This text is the culmination of a process of adaptation to the omnipresence of the big digital platforms in our daily lives. In the regulatory Wild West that has prevailed in recent years, they had considerable room for manoeuvre to use and monetize personal data as they saw fit. With the GDPR, it becomes possible to defend our data collectively. The fact that the regulation opens the door to collective legal action is very revealing of this new approach.

How can the GDPR make life easier for us as users and as data “workers”?

By acknowledging that our data are no longer “at home” but scattered across a plurality of platforms, in the profiles of our friends and relatives, in merchants' databases, and in algorithmic “black boxes”, the GDPR seeks to harmonize the practices of all the actors, private as well as public, who want to access them. Hence the idea of a “one-stop shop” for users, which establishes that the user's country of residence, and not the place where the company accessing the data is established, has jurisdiction over disputes. Anything else would make no sense when data circulate everywhere.

If these data are the product of our own online activity, shouldn't we have a right to monetize them?

That is not the philosophy of the GDPR, which does not conceive of so-called personal data as a privatizable object, but rather as a collective social object whose use we can now control. Data have become a matter for collective bargaining, not in the commercial sense of the term, as some imagine, but rather in the trade-union sense: the underlying idea is a conditional consent in which the two parties set reciprocal obligations. That is very different from a market-driven vision, which would risk instituting what is known as a “repugnant market”, in which we would monetize inalienable aspects of what constitutes our identity.

Doesn't the devil lie in the infamous “terms of service” (ToS) that every service is rushing to update, but that nobody reads?

That is one of the current limits of the GDPR. The “GAFA” [Google, Apple, Facebook and Amazon] remain in an ultra-dominant position and bombard us with terms of service which, for the time being, do not shift the balance of power. There is real vagueness about our presumed consent to these “contracts” that we are ordered to approve.

Can you give some examples?

When Facebook explains that facial recognition of our photos is useful for fighting revenge porn [the online publication of sexually explicit photos of a person without their consent], it fails to mention that in certain contexts the same tool can also serve certain political regimes to identify people. A petition is currently circulating against Project Maven, which Google is conducting in collaboration with the US military so that its artificial intelligence technologies can be used to recognize images filmed by drones. The problem is that the very same technologies are used to improve our everyday tools. But we never signed up for our data to be used to improve the Pentagon's instruments.

Will the GDPR help rebalance the relationship between small and very large Internet players, as the European Commission claims?

It would be illusory to believe that the regulation of our personal data can accomplish what other laws should do. The big digital platforms will apply the GDPR, or pretend to, because it is vital for them to keep their access to the European market, but small players will continue to suffer from their competition. To achieve an economic rebalancing, it would be better to focus on reforming digital taxation, which so far has made little real progress despite all the politicians' promises.

We need a political subject capable of thinking an alternative to digital labor (interview, Green European Journal, vol. 17, 2018)

[Update: this interview has been translated into Portuguese by Priscila Pedrosa]
An interview with Yours Truly and political activist Lorenzo Marsili, published in vol. 17 (“Work on the Horizon: Tracking Employment’s Transformation in Europe”), pp. 80-88 of the Green European Journal. You can download the entire issue here.

Earn Money Online: The Politics of Microwork and Machines

With hype around automation and robotisation at fever pitch, many argue that we will soon see mass labour disappear altogether. Sociologist Antonio Casilli begs to differ. Work is not disappearing, he argues in this interview with Lorenzo Marsili, but is being transformed by the giants of the digital economy. Understanding how the world of work is changing, and in whose interest, is the key political question of the future.

Lorenzo Marsili: You claim that fears of automation are one of the most recurrent human concerns. Do you think the alarm about “robots taking our jobs” should be toned down?

Antonio Casilli: We are afraid of a ‘great substitution’ of humans by machines. This is quite an old concept, one we can trace back to early industrial capitalism. In the 18th and 19th centuries, thinkers like Thomas Mortimer and David Ricardo asked whether the rise of steam power or mechanised mills implied the “superseding of the human race.” This vision was clearly a dystopian prophecy that was never realised in the form originally predicted.

But when jobs were lost, it was because managers and investors decided to use machines – as they still do – as a political tool to put pressure on workers. Such pressures serve to push down wages and, by extension, to expand the profits made by capital. Machines therefore have a precise ideological alignment that typically benefits the part of society which possesses financial means, at the expense of that which works. As a result, the rhetoric around machines as inevitable and neutral job destroyers has been used for two centuries to squeeze the workforce and silence its demands. The discourse that surrounds automation today, with the accompanying fear of robots, is a reproduction of this same rhetoric.

Let’s take a step back. The ‘gig economy’ has become synonymous with underpaid, precarious employment. You choose to focus on the concept of the ‘microtask’. What does this concept refer to?

Microtasks are fragmented and under-remunerated productive processes. Examples include translating one line of a one-page text, watching 10 seconds of an hour-long surveillance video, and tagging the content of five images. Microworkers are usually paid a few cents per task. These tasks are usually posted on microwork platforms which function as labour markets or job search websites. Microworkers can choose the task they want to perform and are allocated a few minutes to complete it. Microtasks are becoming increasingly important in domains as wide-ranging as marketing, computer vision, and logistics, to name just a few. One of the smallest microtasks is the single click, which can be paid as little as one thousandth of a dollar.

The rhetoric around machines as inevitable and neutral job destroyers has been used for two centuries to squeeze the workforce and silence its demands

Are we talking about a significant new phenomenon or is it more of a niche area?

We are faced with a statistical problem when investigating microwork, one shared with the gig economy and indeed every type of informal, atypical, or undeclared work. Their scale and pervasiveness are difficult to gauge with the usual statistical resources such as large-scale surveys, models like the Labour Force Survey, data from the International Labour Organization, or businesses themselves supplying information voluntarily. As far as microwork alone goes, estimates vary wildly. The most conservative, like those of the World Bank, point to just 40 million microworkers. The most exaggerated, meanwhile, describe 300 million in China alone. Personally, I would estimate that there are around 100 million such workers in the world. But the real question is whether these 100 million are the seeds of a much broader tendency. If microwork indicates a way of working that is becoming the norm, how many workers are transforming into microworkers?

And would you say that all work is starting to resemble microwork?

If we look in detail at the evolution of a few particular professions, we can see that they are becoming fragmented and standardised. Take journalists and graphic designers. Instead of producing a campaign, an investigation, or some other project, like 10 or 20 years ago, they find themselves increasingly tasked with producing a small part of a larger project. They are assigned microtasks, to edit a line or to change the colour in a logo, while the rest is distributed to other people. The future of journalism is not threatened by algorithms that write pieces in place of humans, but by the owners of ‘content mills’ that do not demand entire articles but three lines which are used to optimise algorithms. Because the websites in which these texts appear are found by search engines and not by readers, the texts are tailored with the algorithms in mind. Similar kinds of transformations seem to be taking place across a number of sectors.

One interesting aspect of these microjobs is the symbiosis between automated and manual processes. There are jobs that require ‘teaching’ machines and algorithms to make them more efficient for a given task, such as autonomous driving or image recognition. It seems like Star Trek in reverse, where it is no longer the machines that work for the humans but the humans that work for the machines.

In a certain sense, we are seeing the old idea that computers are there for us to command overturned. What’s happening now is that these objects that are a part of our everyday lives – our smartphones, our cars, our personal computers, and many more objects in our homes – are often used to run the automatic processes we call artificial intelligence. By artificial intelligence we mean processes that take decisions in a more or less automatic manner, and which learn, solve problems, and ultimately make decisions, including purchases, in our place. But the problem is that we have this false idea that artificial intelligence is intelligent from its very inception. On the contrary, artificial intelligence needs to be trained, which is why we use terms like ‘machine learning.’ But who teaches artificial intelligence? If we still think the answer is engineers and data scientists, then we are making a big mistake. What artificial intelligence really requires is a huge quantity of examples, and these come from our own personal data. The problem is that this raw information we produce needs to be refined, cleaned, and corrected.

So this is where microwork comes in?

Yes, and who wants to do this degrading, routine work? Many people recruited by microwork platforms come from developing countries where the labour market is so precarious and fragmented that they accept minimal remuneration. In return, they perform tasks that might include, for example, copying down a car license plate to provide data for the algorithm managing motorway speeding tickets, or recognising 10 images to provide data for pattern recognition.

But how does this expansion of microwork relate to the stagnation of labour markets in the more advanced capitalist economies? In the UK, for example, there is almost full employment but jobs are increasingly precarious and wages flat.

There is a longer-term trend here that became marked at the end of the 20th century. It consists in the segmentation of the labour market through a pronounced division between ‘insiders’, those who work in ‘formal’ jobs, and ‘outsiders’, who live on ‘odd jobs.’ The so-called outsiders, who are used to moving from one job to another, are the first candidates on microwork platforms. What’s also happening, however, is that insider jobs are becoming less and less formal. The decline of formal work is the result of a political assault on the rights and numbers of salaried workers with the goal of increasing the profit share relative to the wage share. What we see as a result in Western labour markets is an ongoing movement of people from jobs that were traditionally in the formal sector into informal work. This trend is both a result of the huge wave of layoffs seen in recent years, as well as of the outsourcing of productive processes. Outsourcing sees people leave formal jobs to become informal providers for the same company that previously employed them. These people are sometimes asked to leave companies to create their own small businesses and become subcontractors of their former employer.

So labour is not so much destroyed as transformed. Can this development be explained by today’s new monopoly capitalism, with a few large monopolies each dominating a specific platform service?

I would say that there is a process of concentration of capitalism but I don’t agree completely with the notion of monopoly capitalism. I tend to follow the school of thought presented by Nikos Smyrnaios, a Greek researcher, who wrote a book about oligopolistic capitalism, specifically regarding online and digital platforms. The point of his analysis is that there is no such thing as a monopolistic approach to the digital economy. What actually happens is that, for structural and political reasons, these platforms tend to become big oligopolistic economic agents and tend to create what economists would describe as ‘oligopsonies’, or markets dominated by a few buyers, in this case buyers of labour. Thus a handful of big platforms buys labour from a myriad of providers, as happens on microtask services like Amazon Mechanical Turk. These platforms cannot become actual monopolies because they tend to compete amongst themselves.

Citizens are facing relentless efforts deployed by digital capitalists to fragment, standardise, and ‘taskify’ their activities

One way of describing it today is by using quick acronyms like the GAFAM (Google, Apple, Facebook, Amazon, and Microsoft). There are four or five big actors, big platforms, which despite being known for a specific product – whether it is the Google search engine or the Amazon catalogue – don’t really have a ‘typical’ product either. Instead, they are ready to regularly shift to new products and new models. Look at Google’s parent company, Alphabet: it trades in everything from military robot-dogs to think-tanks to fighting corruption. The only thing that is constant for these platforms across products and services is that they rely heavily on data and automated processes, that which we now call artificial intelligence. To capture the data they need to nourish the artificial intelligence they create and sell, they need people to create and refine this data. And so we are back to our role as digital producers of data.

So you would agree with the late Stephen Hawking: the problem is not the robots, but capitalism or, put differently, whoever controls the algorithmic means of production.

This has always been the main problem. The point today is that the algorithmic means of production have become an excuse for capitalists to take certain decisions that would otherwise cause popular uproar. If I were a CEO of a big platform and I declared that my intention was to “destroy the labour market”, I would of course provoke a serious social backlash. But if I said, “I’m not destroying anything, this is just progress, and you cannot stop it”, nobody would react. Nobody wants to be identified with obscurantism or backwardness, especially on the Western Left, whose entire identity is rooted in historical materialism and social progress. So the cultural discourse of “robots who are definitely going to take our jobs” is designed to relieve industrial and political decision-makers from their responsibilities, and to defuse any criticism, reaction, or resistance.

So we need to push against the portrayal of these transformations as natural or magical events, as opposed to political choices. As you know, in the 1970s there was an early re-reading of Marx’s Fragment on Machines, led by Toni Negri and others, which developed the idea of a ‘cognitariat’ as a new political class that could rise up from new forms of immaterial labour. Where do you think that a political force to contest top-down automation might come from?

My own personal history is rooted in a specific intellectual milieu: Italian post-workerism. Nevertheless, some of its hypotheses need to be critically reappraised. I can think of three in particular. The first one is the Marxist notion of a general intellect. With today’s platforms, we are not facing such a phenomenon. Our use of contemporary digital platforms is extremely fragmented and there is no such thing as progress of the collective intelligence of the entire working class or society. Citizens are facing relentless efforts deployed by digital capitalists to fragment, standardise, and ‘taskify’ their activities and their very existences.

The second point is that the bulk of ‘Italian theory’ is based on the notion of immaterial labour. But if we look at digital platforms, and the way they command labour, we see that there is no such thing as a dematerialisation of tasks. The work of Uber drivers or Deliveroo riders relies on physical, material tasks. Even their data is produced by a very tangible process, resting on a series of clicks that an actual finger has to perform.

And finally, we need to dispute the idea that such a political entity, a class of proletarians whose work depends on their cognitive capacities, actually exists. Even if it did, can we really characterise this political subjectivity as a cognitariat? If you read Richard Barbrook’s 2006 book The Class of the New, you’ll see there’s a long list of candidates for the role of Left-sponsored ‘emerging political subjectivities’, one for each time we experience technological or economic change. Between the ‘lumpenproletariat’, the ‘cognitariat’, the ‘cybertariat’, the ‘virtual class’, and the ‘vectorialist class’, the list could go on forever. But which one of these political and social entities is best suited to defending rights and advancing the conditions of its members? And more importantly, which is able to overcome itself?

What do you mean by overcome itself?

The world doesn’t need a new class that simply establishes digital labour and the gig economy as the only way to be. We need a political subject that is able to think about an alternative.

What do you think should be the role of the state? It seems that the only two national ecosystems trying to govern artificial intelligence are the US and China: Silicon Valley and the state-driven ‘Great Firewall of China’. Where does this leave Europe?

There is a question of what the role of the nation state is in a situation where you have a dozen big players internationally whose power, influence, and economic weight are so vast that in some cases they surpass those of the states themselves. Yet states and platforms are not competitors; they collude. U.S. multinationals are just as state driven as Chinese ones. U.S. government funds and big agency contracts have been keeping Silicon Valley afloat for decades. Moreover, there’s a clear revolving door effect: Silicon Valley CEOs going to work for Washington think tanks or for the Pentagon, like Google’s Eric Schmidt for example.

To be extremely blunt, states should heavily regulate these multinationals, but at the same time they should adopt a policy of extreme laissez faire when it comes to individuals, citizens, and civil society at large. Yet so far exactly the opposite has happened: generally speaking, states are repressing any kind of development or experimentation coming from civil society. They stigmatise independent projects by accusing them of being possible receptacles for terrorists, sexual deviants, and hostiles. Meanwhile, the big platforms are left free to do whatever they want. This situation has to change if we are to have actual political and economic progress.

Disconnection according to the GAFAM: a strategy of distraction (Le Figaro, May 11, 2018)

In Le Figaro, Elisa Braun looks into the new features introduced by Google to help Android users disconnect. She interviewed me for the piece; here is the excerpt with my comments:

“Disconnection is not just a fad or a movement of the tech elites. Serious studies keep piling up on the harmful effects of screens on sleep, on the cognitive development of young children, and on overexposure. Everyone can also feel the saturation just by looking at their notification center or at the amount of unwanted mail in their inbox. But the first domain in which excessive connection was strictly regulated is in fact work. Enacted as part of the loi Travail, in the new article L2242-8 of the Code du travail, this right to disconnect came into force on January 1, 2017. It applies to companies with more than 50 employees. To ensure respect for rest periods and holidays, as well as the balance between professional and private life, the companies concerned must put in place «instruments for regulating digital tools», the legislator stresses.

But the question of work has gradually been pushed out of the debates on disconnection. «Disconnection apps have existed for several years, Freedom for example. They used to be filed under the 'productivity' category of the app stores, and have now moved to the 'well-being' category», notes Antonio Casilli, professor of sociology at Télécom ParisTech and at the EHESS. For this specialist of “digital labor”, the question of disconnection actually has little to do with the well-being touted by Google's marketing teams. «These companies render work invisible», Antonio Casilli charges. «Their new features claim to free us from the injunction to hyper-connect that they themselves created. But their answer reuses all the metrics of productivity: screen time, number of clicks… They install yet another layer of attention to make us internalize that, once we leave this disconnection mode, we must answer so many emails in so many allotted seconds».

Could disconnection paradoxically be turning into a decoy to make us work more? «There are good-faith positions on disconnection, such as taking care not to create new overtime for workers who are effectively on call because of these sites' notification pollution», Antonio Casilli concedes. But the intentions of a company like Google are not necessarily of the same kind. By drawing attention to the question of hyper-connection, Google or Facebook also find a convenient topic of collective concern around the regulation of technology. A topic far less troublesome, for example, than the personal data they exploit. And a topic they can master, on which they can impose their norms and their discourse. With these new control features, over which Google has full command, the giant grants itself the right to decide where and how attention is allocated, while leaving users with the impression of regaining their freedom. Even as it makes it a little harder for them to leave.”

Why Facebook users’ “strikes” don’t work (and how we can fix them)

Another day, another call for a Facebook “users’ strike”. This one would allegedly run from May 25 to June 1, 2018. It seems to be a one-man stunt, though (“As a collective, I propose we log out of our Facebook…”). Also, it claims to be “the first ever” strike of this kind.

Yet, since the Cambridge Analytica scandal, new calls for strikes have been popping up every few days. As far as I know, the first one dates back to March 21 (although the actual strike is scheduled for May 18, 2018). The organizer is a seasoned Boston Globe journo, who likely just discovered what digital labor is and is SO excited to tell you:

“I like the idea of a strike, because we users are the company’s real labor force. We crank out the millions of posts and photos and likes and links that keep people coming back for more.”

On April 9, 2018, an obscure Sicilian newspaper called for a strike action (which, in Italian, is rendered as “sciopero degli utenti di Facebook”). It actually turned out to be an article about the “Faceblock”, which did take place on April 11, 2018. It was a 24-hour boycott against Instagram, FB and WhatsApp organized by people who describe themselves as “a couple of friends with roots in Belgium, Denmark, Ireland, Italy, Malta, Mexico, the UK and the US” (a tad confusing, if you ask me).

Of course, May 1st was a perfect time to call for other Facebook strikes. This one, for instance, is organized by a pseudonymous “Odd Bert”, who also mentions a fictional Internet User Union (which seems to be banned by Facebook). This other one looked a bit like some kind of e-commerce/email scam, but produced three sets of grievances.

“On May 1st, 2018, Facebook users are going on strike unless the company agrees to the following terms:

A. Full Transparency for American and British Voters

  1. Facebook shares the exact date it discovered Russian operatives had purchased ads.
  2. Facebook shares the exact dollar amount Russian operatives spent on political ads.
  3. Facebook shares the 3,000+ ads that Russian operatives ran during 2016.
  4. Facebook reveals how many users saw the fake news stories highlighted by BuzzFeed.
  5. Facebook lets an independent organization audit all political ads run during 2016.
  6. Facebook gives investigators all “Custom Lists” used for targeting 2016 political ads.
  7. Facebook stops running paid political ads until January 1st, 2019.
  8. Mark Zuckerberg (CEO) and Sheryl Sandberg (COO) testify before Congress in an open-door (televised) session.
  9. Mark Zuckerberg and Sheryl Sandberg testify before the UK parliament.

B. Full Transparency and Increased Privacy for Facebook Users

  1. Facebook explains to users exactly what personal data is being used for advertising.
  2. Facebook asks users for itemized consent to use photos, messages, etc. for advertising.
  3. Facebook gives users the ability to see a “history” of all the ads they have viewed.
  4. Facebook lets an independent organization investigate all data breaches since 2007.
  5. Facebook agrees to be audited monthly to make sure it is complying with local laws.
  6. Facebook allows users to easily delete their “history” on the platform.

C. Better Safety for Children on Facebook

  1. Facebook increases the minimum age for Facebook users from 13 to 16.
  2. Facebook shuts down the Messenger Kids product.”

Users’ strikes are hardly new. In 2009, the Spanish social media platform Tuenti faced a huelga de los usuarios against its terms of service. In 2015, Reddit users disrupted the platform when they revolted en masse in solidarity with a wrongly terminated employee. On Facebook, users’ collective action is inherent to the life of the platform, whose history is replete with examples of petitions, lawsuits, and class actions. After the introduction of Beacon in 2007, a 50,000-strong petition led to its discontinuation. In 2010, several users’ groups organized and lobbied US senators and the Federal Trade Commission (FTC) to oppose the introduction of the ‘like’ button social plugin on external websites. In 2011, the association Europe versus Facebook filed numerous complaints with the Irish Data Protection Commissioner (DPC), as well as a class action currently before the Court of Justice of the European Union. In 2016, real-life protests and general mobilization against the introduction of Free Basics in India led to its ban by the telecommunications authority TRAI, over net neutrality and privacy concerns.

As my co-authors and I argued in our 2014 book Against the Hypothesis of the ‘End of Privacy’, the adoption of pervasive data collection practices by social platforms has been highly contentious, with frequent and cyclical privacy incidents followed by strong mass reactions. What these reactions have in common is that they are strategic, organized, collective actions that rely on existing communities. This could provide essential clues as to why the 2018 Facebook strikes are so ineffective. They do not seem to be organized by active members of existing communities, and they certainly do not engage with elected officials or institutional bodies. They are launched by journalists, startup bros, and anonymous users trying to get noticed. These people's idea of grassroots action is a naive one: one heroic individual (or a nonexistent “union”) sparks a revolt, and the masses follow.

Importantly, they seem to put excessive faith in strikes as their only available tactic. A recent article published by a group of Stanford and Microsoft computer scientists captures this romanticized vision of “powerful” industrial actions, where users stand hand in hand and nobody crosses the picket line:

“Data laborers could organize a “data labor union” that would collectively bargain with siren servers. While no individual user has much bargaining power, a union that filters platform access to user data could credibly call a powerful strike. Such a union could be an access gateway, making a strike easy to enforce and on a social network, where users would be pressured by friends not to break a strike, this might be particularly effective.”

Nevertheless, as past experience on social platforms has taught us, successful actions adopt a specific repertoire of contention dominated not by strikes (which are usually costly and difficult to coordinate) but by lobbying and litigation. If we expect Facebook users’ grievances to be heard, a comprehensive and wide-ranging strategy is necessary to strengthen their rights. Community, organization, and the selection of effective tools are the three pillars of collective action.

Elsewhere in the media (March 2018 - June 2018)

» (June 8, 2018) Digital labour et travail domestique : quand l’exploitation capitaliste s’étend aux hommes blancs, La Quadrature du Net

» (June 5, 2018) Décoloniser l’enseignement, Nonfiction.fr

» (May 21, 2018) Assistants vocaux: nos conversations sont écoutées par des travailleurs bien réels, BFMTV

» (May 15, 2018) Prendre en charge les troubles du comportement alimentaire, Mondes Sociaux

» (May 1, 2018) Ανθρωποι που αρνούνται να εργαστούν, Iefemerida

» (April 26, 2018) Les réseaux sociaux ont-ils fini par ériger une justice 2.0 ?, Siècle Digital

» (April 23, 2018) Comment Facebook a transformé l’amitié en donnée mesurable, Usbek & Rica

» (April 12, 2018) Comment être méchant ?, Vice

» (April 9, 2018) Dans La Tête De Jeff Bezos, La Plus Grosse Fortune Mondiale, Forbes

» (April 4, 2018) Pour une téléologie du numérique, Internetactu.net

» (April 2, 2018) Une automatisation en trompe-l’œil, Interactons UTC

» (March 31, 2018) Le Troll, un ami qui vous veut du bien (ou presque). Partie 5 : Sociologie du troll, C’est données !

» (March 27, 2018) Alors comme ça, on veut quitter Facebook?, Makery

» (March 22, 2018) Haverá proteção contra o capitalismo de vigilância?, Outras Palavras

» (March 14, 2018) Les fake news auront-elles la peau de la liberté de la presse?, Mediapart

» (March 13, 2018) Spotify va faire bosser ses utilisateurs pour compléter ses métadonnées, Mashable

» (March 13, 2018) I’m a digital worker, killing an arab. Chronique de la guerre algorithmique, Affordance.info

» (March 12, 2018) Proposition de Loi sur les “Fake News” : Nécessité impérieuse ou fausse bonne nouvelle ?, Universdoc

» (March 5, 2018) Propriété personnelle des « data » : le dernier combat de la secte libérale- Pour un Commissariat à la souveraineté numérique, Viv(r)e La Recherche

[#ecnEHESS Seminar] Big data, online dating and couple formation (May 14, 2018, 5 pm)

The seminar is open to auditors. To register, please fill in the form.

For our seminar Étudier les cultures du numérique, we are pleased to welcome Marie Bergström, researcher at the Institut National d'Études Démographiques, member of the editorial board of the journal RESET and of the editorial committee of the journal Sociologie, for a session in which we will ask what big data can teach us about couple formation in the age of online dating.

⚠️ ROOM CHANGE: The session will take place on Monday, May 14, 2018, from 5 pm to 8 pm, Salle 1, EHESS, 105 bd Raspail, 75006 Paris. ⚠️


Title: “Partner choice” under the magnifying glass of online dating

Speaker: Marie Bergström (INED)
Discussant: Fred Pailler (Université de Nantes)

“In less than fifteen years, the use of dating sites and apps has become a common practice in France, as in other Western countries. These services not only change the way people find partners; they also renew our knowledge about dating itself. A growing number of social scientists mobilize the ‘big’ data generated by these platforms to study how partners are matched. As this presentation shows, such methodological innovation often yields novel results. Starting from a concrete example, drawing on data from one of the most widely used platforms in France, we look at contact behaviors between users. The aim is not only to describe these interactions (who exchanges with whom?) and the social and gendered logics that underlie the process. More broadly, the presentation shows how online dating challenges the usual theories of partner ‘choice’.”

[Podcast] Social media: moral panics and the critique of platforms (France Culture, April 27, 2018)

I was Serge Tisseron's guest on France Culture for the Friday, April 27, 2018 episode of the program Matières à penser, devoted to social media, their opportunities and their risks.

“The 2000s saw the development of research on how digital culture carried a new relationship to territory, to the gift (no longer a one-way act of charity, but a reciprocal social obligation), and even to politics, with the possible emergence of new democratic virtues. Fifteen years later, social networks are buried under criticism. Is that criticism well founded, or does it stem from a climate of moral panic?”

>> Podcast Splendeurs et misères des réseaux sociaux (45′)