
[#ecnEHESS Seminar] Mary L. Gray, “Amazon MTurk: Behind the Scenes of Artificial Intelligence” (April 10, 2017, 5 pm)

The seminar is open to auditors. To register, please fill out the form.

For the April 10, 2017 session of the EHESS seminar Étudier les cultures du numérique (Studying Digital Cultures), we are honored to welcome Mary L. Gray, researcher at Microsoft Research and fellow of the Berkman Center for Internet and Society at Harvard University. Mary Gray has been one of the pioneers of research on Amazon Mechanical Turk and on the links between micro-work and artificial intelligence.

To follow the seminar on Twitter, use the hashtag #ecnEHESS.

PLEASE NOTE: As our usual venue is closed for the university holidays, this session will take place on Monday, April 10, 2017, from 5 pm to 8 pm, in the Opale lecture hall, 6th floor, Télécom ParisTech, 46 rue Barrault, 13th arrondissement, Paris.

Title: What is Going On Behind the API? Artificial Intelligence, Digital Labor and the Paradox of Automation’s “Last Mile.”

Speaker: Mary L. Gray

Abstract: On-demand digital labor has become the core “operating system” for a range of on-demand services. It is also vital to the advancement of artificial intelligence (AI) systems built to supplement or replace humans in industries ranging from tax preparation, like LegalZoom, to digital personal assistants, like Alexa. This presentation shares research that starts from the position that on-demand “crowdwork”—intelligent systems that blend AI and humans-in-the-loop to deliver paid services through an application programming interface (API)—will dominate the future of work by both buttressing the operations of future enterprises and advancing automation. For two years, Mary L. Gray and computer scientist Siddharth Suri have combined ethnographic fieldwork and computational analysis to understand the demographics, motivations, resources, skills, and strategies workers draw on to optimize their participation in this nascent but growing form of employment. Crowdwork systems are not simply technologies. They are sites of labor with complicated social dynamics that ultimately hold value and require recognition to be sustainable forms of work.

The presentation and discussion will be held in English.

The seminar is organized in collaboration with ENDL (European Network on Digital Labour).

The business performativity of Mark Zuckerberg’s manifesto

Whenever I hear a businessman talk about building a “healthy society”, my sociologist sense tingles… And although I haven’t weighed in on Zuckerberg’s recent tirade myself, I feel reassured by the fast and compelling reactions I’ve read. Countering Zuckerberg’s brand of simplistic technodeterminism is crucial. For instance, you might want to read Aral Balkan’s piece, or appreciate the relentless logic of Annalee Newitz, who exposes the contradictions and dangers in the manifesto.

Of course, we’ve been here before. Facebook’s founder customarily posts messages, rants, and edicts. And unfortunately, criticizing them is not enough, because there is a performativity to Zuckerberg’s essays. Although they are constantly spun to the media as heartfelt cries from Mark-Zuckerberg-the-person, they actually serve as program frameworks for the company run by Mark-Zuckerberg-the-CEO. The fact that they stem from “trainwrecks” (a term used by both Annalee Newitz and danah boyd, the latter in a seminal paper penned almost a decade ago) doesn’t diminish the power of these pronouncements.

Capitalism feeds on crises. And Facebook (being the ultimate capitalist scheme) feeds on “trainwrecks”: it uses them as devices to establish its dominance. So the 2008/9 “privacy trainwreck” jumpstarted its extensive market for personal data. The 2013 “connectivity crisis” spawned Free Basics. And what will the 2016 “fake news disaster” be exploited for? Smart money says: “turning Facebook’s colossal user base into a training ground for AI”.

Admittedly, this doesn’t come as a surprise. The ambition to “solve AI” by extracting free or micro-paid digital labor from users is evident. Facebook’s AI Research (FAIR) division is devoted to “advancing the field of machine intelligence and to give people better ways to communicate”, relying on quality datasets produced by… people communicating on Facebook.

What is new is how the “fake news trainwreck” has ended up supporting this ambition by turning Facebook’s human users into a “social infrastructure” for AI (cf. Zuckerberg). More importantly, the manifesto provides a rationale for the company’s strategy. And it throws in “terror” for good measure, to render that strategy unavoidable:

“A healthy society needs these communities to support our personal, emotional and spiritual needs. In a world where this physical social infrastructure has been declining, we have a real opportunity to help strengthen these communities and the social fabric of our society. (…) The guiding principles are that the Community Standards should reflect the cultural norms of our community, that each person should see as little objectionable content as possible, and each person should be able to share what they want while being told they cannot share something as little as possible. The approach is to combine creating a large-scale democratic process to determine standards with AI to help enforce them. (…) Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide. (…) The path forward is to recognize that a global community needs social infrastructure to keep us safe from threats around the world, and that our community is uniquely positioned to prevent disasters, help during crises, and rebuild afterwards. Keeping the global community safe is an important part of our mission — and an important part of how we’ll measure our progress going forward.”