ChatGPT and Labor Precarization: Interviewed by Swiss Media

I was interviewed by a Swiss online media outlet about the global labor market and AI. The interview is available in French.

“AI will not replace human work, but it has already made it precarious”

A professor of sociology at the Polytechnic Institute of Paris, Antonio Casilli has studied the uses of digital technologies for over twenty years. His research gradually turned to “digital labor”: the click workers, the invisible laborers of the digital economy described in his 2019 book “En attendant les robots” (Ed. Seuil). Antonio Casilli explains why artificial intelligence does not herald the disappearance of work but rather reinforces its casualization.

Why this is interesting. The DiPLab (Digital Platform Labor) research team coordinated by Antonio Casilli has spent recent years investigating the human work hidden behind the apparent autonomy of artificial intelligence, in particular the poorly paid micro-workers employed by Western AI developers in Latin America and Africa. Drawing on numerous field surveys, this research describes the micro-tasks of annotating, verifying and sometimes imitating AI. Despite their invisibility, these workers play a crucial role in the development of applications like ChatGPT, and their numbers are exploding.

What is the role of human work in AI, and in particular in generative AI like ChatGPT?

Antonio Casilli – There is much debate about the jobs that may supposedly disappear because of AI, but the first question is the persistence of human labor in the development of these programs.

These technologies are largely based on machine learning, which requires huge amounts of data. But this data is not always of good quality. For example, OpenAI (the developer of ChatGPT) used data from Common Crawl, which harvests the web to provide snapshots containing information of very uneven quality, ranging, say, from Wikipedia to Reddit. Hence the need to outsource to another platform, Sama, the recruitment of Kenyan workers who, according to an investigation by Time, filtered the data at the source to prevent the most problematic content from entering the ChatGPT model.

You give the example of Sama, which was already involved in a micro-labor scandal that led to a lawsuit against Facebook in Kenya in 2022. Are AI micro-workers the same as those employed by social networks?

Time’s January 2023 investigation revealed practices by Sama in Kenya that the magazine had already identified a year earlier in an investigation into the miserable working conditions and wages of Facebook moderators. From this point of view, one can say there is a certain continuity between new generative AI like ChatGPT and the micro-work required for the recommendation algorithms that are a form of AI used by social networks. That makes sense, because the two are quite close: instead of predicting the next purchase, ChatGPT predicts the next word. The training principle is therefore not fundamentally different.

That said, Sama is not a central player in this ecosystem but a small platform, currently weakened by the death of its founder. It aspired to do “impact sourcing”, in other words to take an ethical approach to micro-labor. Recent cases show how difficult that is…

So who are the big players in this ecosystem?

OpenAI says it uses larger platforms like Upwork, which offers a wide range of services from freelancing to click farms. There are also Scale AI and Lionbridge. These are the most central and well-known platforms, with micro-workers in the Philippines, South Africa, Turkey and India; the latter has been the center of micro-work for the past 15 years. We are also starting to see the emergence of countries where click work is even cheaper, such as Bangladesh, Madagascar, Nigeria, Egypt and Venezuela. AI producers recruit subcontractors along two geographical axes: a West-East axis, from Europe and North America to South-East Asia, and a North-South axis toward Latin America and Africa. It is a geography that largely reproduces colonial trajectories, with people paid a few cents for their work.

The platforms you mention are Anglo-Saxon. Is this the general case? 

Upwork and Scale are in the US and Lionbridge is in Canada. But there are others, like the Chinese giant Witmart with its 15 million workers. In the German-speaking world, the market is dominated by Clickworker, and in the French-speaking world by Yappers and IsaHit. There are also new players like the Russian platform Toloka. It is a booming market, now fueled by geopolitical competition over the development of artificial intelligence. The advances in AI are undeniable. But behind these successes, what we are seeing is that the number of click workers is also increasing. It has even exploded.

How so?

When we started about ten years ago, digital micro-work platforms recruited a few hundred people, then a few thousand, and, for the biggest, up to a million. In the last two or three years – admittedly also thanks to the pandemic – these platforms have been employing tens of millions of micro-workers.

This demographic shift indicates something important: AIs need many human workers not only in the training phase but also beyond it, to verify the results. Developers of these products need to check that they work correctly: that a voice assistant interprets precisely what it is told, that a GPS route is really optimized, or that a search engine displays the best results. Finally, the third type of work is AI imitation. In many cases the machine malfunctions and human intervention is necessary. Micro-workers then pretend to be AIs in order to make users believe that the system works automatically.

Is this true?

Yes. In our field surveys interviewing micro-workers, we met some in Madagascar who pretended to be an intelligent camera in French supermarkets. They were the ones who, watching the images remotely, detected and reported thefts in the aisles, not the machine. Delivery workers on a German delivery platform also showed us how the company’s spatial recognition system was in fact based on people authorizing access to the routes. That said, while remote micro-workers are used to imitate and train AIs, things are somewhat different for the verification tasks needed once an AI is deployed.

In what way?

Generative AI relies on hypertrophic models with masses of data, which require masses of workers because you need large samples. These models are divided into millions of small training tasks that do not all require linguistic knowledge, such as cropping out or labeling an image, or differentiating a cat from a tiger. For verification, on the other hand, we are starting to see micro-work performed in close proximity.

A voice assistant deployed in Quebec, France, Switzerland, Belgium… will speak French. But since there are slight local differences, there is a strong chance that a local person will be recruited to check that what the assistant says is adapted to each context. Most often it will be an expat from the region concerned. These workers are better paid than the micro-workers employed during the training phases, but they remain precarious, with atypical arrangements such as fixed-term contracts, internships, temporary work…

Isn’t this verification work destined to disappear as these technologies improve, since they continue to learn through their interactions with users?

When ChatGPT first arrived, many of my colleagues had fun asking it to write their biographies. The results showed many “hallucinations”, because these models are based on verisimilitude, not truth. Since the model continues to learn through its interactions with users (thumbs up or down on its responses), its developers assure us it will get better. But there will always be the possibility of new forms of hallucination, and therefore the need for micro-workers acting as fact-checkers.

Alongside this, there are hostile uses of AI, such as the mass creation of fake news or spam, with micro-workers trying out tens of thousands of prompts to select the most effective ones.