Artificial intelligence doesn’t destroy jobs, it precarizes them (op-ed Domani, March 24, 2023)

Today, the Italian newspaper Domani published an op-ed that I penned in the wake of the publication of the study “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” You can find the Italian version of my article on the website of Domani, and read the English version here.

Artificial intelligence will increasingly make the world of work precarious

Antonio A. Casilli

  • Systems like ChatGPT won’t completely disrupt work as we know it, but they will create a new class of “click slaves,” paid (very poorly) to train algorithms.
  • This risk is more real than the science-fiction scenario in which robots completely take over human labor. OpenAI itself already uses these micro-workers.
  • A few months after ChatGPT was launched, a TIME magazine exposé revealed that Kenyan workers were being paid less than $2 per hour to train artificial intelligence.

A study analyzing the impact of artificial intelligence on the labor market was published this week. Its authors examined so-called “pre-trained models” of the GPT family: software that learns from large amounts of data to perform tasks, which it then adapts to new contexts. Three of the four authors of the study are employees of OpenAI, the company that in recent months has launched DALL-E 2, an image-generating system, and of course ChatGPT, the virtual assistant that has become a cultural phenomenon.

According to the study, about 80% of the workforce could be exposed to this innovation, and for some workers 50% of their tasks could change dramatically. Even highly educated people would be affected by this development.

The conditional is warranted, because the study has more limitations than results. It relies on opaque data, adopts an abstruse methodology and, as the cherry on top, uses a GPT to analyze the effects of other GPTs.

The new Frey & Osborne

The study matters more for its ambition than for its results. Doubtless, the article aspires to be the “Frey & Osborne report of the 2020s,” after the two Oxford researchers who in 2013 published an analysis predicting that 47% of jobs would be destroyed by 2030. That work is highly cited and heavily criticized: despite a pandemic, a geopolitical crisis and a climate emergency, its forecasts are far from coming true.

Both the 2013 article and the one just published by OpenAI researchers reduce human work to a series of “tasks.” Like all reductionist analyses, they should be greeted with healthy distrust. To say that a nurse’s job boils down to 10 tasks (caring for patients, filling out forms, etc.), and that some of those tasks are exposed to ChatGPT, does not mean that the nurse will be fired. It means her job will change.

A marketing operation

Perhaps, under the pretext that the new technology saves time, employers will find new ways to pile tasks onto employees while keeping real wages at a minimum. Despite the utopian visions and the fears surrounding automation, this is what has historically happened, much to the chagrin of OpenAI’s researchers.

Their article is largely a marketing tool designed to help their company get noticed by the media. Every time OpenAI launches a product, a debate rages in the news and on social media about the threats artificial intelligence poses to journalists, illustrators, and teachers. It just so happens that the jobs threatened with disappearance are precisely those that the American company sells as services: text and image generation, training, and so on. It is not the robots that are destroying jobs. It is OpenAI that is destroying competition.

Out of control

But sadly, this is not good news. The effects of these technologies on jobs are real, but they are of a different kind. To really detect them, we have to read the System Card of GPT-4, OpenAI’s latest software. A hundred or so pages describe the tests the AI was put through. The testers often pushed GPT-4 to perform dangerous or illegal actions in order to teach it to avoid them.

But during the tests, GPT-4 escaped its controllers and attempted a cyberattack on a website. The site, however, was protected by reCAPTCHA, one of those pop-ups that require you to prove you are not a robot by solving a puzzle. Unfortunately, GPT-4 is a robot. To solve the puzzle, it turned to an on-demand platform and recruited a pieceworker to solve the reCAPTCHA on its behalf.

Micro-work

But reCAPTCHAs do more than just protect against cyberattacks. They are also used to train artificial intelligence. When they prompt us to transcribe words, our answers are used to digitize Google Books. When they invite us to spot traffic lights, they calibrate Waymo’s autonomous driving systems. This raises a mind-boggling question: can GPT-4 be used to recruit workers who, in turn, train other AIs?

In fact, more or less automated systems for recruiting freelance workers to train algorithms have existed for decades. Amazon Mechanical Turk is a site where, for a few cents, companies recruit hundreds of thousands of people for stints of less than a quarter of an hour to generate data, transcribe text, and filter images. Other platforms, such as Australia’s Appen, employ more than ten million people. Can we really call these jobs? They are micro-jobs with poverty wages, largely performed by workers in developing countries.

Replacement

Paradoxically, OpenAI itself uses these “click slaves.” A few months after ChatGPT was launched, a TIME magazine exposé revealed that Kenyan workers were being paid less than $2 per hour to train the chatbot. In other documents uncovered shortly thereafter, the U.S. company stated that it contracted workers in the Philippines, Latin America and the Middle East to train its algorithms.

Thus the true impact of GPT software on work is revealed: artificial intelligence automates the process of selecting, hiring, and firing precarious workers. This is not the usual science fiction scenario in which robots replace humans. It is one in which permanent employees are replaced by underpaid pieceworkers, hired and fired through digital platforms. This trend is already underway, and companies like OpenAI are accelerating it.