Urgent Measures Needed to Protect Data Workers in the Age of AI (op-ed, Libération, 1 Nov. 2023)

The rapid advancement of Artificial Intelligence owes its success to the efforts of at least 150 million individuals worldwide, who assist with tasks as simple as distinguishing a banana from a pot of yogurt. Despite this significant contribution, no tangible regulation of the sector exists, as I argue in an op-ed published in the French newspaper Libération together with a collective of legal experts from the NGO “Intérêt à Agir”.

While concerns about the impact of AI on jobs in developed countries are well documented, little attention is paid to the workers who are crucial to the development and maintenance of AI systems. Beyond computer engineers and data scientists, human workers are indispensable at various stages of the AI production process, from training algorithms on raw data to correcting biases to improve performance.

The World Bank estimates that 154 to 435 million people globally are employed by digital platforms, constituting 4.4% to 12.5% of the global workforce. Among them are data workers, who face particularly challenging conditions: exposure to violent content, repression of union activity, long working hours spanning multiple time zones, low or absent remuneration, precarious contracts, and informality.

Existing attempts at regulating the AI sector rightly focus on its impact on end users in developed countries. However, there is no equivalent effort to safeguard the social rights of data workers in developing regions. The Fairwork project at the University of Oxford documents a deterioration in working conditions on micro-work platforms since 2021, particularly regarding wage equity, non-discrimination, and the right to union representation.

Viewed as a new manifestation of globalization, AI mirrors the organizational structure of the global economy. Legal frameworks designed to regulate multinational corporations, such as the UN Guiding Principles on Business and Human Rights adopted in 2011, can therefore be applied to the challenges posed by AI.

Proposed Measures

  1. Clear State Requirements: Governments should set clear expectations for businesses on respecting human rights throughout their value chains. For example, the AI Act, currently under negotiation, could assign broader responsibilities to producers, importers, and professional users of AI solutions concerning the social conditions in which those solutions are developed.
  2. Corporate Accountability: Companies must consider the negative impacts their activities may have on human rights in their value chains. Profiting from AI solutions built on human labor should prompt active policies to identify and mitigate risks in the supply chain.
  3. Access to Remedies: States should ensure access to legal remedies for victims. Existing laws, like the French duty of vigilance, which mandates parent companies to prevent serious risks to fundamental social rights in their value chains, can be powerful tools for workers’ rights protection if actively employed.