The Meta-Scale AI Partnership: When Superintelligent Chickens Come Home to Roost

TL;DR: Meta was defrauded of nearly $15 billion in what increasingly looks like an AI-era Ponzi scheme.

Scale AI, the data annotation company led by Alexandr Wang, drew a nearly $15 billion investment from Meta, based largely on promises of superior AI training datasets produced by highly qualified specialists. To compete with the supposedly “PhD-smart” GPT-5, you’d need AI trained by PhDs, right? Scale AI positioned itself as a premium alternative to what’s derogatorily called “sweatshop” data labeling in low-income countries. According to some self-proclaimed experts, this marked the end of low-paid, low-quality data annotation. As a testament to the new era, Wang joined Meta as Chief AI Officer, co-leading the newly established Meta Superintelligence Labs.

But problems surfaced almost immediately. Internal documents leaked in June revealed that Scale AI’s “expert” programs, marketed as staffed by vetted specialists, were in fact filled through mass hiring with little screening. The company kept training AI on the cheap, deliberately mismatching workers with tasks to justify low pay.

By August, Africa Uncensored’s year-long investigation had uncovered a broader pattern. Thousands of highly credentialed workers, recruited as “AI tutors” around the world, were sidelined and left idle for months while lower-paid workers carried on with the actual labeling. The aggressive hiring of highly educated data annotators served as window dressing to attract venture capital and gullible investors—including Meta.

It all started coming together in September. According to TechCrunch, Meta had discovered that Scale AI’s deliverables did not meet the promised standards. The data proved subpar, prompting Zuckerberg’s mothership to shift its AI-training contracts to competitors such as Mercor and Surge. Ruben Mayer, a former Scale AI executive hired to oversee Meta’s data operations, left within two months.

This highlights a troubling pattern: Scale AI built its billion-dollar narrative on promises of elite expertise, yet failed to properly utilize or fairly compensate the workers behind its data operations. According to another exposé, published by AlgorithmWatch, Outlier, one of Scale AI’s data annotation platforms, subjected workers to exploitative conditions and low pay despite its promises of high wages.

The core issue isn’t the quality of data annotation itself, which of course is essential to AI development, but the gulf between Scale AI’s investor-facing narratives and its actual workforce practices. When tech titans promise “superintelligence,” look past the marketing smoke screen. Are they paying PhD wages, or peanuts, for their supposedly AGI-level data annotation? An AI built on deception, exploitation, and Ponzi-style fundraising is not only technologically flawed. It’s morally bankrupt.