An open letter to tell Google to commit to not weaponize its technology (May 17, 2018)

Following an invitation by Prof. Lilly Irani (UCSD), I was among the first signatories of this ICRAC “Open Letter in Support of Google Employees and Tech Workers”. The letter is a petition in solidarity with the 3100+ Google employees, joined by other technology workers, who have opposed Google’s participation in Project Maven.

Following our joint action, on June 7, 2018 Google released a set of principles to guide its work in AI, in a document titled “Artificial Intelligence at Google: our principles.” Although the company pledges not to develop AI weapons, it does say it will still work with the military.

Open Letter in Support of Google Employees and Tech Workers

Researchers in Support of Google Employees: Google should withdraw from Project Maven and commit to not weaponizing its technology.

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of its users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power. This has shone only one spotlight on the increasingly high stakes of information technology infrastructures, and the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.

We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.
  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.
  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

[1] See statements by Ben Emmerson, UN Special Rapporteur on Counter-Terrorism and Human Rights and by Christof Heyns, UN Special Rapporteur on Extrajudicial, Summary and Arbitrary Executions.

[2] See for example Murphy & Radsan 2009.

[3] See analyses by Reaching Critical Will 2014, and Wilke 2014.

[#ecnEHESS Seminar] Nikos Smyrnaios, “GAFAM: our everyday oligopoly” (March 20, 2017, 5pm)

The seminar is open to external attendees. To register, please fill out the form.

As part of our EHESS seminar Etudier les cultures du numérique (Studying Digital Cultures), we are pleased to welcome Nikos Smyrnaios (Université Toulouse 3), author of the book Les GAFAM contre l’internet : une économie politique du numérique (INA Editions, 2017).

To follow the seminar on Twitter: hashtag #ecnEHESS.

PLEASE NOTE: ROOM CHANGE. The session will take place on Monday, March 20, 2017, from 5pm to 8pm, in room 13, 6th floor, EHESS, 105 bd. Raspail, Paris (6th arrondissement).

Title: GAFAM: logics and strategies of the oligopoly that has taken control of our digital tools

Speaker: Nikos Smyrnaios
Abstract: “A handful of once ‘friendly’ startups have given rise to oligopolistic multinationals that now govern the informational core of our societies, to the point that an acronym, GAFAM, has been coined for them. Google, Apple, Facebook, Amazon, and Microsoft are the emblematic products of a new capitalist order that they themselves help to forge, legitimize, and reinforce. This neoliberal order stands resolutely against the internet’s original project. The lecture will examine the conditions that allowed this oligopoly to emerge, and the strategies it deploys to control our everyday communication tools and the platforms we use to access information and content online (labor exploitation, tax avoidance, horizontal and vertical concentration, infomediation, exploitation of personal data, etc.).”

Upcoming sessions

April 10, 2017
Mary Gray (Microsoft Research)
Behind the API: Work in On-Demand Digital Labor Markets

May 15, 2017
Louis-David Benyayer (Without Model) and Simon Chignard (Etalab)
The new business models of data

June 19, 2017
Juan Carlos De Martin (NEXA Center for Internet & Society)
Looking back at the 2015 ‘Declaration of Internet Rights’