Digitalisation is not just bound to technology; it affects our society at large

A critical analysis of AI implies a close investigation of network structures and the multiple layers of computational systems. It is our responsibility as researchers, activists and experts on digital rights to raise awareness by reflecting on possible countermeasures emerging from technological, political and artistic frameworks.


In the current discussion around big data, deep learning, neural networks and algorithms, AI has become a buzzword used to propose new political and commercial agendas in companies, institutions and the public sector.

Public debates should make an effort not only to address the topic of AI in general, but to focus on concrete applications of data science, machine learning and algorithms. It is crucial to foster a debate on how AI impacts our everyday life, reflecting inequalities based on social, racial and gender prejudices. Computer systems are shaped by the implicit values of the humans involved in data collection, programming and usage. Algorithms are not neutral and unbiased: the consequences of historical patterns and individual decisions are embedded in search engine results, social media platforms and software applications, reflecting systematic discrimination.

At the Disruption Network Lab conference “AI TRAPS: Automating Discrimination” (June 14-15, 2019, disruptionlab.org/ai-traps), Tech Policy Advisor Mutale Nkonde, who was part of the team that introduced the Algorithmic Accountability Act to the US House of Representatives, described how the US police’s “stop and frisk” programme mainly targets Black and Latinx people, 90% of whom are innocent. The practice allows police to collect biometric data such as fingerprints, reinforcing the criminalisation of people of colour.

The ACLU tested Amazon’s facial recognition software, which is used by a number of police departments, on photos of members of Congress, comparing them against a public database of mug shots. The test disproportionately misidentified African-American and Latinx members of Congress as the people in the mug shots. According to Os Keyes, Human-Centred Design Engineer at the University of Washington, a just AI should be bias-free, and shaped and controlled by the people affected by it. Automated Gender Recognition is used by companies and the public sector to target advertising and to automate welfare systems, but it is based on outdated norms that divide gender into a male-female binary, excluding trans communities and helping to cement and normalise discrimination.

The problem is not AI per se, but the fact that this technology is developed in a context biased around gender, race and class. We need to build systems around the values we want our present and future societies to have.

This article was first published online on 12th November 2019 via hiig.de and is part of the publication “Critical Voices, Visions and Vectors for Internet Governance”.