
AI's reliance on exploitative labour, ‘algospeak’, and the racial bias of chest X-rays


Racism and Technology Center

April 16 · Issue #17

Curated news, research and activism on the intersection between racism and technology.


Exploitative labour is central to the infrastructure of AI
In this piece, Julian Posada writes about a family of five in Venezuela, who synchronise their routines so that there will always be two people at the computer working for a crowdsourcing platform to make a living. They earn a few cents per task in a cryptocurrency and are only allowed to cash out once they’ve made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.
The family’s task is to annotate data: transcribe text and audio, tag videos and images, and search for information online. All so that developers at research institutions and companies in North America and Europe can use this data to train machine learning algorithms. These algorithms are then used in the context of content moderation, facial recognition, and self-driving cars.
The invisibility of the workers in this process is at the core of the business model of many tech companies. They pretend that their products are fully automated and free of any human intervention. But through a combination of piece pay and working from home, the labour of these data workers is key to the growth of AI. In the words of Veena Dubal: “these data workers, whose labour […] cannot be automated, make automation and AI possible.”
Collective exploitation is at the heart of data annotation, with platform companies continuously sourcing cheap labour from low-income economies and creating a dependency on these platforms for the workers. This disproportionately affects poor, racialised and working-class people, who are often treated as easily disposable. While these workers are building local and transnational communities to share survival tactics, forms of resistance and solidarity, these are insufficient against the backdrop of the larger forces of neocolonial labour practices and global financial capital. As Posada stresses:
Ultimately, only solutions that recognize the communal nature of work and economic justice can have transformative effects on the lives of workers […] Inequality in platform labor is not an issue of individual workers, but networks of people who resist the paradox of a technology that innovates for some by exploiting others.
Family Units
Inventing language to avoid algorithmic censorship
Platforms like TikTok, Twitch and Instagram use algorithmic filters to automatically block certain posts on the basis of the language they use. The Washington Post shows how this has created ‘algospeak’, a whole new vocabulary. So instead of ‘dead’ users write ‘unalive’, they use ‘SA’ instead of ‘sexual assault’, and write ‘spicy eggplant’ rather than ‘vibrator’.
The algorithmic filters don’t take context into account. Unfortunately, this makes it hard to address racial oppression:
“You have to say ‘saltines’ when you’re literally talking about crackers now,” said Lodane Erisian, a community manager for Twitch creators. […] Black and trans users, and those from other marginalized communities, often use algospeak to discuss the oppression they face, swapping out words for “white” or “racist.” Some are too nervous to utter the word “white” at all and simply hold their palm toward the camera to signify White people.
Users of these platforms are lobbying for more transparency about how the filters work, but everybody is very aware that this problem can’t truly be solved within the current Big Tech business models.
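To illustrate why context-blind filtering backfires in both directions, here is a deliberately simplified, hypothetical sketch of a wordlist filter in Python. The blocked terms and the matching logic are assumptions made for illustration only; the platforms’ real moderation systems are proprietary and far more sophisticated.

```python
# Hypothetical, deliberately simplified wordlist filter.
# Real moderation systems are proprietary and far more complex; this only
# illustrates why matching words without context goes wrong in both directions.

BLOCKED_TERMS = {"dead", "sexual assault", "cracker"}  # assumed example wordlist

def is_blocked(post: str) -> bool:
    """Flag a post if it contains any blocked term, regardless of context."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

# A post discussing grief or racism is flagged just like an abusive one:
print(is_blocked("My grandmother passed away; she is dead."))        # True
print(is_blocked("An essay on the history of the slur 'cracker'."))  # True
# 'Algospeak' substitutions slip straight past the same filter:
print(is_blocked("My grandmother is unalive now."))                  # False
```

The sketch shows both failure modes at once: legitimate discussion of death or racism is blocked, while trivially substituted words pass unnoticed.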
Internet ‘algospeak’ is changing our language in real time, from ‘nip nops’ to ‘le dollar bean’
Technology, Racism and Justice at Roma Day 2022
Our own Jill Toh recently presented at a symposium on the use of technology and how it intersects with racism in the context of housing and policing. She spoke on a panel organised in the context of World Roma Day 2022, titled ‘Technolution: Yearned-for Hopes or Old Injustices?’.
Photo by Jana Kießer
The panel discussed how “everyday experiences of racialized people and current research show that media images and so-called artificial intelligence also reinforce old injustices and structural racisms, algorithms reproduce social inequalities.” Together with Wesley Goatley (artist) and Dimitra Andritsou and Bob Trafford (Forensic Architecture), in a discussion moderated by Laurence Meyer (Digital Freedom Fund), Jill explored the role of technology in exacerbating social inequalities and how social justice organisations can fight this.
ROMADAY 2022: Still Hope In Paradise?
Racist Technology in Action: Chest X-ray classifiers exhibit racial, gender and socio-economic bias
The development and use of AI and machine learning in healthcare are proliferating. A 2020 study showed that the chest X-ray datasets used to train diagnostic models are biased against certain racial, gender and socioeconomic groups.
The authors wanted to find out whether AI classifiers trained on public medical imaging datasets were fair across different patient subgroups. Their study drew upon an aggregation of three large image datasets consisting of more than 700,000 images in total, covering over 129,000 patients labelled with sex, age, race and insurance type.
They found that all four datasets (the three source datasets and their aggregation) contained significant patterns of bias and imbalance. Female patients suffered the highest disparity, despite the fact that the proportion of women was only slightly lower than that of men. White patients, who make up the majority with 67.6% of all X-ray images, were the most favoured subgroup, while Hispanic patients were the least favoured. Additionally, bias existed against patients with Medicaid insurance, a minority of the population with only 8.98% of X-ray images; the classifiers often provided Medicaid patients with incorrect diagnoses.
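To make the idea of ‘favoured’ and ‘least-favoured’ subgroups concrete, the sketch below shows one common way such an audit can be run: comparing a classifier’s true positive rate (how often genuinely ill patients are correctly flagged) across patient subgroups. This is a minimal illustration; the column names and data are invented and do not come from the study itself.

```python
# Illustrative subgroup audit: compare true positive rates across groups.
# Column names and example data are invented; the actual study used large
# public chest X-ray datasets and a more elaborate protocol.
import pandas as pd

def tpr_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """True positive rate (sensitivity) per subgroup, among truly ill patients."""
    ill = df[df["label"] == 1]
    return ill.groupby(group_col)["prediction"].mean()

# Hypothetical predictions for patients labelled with insurance type.
records = pd.DataFrame({
    "label":      [1, 1, 1, 1, 1, 1, 1, 1],
    "prediction": [1, 1, 1, 0, 1, 0, 0, 1],
    "insurance":  ["Private", "Private", "Private", "Private",
                   "Medicaid", "Medicaid", "Medicaid", "Medicaid"],
})

rates = tpr_by_group(records, "insurance")
print(rates)                              # per-group sensitivity
print("gap:", rates.max() - rates.min())  # disparity between most- and least-favoured group
```

In the study itself, such gaps were computed per diagnostic label and subgroup across the full datasets; this toy example only shows the shape of the comparison.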
Despite the study’s limitations, the authors assert that even the implication of bias is enough to warrant a closer look at the datasets and any models trained on them. Dataset challenges may give rise to bias in algorithms: for instance, the groups that are underdiagnosed are often those who experience more negative social determinants of health. Specifically, women, minorities and people of lower socioeconomic status may have less access to healthcare than others. The authors underline that ‘debiasing’ techniques have their limits, as crucial biases are inherent in existing large public datasets. Additionally, classifiers lack sufficient peer review, which can have unforeseen consequences when they are deployed in the real world.
The lack of transparency around, and access to, the code, datasets and techniques used to train AI algorithms for diagnosing diseases continues to perpetuate inequalities. More crucially, as pointed out in this report by Balayn and Gürses, a focus on ‘debiasing’ shifts social and political problems into a technical domain, further removed from addressing actual issues of socioeconomic inequality and discrimination.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them build an understanding of how racism manifests in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.