
An improved camera app for Black people, the hidden costs of AI, and the EU's need for a new vocabulary to talk about race


Racism and Technology Center

June 12 · Issue #6

Curated news, research and activism on the intersection between racism and technology.


Long overdue: Google has improved its camera app to work better for Black people
The following short video by Vox shows how white skin has always been the norm in photography. Black people didn’t start to look good on film until the 1970s, when furniture makers complained to Kodak that its film didn’t render the difference between dark- and light-grained wood, and chocolate companies were upset that you couldn’t see the difference between dark and light chocolate.
Color film was built for white people. Here's what it did to dark skin.
These biases have resurfaced (or more likely, continued) in digital photography, with the default settings not working particularly well for people with dark skin. Working with a set of professional image-makers, Google has finally taken steps to address this problem in its computational photography algorithms:
Building a More Equitable Camera
In the Guardian, Black photographers welcome Google’s initiative, but also make it clear that a technological fix will not end racial bias in photography.
Skin in the frame: black photographers welcome Google initiative
AI and its hidden costs
In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which examines how AI systems work by mapping their structures of production and material realities. One example is ImageNet, a massive training dataset created by researchers from Stanford that is used to train and benchmark object recognition algorithms. It was made by scraping photos and images from across the web and hiring crowd workers to label them according to an outdated lexical database created in the 1980s.
From: The Economist
ImageNet’s database was revealed to have extremely racist, ableist, sexist, and offensive classification categories. While ImageNet has since removed these categories and tried to make the dataset more representative, broader issues with such large datasets remain.
Firstly, and importantly, as Crawford reiterates, the idea that simply creating more classifications solves problems of bias is too narrow and limiting. Think, for example, of categorising people into only two binary genders, or of labelling people according to their skin colour in ways that invite moral or ethical judgments. Any attempt at classification and categorisation should take heed of this past. Secondly, the huge training datasets used to train machine learning systems are often in the hands of private tech companies, hidden and inaccessible. Additionally, the hidden human cost of cleaning and labelling these datasets often involves the large-scale exploitation of workers around the globe.
It is important to address these issues of power in AI systems, a technology that is too often taken for granted as a beacon of progress. As Crawford puts it:
We need a renewed politics of refusal that challenges the narrative that just because a technology can be built it should be deployed.
Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
Why Europe needs a new vocabulary to talk about race
In this article for Algorithm Watch, Nicolas Kayser-Bril highlights an important issue facing Europe in the fight against racist technologies: we lack the words to talk about racism. He shows why Europeans need a new vocabulary and discourse to understand and discuss racist AI systems. For example, concepts such as ‘Racial Justice’ have no part in the EU’s anti-discrimination agenda, and ‘ethnicity’ is not recognised as a proxy for race in a digital context. This lack of vocabulary greatly harms our ability to challenge and dismantle these systems and, crucially, to address the roots of racism.
By tracing the historical conditions, Kayser-Bril outlines how Europe has lost the ability to talk about racism. But he also charts a way forward. The article highlights several organisations and projects actively aiming to develop this vocabulary and dismantle racist (technological) systems. Examples are Equinox, which recently published a report on racial justice in the EU, and the Digital Freedom Fund and EDRi, which started the Decolonising Digital Rights programme. These initiatives stand on the shoulders of many people and organisations who have tirelessly been trying to forge the vocabulary for us to talk about race. It is time to start listening.
Europeans can’t talk about racist AI systems. They lack the words.
Racist Technology in Action: Predicting future criminals with a bias against Black people
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to estimate the likelihood that a defendant will offend again, and judges are expected to take this prediction into account when they decide on sentencing.
From: ProPublica
ProPublica found that “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend,” and that COMPAS “makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes.” Even though the risk assessment form doesn’t have direct questions about ethnicity or race, it does contain a lot of questions that can serve as a proxy for race. Questions about poverty, for example, or about the prevalence of crime in your neighbourhood.
ProPublica’s research has kick-started a whole academic field that is trying to increase the fairness (and accountability and transparency) of machine learning, the technology that is used for these kinds of systems. From this field, we now know that these systems can never be fair in all the ways that we would intuitively understand fairness. This is another argument for why we should never forget that making systems more ‘fair’ doesn’t necessarily fix the injustices in society.
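To make this tension concrete: fairness researchers (for example Chouldechova, and Kleinberg et al.) have shown that when two groups re-offend at different base rates, a risk score cannot be equally well calibrated for both groups and at the same time have equal false positive and false negative rates. The minimal sketch below uses made-up numbers (not ProPublica’s actual data) purely to illustrate that trade-off:

```python
# Toy illustration (hypothetical numbers, not ProPublica's data) of why a
# risk score cannot satisfy every intuitive fairness criterion at once when
# base rates of re-offending differ between groups.

def rates(tp, fp, fn, tn):
    """Return precision (PPV), false positive rate and false negative rate."""
    ppv = tp / (tp + fp)  # of those labelled high risk, how many re-offended
    fpr = fp / (fp + tn)  # of those who did not re-offend, how many were labelled high risk
    fnr = fn / (fn + tp)  # of those who did re-offend, how many were labelled low risk
    return ppv, fpr, fnr

# Two hypothetical groups with different base rates of re-offending (40% vs 30%).
# The confusion matrices are chosen so the score is equally "well calibrated"
# (same PPV) for both groups.
group_a = dict(tp=60, fp=40, fn=20, tn=80)    # base rate 80/200 = 40%
group_b = dict(tp=30, fp=20, fn=30, tn=120)   # base rate 60/200 = 30%

for name, g in [("group A", group_a), ("group B", group_b)]:
    ppv, fpr, fnr = rates(**g)
    print(f"{name}: PPV={ppv:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

With these illustrative numbers both groups get the same PPV (0.60), yet group A’s false positive rate is more than twice group B’s (0.33 versus 0.14), while the false negative rates are reversed (0.25 versus 0.50): the same kind of asymmetry ProPublica found between Black and white defendants.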
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create understanding of how racism is manifested in technology with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue