
Hiding the racialised impact of Covid-19, the impossibility of anti-racist tech companies at scale, and why a human in the loop won't fix all of AI's harms


Racism and Technology Center

July 10 · Issue #7

Curated news, research and activism on the intersection between racism and technology.

Covid-19 data: making racialised inequality in the Netherlands invisible
The CBS, the Dutch national statistics authority, issued a report in March showing that someone’s socio-economic status is a clear risk factor for dying of Covid-19. In an insightful piece, researchers Linnet Taylor and Tineke Broer criticise this report and show that the way in which the CBS collects and aggregates data on Covid-19 cases and deaths obfuscates the full extent of racialised or ethnic inequality in the impact of the pandemic.
There is abundant and clear evidence from all over the world, including the Netherlands, of marginalised groups being hit hardest by the pandemic. The way in which data is collected is crucial for gaining insight into this inequality. Taylor and Broer write that:
Statistics are always political. They embed political choices, and they have political consequences. In a pandemic that increases inequalities both within and between countries, it is important that each country’s statistical choices make disadvantage visible.
Importantly, one of the main categories the CBS uses is that of people with a ‘non-western migration background’: people who were born outside the Netherlands in a formerly colonised nation, or who have a parent who was. Even though this definition is now officially abandoned, it remains an important category present in the data and drives the CBS report on the disparate impact of Covid-19. Taylor and Broer point out that:
The very general ‘non-western migration background’ does not map onto marginalisation and disadvantage, and instead tends to mask racial and ethnic impacts by looking at place of origin rather than groups’ comparative likelihood of suffering discrimination and disadvantage.
Crucially, Taylor and Broer call upon the government and CBS to “make meaningful statistical categories that are able to capture differences between people, and to act on the results of such research.”
Now you see it, now you don’t: how the Dutch Covid-19 data gap makes ethnic and racialised inequality invisible
Tech companies poured 3.8 billion USD into racial justice, but to what avail?
The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion as their response to the Black Lives Matter protests.
The beautiful web production (with interactive graphs and short audio pieces) makes it clear that not much has really changed. One interviewee is dismayed that tech companies are working on going to Mars, yet seem to throw up their hands when asked to solve a social problem like increasing the diversity of their staff. Another is convinced that the companies spend more on the marketing around their Diversity, Equity and Inclusion programmes than on the actual programmes themselves.
Answers to the question: Do you currently have one or more Black board members?
Surveillance expert Chris Gilliard doesn’t believe you can have an anti-racist tech company at scale. The imperatives for growth for these companies will always be stronger than their sense of justice. Companies say that Black lives matter, but the core functions of their products tell a different story (see here for plenty of examples of that fact).
1 year, $3.8 billion later: How 2020’s race reckoning shook up Big Tech
Human-in-the-loop is not the magic bullet to fix AI harms
In many discussions and policy proposals related to addressing and fixing the harms of AI and algorithmic decision-making, much attention and hope has been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight, rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than it works in practice. Humans are prone to automation bias, struggle to evaluate and make decisions based on an algorithm’s results, and can exhibit racial biases in response to algorithms. Consequently, these effects can produce racist outcomes, as has already been demonstrated in areas such as policing and housing.
The irony lies in the belief held by many in the ‘promises’ of AI and algorithmic decision-making as an opportunity to improve upon human biases and cognitive limits. Yet humans are now presented as the fix to high-stakes decision-making. Without entirely dismissing the idea of including a human-in-the-loop, it is important to stress that we humans evidently suffer from our own biases, and from discriminatory and racist behaviours, too. Additionally, those in power can simply shift or obfuscate their blame onto the frontline human operators of AI systems. We should neither pin our hopes on, nor limit our imaginations to, a superficial human-in-the-loop fix. The tangible and material harms of AI should make us consider whether these algorithmic systems ought to be used at all in certain scenarios, and demand stronger accountability from the human decision-makers creating these harms, whether intentionally or otherwise.
The False Comfort of Human Oversight as an Antidote to A.I. Harm
Racist Technology in Action: Proctoring software disadvantaging students of colour in the Netherlands
This week, in an opinion piece in Het Parool, we wrote about how Dutch universities use proctoring software with facial recognition technology that systematically disadvantages students of colour. In April’s newsletter, Jill wrote about the racial bias of these systems, which has led to Black students being excluded from exams or labelled as frauds because the software did not properly recognise their faces. Despite the clear proof that Proctorio disadvantages students of colour, the University of Amsterdam still used Proctorio extensively in this June’s exam weeks.
As early as December 2020, the Dutch parliament voted to scale down the use of proctoring software. However, the racialised impact of online proctoring has only recently become part of the political debate. Peter Kwint, MP for the Socialist Party, has explicitly asked about the discriminatory impact of proctoring software:
This led to a response from the Education and Culture minister, Van Engelshoven, that “large-scale use of online proctoring should be avoided”:
Several motions have been passed in Parliament asking for clear rules on the exceptional circumstances under which online proctoring can still be allowed. Hopefully, this will finally convince Dutch universities to stop using proctoring altogether.
Opinie: ‘UvA, verhul racisme van proctoring niet met mooie woorden’ (Opinion: ‘UvA, don’t hide the racism of proctoring behind fine words’)
Please check out an English translation of this opinion piece here.
Events, exhibits and other things to do
Summer is a slow time for events. We have nothing on our radar. Do reply to this email with your tips if you have any!
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create understanding of how racism is manifested in technology with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue