Amnesty's warning against another benefits scandal, regulating big tech to make sure nobody is excluded, and putting harmed communities front and centre

Racism and Technology Center

October 30 · Issue #11

Curated news, research and activism on the intersection between racism and technology.


Amnesty's grim warning against another ‘Toeslagenaffaire’
In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines - Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It draws on the findings of the Dutch data protection authority and several other government reports.
Amnesty’s main conclusion is that the ‘risk classification model’ used by the tax agency to score child benefits applicants on their likelihood of fraud relied on racial profiling, as nationality was a risk factor in the model. Amnesty additionally emphasizes the intersectional nature of this harm:
This treatment of parents and caregivers with non-Dutch nationalities by the Dutch tax authorities resulted in racial profiling. Different layers of inequality were created through discrimination based on people’s race and ethnicity in the use of the risk classification model, combined with the policies and practices that more often affected people with a lower economic status, who frequently belonged to an ethnic minority.
Based on its human rights assessment of the risk models used in the child benefits scandal, Amnesty calls for the creation of a substantive, supervisory and enforcement framework to prevent human rights violations through the government’s use of algorithmic or automated decision-making. Further, it calls for the prohibition of self-learning algorithms such as the ones used in the scandal, and for increased transparency. This last point is especially important in light of the new EU ‘AI Act’, which could potentially include a ban on the use of such technologies, as Nadia Benaissa from Bits of Freedom writes.
Even though this in-depth analysis of the role of technology and algorithms in the scandal is an important contribution, we should not overstate that role. In the end, government failure and abuse such as the child benefits scandal is the product of many different factors, including systemic racism, personal failures of both civil servants and politicians, and a severe lack of accountability.
Toeslagenaffaire is a human rights violation, says Amnesty International
Regulating big tech to make sure nobody is excluded
Our very own Naomi Appelman was interviewed for Atlas, a Dutch television show about science and current affairs. She talked about her research into what laws and regulations democracies should develop to ensure that large technology companies don’t unnecessarily exclude people.
In this short fragment, Naomi explains how we don’t have a complete picture of what is actually happening online. We can’t look into the black boxes that companies like Facebook, Twitter and YouTube essentially are. Yet from the outside, we do see many things going awry. As an example, she mentions the way that Twitter’s cropping algorithm for images favours white faces and women.
You can watch the full Atlas episode here:
Atlas gemist? (Missed Atlas?)
'Digital Rights for All': harmed communities should be front and centre
Earlier this month, Digital Freedom Fund kicked off a series of online workshops in its ‘Digital Rights for All’ programme. In this post, Laurence Meyer details the reasons for the initiative: its fundamental aim is to address why the individuals and communities most affected by the harms of technology are not centred in the advocacy, policy, and strategic litigation work on digital rights in Europe, and how to tackle challenges around funding, sustainable collaboration and language barriers.
The piece underscores the performative, co-optive and objectifying use of marginalised and racialised individuals and communities in the discussions (and solutions) around harmful technologies. This is certainly not only a private tech sector issue; it can also be witnessed within advocacy, policymaking, academia and non-profit organisations, for example in the many #BrusselsSoWhite conversations, and specifically in EU tech policy.
Rather than having their experiences, expertise and knowledge seen as legitimate, and being treated as equals, these people are often pushed to the periphery or treated as objects to be extracted from. Harmed communities are largely excluded not only from technology design and development, but also by well-intentioned actors who claim to help them. Meyer argues that these communities – the ones most impacted by the harms of technologies – should be centred to set the agenda and determine how to address and meaningfully engage with the issues caused by technologies. James Baldwin reminds us that those who are harmed or neglected by systems, especially members of racially or economically subordinated groups, can bring us knowledge that is missing: knowledge that is based on wisdom and experience.
Nothing About Us, Without Us: Introducing Digital Rights for All
Racist Technology in Action: Facebook labels black men as ‘primates’
In the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring black men in altercations with police and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates,” despite the video having no connection to primates or monkeys.
The incident only came to light after a friend alerted a former Facebook employee, who then posted it on a product feedback forum. While Facebook has apologised, the flurry of many recent, yet unsurprising, revelations about the company renders its apology empty. The company’s many intentional (and unintentional) incidents involving race make it clear that dealing with racial (and other intersectionally related) problems is in no way a priority for the company.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations, helping them create an understanding of how racism is manifested in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.