|
|
|
April 17 · Issue #4
Curated news, research and activism on the intersection between racism and technology.
|
|
|
The Rekenkamer Rotterdam (the city’s Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms that it is using, how there is no coordination and thus no one takes responsibility when things go wrong, and how sensitive data (like nationality) were not used by one particular fraud detection algorithm, but so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling. Unfortunately, Rotterdam is quite typical in its approach: they use these algorithms in their attempt to find benefits fraud, they have run the algorithm for more than four years as ‘a pilot’ without evaluating whether the new algorithmic approach works better than the old non-algorithmic one, and when told that this could lead to biased decision-making, they tell us we shouldn’t worry because there is always a human in the loop. However, the city does say it will take all of the Rekenkamer’s advice on board. This means it should soon have a complete and public registry of all the algorithms that are in use, and a single team responsible for ensuring that none of them could have an unjust impact. Other cities should take heed.
|
Gekleurde technologie
Algorithms can be used to make decision-making faster, more efficient and more effective, but their use also brings various ethical risks.
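To make the proxy-variable problem more concrete, here is a minimal, purely hypothetical simulation (ours, not the Rotterdam system or its data): a fraud risk score that never sees ethnicity can still flag one group far more often when it leans on a feature, like low literacy, that happens to correlate with ethnicity. All names and numbers below are invented for illustration.

```python
# Toy simulation (not the actual Rotterdam model): a risk score that excludes
# ethnicity but uses a correlated proxy feature still flags one group far more
# often, even though the underlying fraud rate is identical in both groups.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0/1); the scoring rule never sees this column.
group = rng.integers(0, 2, size=n)

# Proxy feature: in this made-up data, "low literacy" is more common in group 1.
low_literacy = rng.random(n) < np.where(group == 1, 0.40, 0.10)

# The true fraud rate is the same (5%) in both groups.
fraud = rng.random(n) < 0.05

# A naive scoring rule that leans heavily on the proxy feature.
risk_score = 0.05 + 0.30 * low_literacy
flagged = risk_score > 0.20  # cases sent to human investigators

for g in (0, 1):
    print(f"group {g}: {flagged[group == g].mean():.0%} flagged, "
          f"{fraud[group == g].mean():.1%} actually fraudulent")
```

Running this prints roughly 10% of one group and 40% of the other being flagged for investigation, despite identical fraud rates: the proxy, not actual behaviour, drives who gets scrutinised.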
|
|
In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”. At the same time, keywords such as “White lives matter” or “Christian fashion” were not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, as well as by blocking even more social justice related keywords such as “I can’t breathe” or “LGBTQ”. Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s support for the Black Lives Matter movement. In his response to the piece, Cory Doctorow points out that part of the explanation lies in the potential harm that targeting these terms can do: when advertisers can target these social justice terms, some of them will use that ability to place racist content, amplifying those voices. As not all ad placements can be checked, Google and YouTube decided to block all such targeting. However, as Doctorow emphasises, YouTube’s editorial choice still has the foreseeable (and, evidently, acceptable to YouTube) outcome of producing an economic boycott of the creators it says it wants to uplift and support.
|
Google Blocks Advertisers from Targeting Black Lives Matter YouTube Videos
“Black power” and “Black Lives Matter” can’t be used to find videos for ads, but “White power” and “White lives matter” were just fine
|
|
A recent report by ADL, an anti-hate organisation in the US, has shown that social media platforms have consistently failed to prevent online hate and harassment. Despite the self-regulatory efforts made by social media companies, results from ADL’s annual survey show that the level of online hate and harassment has barely shifted in the past three years. These online experiences disproportionately harm marginalised groups, with LGBTQI+, Asian-American, Jewish and African-American respondents reporting higher rates of various forms of harassment. Many of these problems are intrinsic to the way the business models of social media platforms are optimised for maximum engagement, further exacerbating existing issues in society.
|
Online Hate and Harassment: The American Experience 2021
Despite the blitz of self-regulation from technology companies, the level of online hate and harassment reported by users barely shifted when compared to reports from a year ago.
|
|
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code’), together with Deborah Raji, meticulously documented how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary. Concretely, their research made stark discrepancies visible in how good the system was at identifying white men as opposed to black women.
|
|
This large discrepancy between the races and sexes can be explained by the limited datasets these systems are trained on, which underrepresent black people and women, together with the fact that the people developing them are mainly white men. These discrepancies are not just relevant from an abstract perspective; they have grave real-life consequences. Facial recognition systems developed by big tech companies such as Amazon are increasingly used in extremely consequential settings, such as policing, leading to wrongful arrests, or on refugees. Amazon’s facial recognition systems are already in use by U.S. police. However, last year, at the height of the Black Lives Matter movement after the killing of George Floyd, Amazon announced a one-year moratorium on the use of its facial recognition systems for policing. In two months this moratorium will be lifted and it is as yet unclear what will happen.
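As a rough illustration of the kind of audit Buolamwini and Raji carried out, the sketch below simply tallies a classifier’s accuracy separately per demographic subgroup of a benchmark. The subgroup labels and accuracy numbers are invented placeholders, not the published Rekognition figures; the point is only the per-group bookkeeping that makes such disparities visible.

```python
# Minimal per-group audit sketch: compute accuracy separately for each
# demographic subgroup. All labels and accuracies are invented for illustration
# and are not the actual published results of the Rekognition audit.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

subgroups = np.array(["lighter-skinned men", "lighter-skinned women",
                      "darker-skinned men", "darker-skinned women"])
faces = rng.choice(subgroups, size=n)

# Pretend per-subgroup probabilities that the system classifies a face correctly.
assumed_accuracy = {"lighter-skinned men": 0.99, "lighter-skinned women": 0.93,
                    "darker-skinned men": 0.88, "darker-skinned women": 0.70}
correct = rng.random(n) < np.array([assumed_accuracy[g] for g in faces])

for g in subgroups:
    mask = faces == g
    print(f"{g:>22}: {correct[mask].mean():.1%} correct ({mask.sum()} faces)")
```

Reporting a single overall accuracy would hide exactly this pattern, which is why auditing per subgroup matters.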
|
|
- Facial recognition, resistance, and Roma & Sinti rights: Reclaim Your Face – In Conversation, Monday, April 19, 14:00 CET, free online event, register here.
- Bias^2 Seminar: Yeshimabeit Milner, Thursday, April 22, 19:30 CET, free online event.
- Coded Bias, including a Q&A with Merel Koning (Amnesty), Saturday, April 24, 20:00 CET, online at Movies that Matter Festival.
- Cryptotakkies, Sunday, April 25, free online podcast at Felix Meritis.
- Inclusion-by-Design: Products, Monday, May 3, 18:30 CET, free online event by Pakhuis de Zwijger.
- Bias^2 Seminar: Ziad Obermeyer on algorithms that reinforce racial biases, and algorithms that fight them, Thursday, May 6, 19:30 CET, free online event.
- Inclusion-by-Design: Systems, Monday, May 17, 18:30 CET, free online event by Pakhuis de Zwijger.
- Data Justice 2021: Civic Participation in the Datafied Society, Thursday and Friday, May 20 and 21, free online conference, register before May 7.
- The Coded Gaze, until 30 June 2021, NXT Museum, Amsterdam, NL.
|
|
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here. Did you read something that would be relevant for this newsletter? Then share it with us.
|
|
The Racism and Technology Center is based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create understanding of how racism is manifested in technology with the goal of dismantling systems of oppression and injustice. Read more about the Center. This is a monthly newsletter.
|
|
|
|
|
In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.
|
|
|
|
|
|