
Algorithmic ethnic profiling in Rotterdam, the proliferation of online hate and harassment, and Google blocking ad targeting to “Black Lives Matter”


Racism and Technology Center

April 17 · Issue #4

Curated news, research and activism on the intersection between racism and technology.


Rotterdam's use of algorithms could lead to ethnic profiling
The Rekenkamer Rotterdam (the city’s Court of Audit) looked at how Rotterdam uses predictive algorithms and whether that use could lead to ethical problems. Its report describes how the city lacks a proper overview of the algorithms it uses, and how there is no coordination, so nobody takes responsibility when things go wrong. It also found that while sensitive data (like nationality) were not used by one particular fraud detection algorithm, so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
Unfortunately, Rotterdam’s approach is quite typical: the city uses these algorithms to try to find benefits fraud, has run one of them for more than four years as ‘a pilot’ without evaluating whether the algorithmic approach works any better than the old non-algorithmic one, and, when told that this could lead to biased decision making, tells us not to worry because there is always a human in the loop.
However, the city does say it will take all of the Rekenkamer’s advice on board. This means it should soon have a complete and public registry of all the algorithms in use, and a single team responsible for ensuring that none of them has an unjust impact. Other cities should take heed.
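To make the proxy-variable problem concrete, here is a minimal, hypothetical sketch – synthetic data and made-up variable names, not the city’s actual model – of how a fraud classifier that never sees nationality can still score groups differently once a correlated feature like literacy is part of the calculation:

```python
# Hypothetical illustration of proxy discrimination: the model never
# sees the protected attribute, yet its risk scores differ by group
# because a correlated proxy feature ("low_literacy") stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never given to the model).
minority = rng.random(n) < 0.2

# Proxy feature: more common in the minority group, by assumption.
low_literacy = (rng.random(n) < np.where(minority, 0.6, 0.2)).astype(float)

# True fraud is equally rare in both groups in this sketch...
fraud = rng.random(n) < 0.05

# ...but historical investigations flagged low-literacy cases extra often,
# so the labels the model learns from are already skewed.
flagged = fraud | ((low_literacy == 1) & (rng.random(n) < 0.10))

model = LogisticRegression().fit(low_literacy.reshape(-1, 1), flagged)
scores = model.predict_proba(low_literacy.reshape(-1, 1))[:, 1]

print(f"mean risk score, minority group: {scores[minority].mean():.3f}")
print(f"mean risk score, majority group: {scores[~minority].mean():.3f}")
```

Even though nationality never enters the model, the minority group ends up with systematically higher risk scores – precisely the unfair treatment the Rekenkamer warns about.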
Coloured technology
Google blocks advertisers from targeting Black Lives Matter
In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”, while keywords such as “White lives matter” or “Christian fashion” were not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, and to block even more social justice related keywords such as “I can’t breathe” or “LGBTQ”. Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s public support for the Black Lives Matter movement.
In his response to this piece, Cory Doctorow points out that part of the explanation lies in the potential harm that targeting these terms can do: when advertisers can target social justice terms, some of them will use this to place racist content, amplifying those voices. Since not all ad placements can be checked, Google and YouTube decided to block all such targeting. However, as Doctorow emphasises:
Youtube’s editorial choice has the foreseeable (and, evidently, acceptable to Youtube) outcome of producing an economic boycott of the creators it says it wants to uplift and support.
Google Blocks Advertisers from Targeting Black Lives Matter YouTube Videos
Online hate and harassment continue to proliferate
A recent report by the ADL, an anti-hate organisation in the US, shows that social media platforms have consistently failed to prevent online hate and harassment. Despite the self-regulatory efforts made by social media companies, results from the ADL’s annual survey show that the level of online hate and harassment has barely shifted in the past three years. These online experiences disproportionately harm marginalised groups, with LGBTQI+, Asian-American, Jewish and African-American respondents reporting higher rates of various forms of harassment. Many of these problems are intrinsic to the way the business models of social media platforms are optimised for maximum engagement, further exacerbating existing issues in society.
To be clear, this is not solely a U.S. problem. A study by the Alan Turing Institute highlights that the COVID-19 pandemic, and the health-related misinformation that came with it, has driven an increase in prejudice against East Asians on social media platforms. And following the Atlanta shootings in the U.S., people in the Netherlands took to the streets last month to protest anti-Asian racism.
Online Hate and Harassment: The American Experience 2021
Racist Technology in Action: Amazon's racist facial 'Rekognition'
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code’), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was at identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary. Concretely, their research made visible the stark discrepancies in how good the system was at identifying white men as opposed to black women.
This large discrepancy between the races and sexes can be explained by the limited datasets these systems are trained on, which underrepresent black people and women, together with the fact that the people developing the systems are mainly white men. These discrepancies are not just relevant from an abstract perspective; they have grave real-life consequences. Facial recognition systems developed by big tech companies such as Amazon are increasingly used in extremely consequential settings, such as policing – leading to wrongful arrests – or on refugees. Amazon’s facial recognition systems are already in use by U.S. police forces. However, last year, at the height of the Black Lives Matter movement after the killing of George Floyd, Amazon announced a one-year moratorium on the use of its facial recognition systems for policing. In two months this moratorium will be lifted, and it is as yet unclear what will happen.
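The core of Buolamwini and Raji’s method is disaggregated evaluation: instead of reporting one aggregate accuracy number, you report accuracy separately per demographic subgroup. Here is a minimal sketch of that idea, with made-up records rather than their actual benchmark data:

```python
# Disaggregated evaluation: report accuracy per subgroup rather than
# one overall number. The records below are invented for illustration;
# a real audit uses thousands of labelled faces per subgroup.
from collections import defaultdict

# (predicted_gender, true_gender, subgroup) -- hypothetical audit records.
records = [
    ("male", "male", "lighter-skinned men"),
    ("male", "male", "lighter-skinned men"),
    ("female", "female", "darker-skinned women"),
    ("male", "female", "darker-skinned women"),
    ("male", "female", "darker-skinned women"),
]

correct, total = defaultdict(int), defaultdict(int)
for predicted, actual, group in records:
    total[group] += 1
    correct[group] += predicted == actual

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
```

An aggregate score over all five records (60%) would hide exactly the gap this kind of audit is designed to expose.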
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations, helping them understand how racism manifests in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue