
The racial bias of predictive policing, Dutch tax office fined for discriminatory data processing, and Uber using racist facial recognition to fire its workers


Racism and Technology Center

December 25 · Issue #13

Curated news, research and activism on the intersection between racism and technology.


Predictive policing reinforces and accelerates racial bias
The Markup and Gizmodo, in a recent investigative piece, analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.
From the article at the Markup
Predictive policing rests on the idea that by feeding historical crime data into a machine learning model, it becomes possible to predict where future crime will take place, so that police can direct their activities to those areas. This is flawed for several reasons. First, there is disparity in crime reporting. Crime victims do not report crime to police at equal rates – White crime victims are less likely to report violent crime to police than Black or Latino victims – and thus reported crime data is inaccurate. Yet, these data are used to generate predictions. Second, these systems assume that all crimes are the same, and that every crime statistic represents an actual crime; lumping different types of crime (such as drug or sex crimes) into a single prediction distorts the picture. Moreover, these systems are expensive, and neither useful nor effective. The lack of transparency into the use of such systems by law enforcement agencies remains a significant problem, particularly as large sums of public money are spent on experimental technologies that are not externally validated and continue to perpetuate racist logics. This certainly extends beyond the U.S., as highlighted by our own Naomi Appelman in a recent podcast for the Dutch Big Brother Awards.
What it proves, however, is that these systems – neither new nor innovative – are used as a justification for the carceral state, and for the police to keep doing what they are doing, bolstered by the “objectivity” of technologies, in what Cory Doctorow terms “confirmation bias as a service”. While many technology companies now distance themselves from the term “predictive policing” (including PredPol, which has rebranded), many continue to sell technologies to the police that perpetuate and exacerbate racist logics, simply marketed differently.
Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them
Dutch Data Protection Authority (AP) fines the tax agency for discriminatory data processing
The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), has fined the Dutch Tax Agency 2.75 million euros for discriminatory data processing as part of the child benefits scandal.
In its decision, the AP explains that the Tax Agency is fined for three counts of unlawful data processing. First, the Tax Agency registered and used data on people’s dual nationality when it was not allowed to do so in the first place, as this was not necessary for its task. Second, (dual) nationality was used as “an indicator in a system that automatically designated certain applications as risky.” As such, people with a dual nationality or without Dutch citizenship were automatically considered more suspicious by the Tax Agency and investigated more thoroughly. Third, the Tax Agency used people’s nationality to guide its investigations into organised crime. The AP ruled that the second and third instances of unlawful data processing were also discriminatory and, as such, infringed on people’s fundamental rights: “unlawful processing by means of an algorithm led to a violation of the right to equality and non-discrimination.”
The AP’s Chair Aleid Wolfsen emphasized the severity of the situation by stressing that people do not have a choice in whether or not to allow the government to process their data; they are forced to do so. Further, he stated that:
Digital applications have become indispensable, and they enable us to swiftly process and conveniently combine huge volumes of information. But when it goes wrong, it really goes wrong. And other fundamental rights can be affected too. The implications for individuals can be enormous, as the Childcare Benefits affair has made painfully clear.
This decision was based on an earlier investigative report the AP published this summer, on which Amnesty also based its own report on the role of technology in the child benefits scandal (we wrote about Amnesty’s report previously).
Tax Administration fined for discriminatory and unlawful data processing
Two new technology initiatives focused on (racial) justice
We are happy to see that more and more attention is being paid to how technology intersects with problems around (racial) justice. Recently two new initiatives have launched that we would like to highlight:
Timnit Gebru (who was fired by Google a year ago for her critical stance on its technology and its policies around diversity) has just launched the Distributed AI Research Institute (DAIR). In the Guardian she argues that we should have “an independent source of government funding to nourish independent AI research institutes that can be alternatives to the hugely concentrated power of a few large tech companies and the elite universities closely intertwined with them.”
The DAIR Institute
Nani Jansen Reventlow (who recently and rightfully received the Felipe Rodriguez Award for her work on strategic litigation in the digital rights field) has finished her tenure as the Director of the Digital Freedom Fund. Going forward, she will focus on her newly launched NGO, Systemic Justice, where she will “partner with organisations working on racial, social and economic justice to bring about change through strategic litigation.” Make sure to go to the NGO’s site and sign up for the newsletter.
Systemic Justice
Racist Technology in Action: Uber's racially discriminatory facial recognition system firing workers
This example of racist technology in action combines racist facial recognition systems with exploitative working conditions and algorithmic management, showing how technology can exacerbate both economic precarity and racial discrimination.
Image from the crowdfunding campaign
The UK App Drivers & Couriers Union (ADCU) has launched legal action over the automatic dismissal of drivers based on “facial recognition technology known to be error prone with people of colour.” The case focuses on Uber’s requirement that drivers identify themselves via the app by taking a selfie. Facial recognition software then tries to match this selfie to a reference photo. If the software is unable to make a match, the driver can be dismissed and also automatically lose their London taxi permit. In the court case, the ADCU will argue that the facial recognition technology Uber uses, which is developed by Microsoft, is inherently racist as it is significantly worse at recognising people of colour. As explained by Yaseen Aslam, President of ADCU:
Last year Uber made a big claim that it was an anti-racist company and challenged all who tolerate racism to delete the app. But rather than root out racism Uber has bedded it into its systems and workers face discrimination daily as a result.
The ADCU has an impressive track record in fighting for workers’ rights. Previously, the ADCU, together with the Workers Info Exchange and several drivers, brought a series of successful court cases against Uber in the Netherlands focused on the digital rights of workers. These cases came right after the union won a landmark case that re-classified drivers as workers, as described by our own Jill Toh in an article for Social Europe.
ADCU initiates legal action against Uber’s workplace use of racially discriminatory facial recognition systems
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create an understanding of how racism is manifested in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.