
A call to government to stop using discriminatory algorithms, proof of Twitter's bias against dark skin tones, and racial discrimination in online pricing

Racism and Technology Center

September 4 · Issue #9

Curated news, research and activism on the intersection between racism and technology.


Government: Stop using discriminatory algorithms
In her Volkskrant opinion piece, Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Reventlow, director of the Digital Freedom Fund, employs a myriad of examples to show how disregarding the social nature of technological systems can reproduce existing social injustices such as racism and discrimination. She discusses SyRI, the automated fraud detection system that was ruled to be in violation of fundamental rights (and its dangerous successor, ‘Super SyRI’), as well as the racist proctoring software we featured in a previous newsletter.
Reventlow uses these and other examples to illustrate how technology can solidify inequalities and create barriers to access to justice. In contrast to what the Netherlands Institute for Human Rights recently wrote, the opinion piece clearly shows that ‘fixing’ this will involve more than paying attention to potentially biased data sets. Reventlow concludes her strong piece with a plea to stop believing blindly in the ‘progress’ of technology: the state should only employ algorithms if it takes heed of the social and historical inequalities, such as institutional racism, that form the context in which these systems are created and operated.
Opinie: Stop algoritmen van overheid die tot discriminatie en uitsluiting leiden (Opinion: Stop government algorithms that lead to discrimination and exclusion)
Proof of Twitter's bias toward lighter faces
We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms. That is exactly what the researchers did, as you can see in this video from the DEF CON hacking conference:
Twitter Algorithmic Bias Bug Bounty Challenge Results
The winner of the prize was Bogdan Kulynych, who proved Twitter’s bias by creating artificial faces with minor differences and then looking at the ‘saliency’ score that Twitter would give these images. One of his conclusions was that the algorithm thought that light skin tones were more interesting than dark ones:
Saliency increase: 4.2101 → 5.5751 (132.42%)
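For readers curious what such a test looks like in practice, here is a minimal sketch of the kind of comparison behind this result. It assumes a placeholder saliency_score function standing in for Twitter’s image-cropping (saliency) model, and two generated face images that differ only in skin tone; none of these names come from Kulynych’s actual code.

```python
# Minimal sketch, not Kulynych's actual code. `saliency_score` is a
# placeholder for Twitter's image-cropping (saliency) model; the two
# images would be generated faces that differ only in skin tone.

from PIL import Image


def saliency_score(image: Image.Image) -> float:
    """Placeholder: return the saliency the cropping model assigns to the face."""
    raise NotImplementedError("plug in the saliency model here")


def compare(darker_path: str, lighter_path: str) -> None:
    s_darker = saliency_score(Image.open(darker_path))
    s_lighter = saliency_score(Image.open(lighter_path))
    # A ratio above 100% means the model finds the lighter face more
    # 'interesting', i.e. more likely to survive the automatic crop.
    print(f"Saliency increase: {s_darker:.4f} -> {s_lighter:.4f} "
          f"({100 * s_lighter / s_darker:.2f}%)")


# compare("face_darker.png", "face_lighter.png")
```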
Kulynych was appreciative of the award, but he was also critical:
Algorithmic harms are not only ‘bugs’. Crucially, a lot of harmful tech is harmful not because of accidents, unintended mistakes, but rather by design. This comes from maximisation of engagement and, in general, profit externalising the costs to others. As an example, amplifying gentrification, driving down wages, spreading clickbait and misinformation are not necessarily due to ‘biased’ algorithms.
AlgorithmWatch thinks that Twitter’s bug bounty program is "an unprecedented experiment in openness", but laments how much less money Twitter invests in algorithmic bug bounties than in bug bounties related to security. This is because companies are heavily regulated on security, whereas the regulators battling discrimination are underfunded. A bug bounty like Twitter’s makes it clear that governments need to invest more in their algorithmic auditing capability.
PhD Student proves Twitter algorithm ‘bias’ toward lighter, slimmer, younger faces
Uber-racist: Racial discrimination in dynamic pricing algorithms
Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, this time in the context of the dynamic pricing algorithms used by ride-hailing companies such as Uber and Lyft. Neighbourhoods that were poorer and had larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings: it turned out to be detrimental to black hosts, leading to greater social inequality (even if unintentionally).
When machine learning is applied to social data, the algorithms pick up on the statistical regularities of the problematic social biases and historical injustices embedded in those data sets. Not only do these algorithms perpetuate bias, they can further disadvantage certain populations. Importantly, even when these effects are unintended, the resulting racial disparities must still be accounted for. Identifying the extent of the harm and fixing these problems remains a huge challenge, due to the often proprietary nature of these algorithms. It is clear that technology can discriminate unintentionally. Leaving discrimination analysis to companies without a clear track record, however, makes it even more pressing to demand better accountability mechanisms, including access to company data and algorithms.
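As a rough illustration of the kind of association such studies test for (this is not the researchers' actual methodology, and the file and column names below are hypothetical), one could regress a neighbourhood's average fare per mile on census variables such as its non-white population share:

```python
# Rough illustration only; not the study's methodology. The input file and
# column names are hypothetical: one row per neighbourhood with its average
# fare per mile and census variables.

import pandas as pd
import statsmodels.formula.api as smf

trips = pd.read_csv("neighbourhood_fares.csv")

# A significantly positive coefficient on `share_non_white` would mean that,
# controlling for income, fares are higher in neighbourhoods with larger
# non-white populations.
model = smf.ols("fare_per_mile ~ share_non_white + median_income",
                data=trips).fit()
print(model.summary())
```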
Researchers find racial discrimination in 'dynamic pricing' algorithms used by Uber, Lyft, and others
Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
In an article for the Communications of the ACM, Discrimination in Online Ad Delivery (PDF), she showed how online ads suggestive of arrest records appeared more often with searches for black-sounding names than with searches for white-sounding names.
In a very smart experiment, Sweeney first made lists of black-identifying and white-identifying full names. She then searched for these names on Google and on Reuters (which was using Google’s ad delivery network at the time) and stored the ads that were displayed with the search results. These were mostly ads for companies that sell public records about people. By closely examining the ad templates that were shown, she demonstrated that the word ‘arrest’ was used significantly more often in ads for black-identifying names than for white-identifying names.
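At its core this is a comparison of two proportions. A minimal sketch of the kind of significance test involved is below; the counts are made-up placeholders, not Sweeney's data.

```python
# Sketch of the kind of significance test underlying such a comparison.
# The counts are hypothetical placeholders: for each group of names we tally
# how many of the served ads contained the word 'arrest'.

from scipy.stats import chi2_contingency

#                    [ads with 'arrest', ads without 'arrest']
black_identifying = [60, 40]   # hypothetical counts
white_identifying = [25, 75]   # hypothetical counts

chi2, p_value, dof, expected = chi2_contingency(
    [black_identifying, white_identifying]
)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value indicates that ads suggesting an arrest record are not
# shown equally often for the two groups of names.
```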
In her article, Sweeney was also one of the first to suggest using algorithms to correct for these biases. She can therefore be considered one of the founders of the research field now known as ‘algorithmic fairness’.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations, helping them create an understanding of how racism is manifested in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue