The limitations of ‘debiasing’, Big Tech exploiting marginalised people, and photo filters keeping colorism alive

Racism and Technology Center

October 2 · Issue #10

Curated news, research and activism on the intersection between racism and technology.


Why ‘debiasing’ will not solve racist AI
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
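To make concrete what this narrow, metric-based kind of ‘debiasing’ looks like in practice, here is a minimal sketch in Python; the groups, data and 0.2 tolerance are invented for illustration and do not come from any specific vendor’s tooling:

from collections import defaultdict

# Hypothetical records of (demographic_group, model_decision) pairs.
decisions = [
    ("group_a", "approved"), ("group_a", "denied"), ("group_a", "approved"),
    ("group_b", "denied"), ("group_b", "denied"), ("group_b", "approved"),
]

def approval_rates(records):
    # Approval rate per demographic group.
    totals, approved = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        if decision == "approved":
            approved[group] += 1
    return {group: approved[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 2))
if gap > 0.2:  # arbitrary, hypothetical tolerance
    print("Bias metric failed: adjust the model and re-test")

A model can be adjusted until a metric like this passes, which is a much narrower achievement than addressing the discrimination itself.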
European Digital Rights (EDRi) commissioned a report, asking academic researchers whether ‘debiasing’ is indeed a feasible way towards a more equitable use of AI. Turns out it isn’t.
In the report, Agathe Balayn and Seda Gürses point out the limitations of ‘debiasing’. Their main concern is that a focus on ‘debiasing’ shifts political problems (of structural discrimination) into a technical domain, which is dominated by the large commercial technology companies.
To enhance the policy debate about AI in the EU, the authors propose four alternative ways of looking at AI:
  • The machine learning view – some aspects of machine learning are inherently harmful.
  • The production view – the making of AI systems has potentially harmful effects that fall outside of the system itself.
  • The infrastructural view – the computational infrastructure that is needed for AI systems is in the hands of a few, creating power imbalances.
  • The organizational view – AI will automate and centralise workflows, affecting the structure of the public sector and democracy.
If AI is the problem, is debiasing the solution?
Big Tech is propped up by a globally exploited workforce
Behind the promise of automation and the advances in machine learning and AI, often paraded by tech companies like Amazon, Google, Facebook and Tesla, lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.
Machine learning requires large sets of training data that have to be cleaned, annotated, categorised, and labelled. These “microwork” or “clickwork” programs are often outsourced and targeted at populations hit by civil unrest, war, and economic collapse. For instance, Syrian and Palestinian refugees in the Middle East take on microwork such as annotating data, labelling videos, transcribing audio, or showing algorithms how to identify photos for tech companies like Amazon and Google. Promoted and facilitated by organisations such as the World Bank, “impact sourcing” ventures and NGOs, microwork is heralded as providing employment opportunities for populations in the Global South.
Workers earn poor wages and have no rights, security or stability. They are also often in no position to protest their dire conditions. Phil Jones draws parallels to the prison-industrial complex, in particular the racialised aspect of these work programs. The racist, colonialist, and exploitative aspects of technology development or advancement are clear. The hidden human exploitation on a global scale, by tech companies and economically rich countries, should make us seriously question what, and at what cost, we are willing to sacrifice for the sake of technological “innovation”.
Refugees help power machine learning advances at Microsoft, Facebook, and Amazon
Photo filters are keeping colorism alive
Many people use filters on social media to ‘beautify’ their pictures. In this article, Tate Ryan-Mosley discusses how these beauty filters can perpetuate colorism. Colorism has a long and complicated history, but can be summarised as a preference for whiter skin as opposed to darker skin. Ryan-Mosley explains that “though related to racism, it’s distinct in that it can affect people regardless of their race, and can have different effects on people of the same background.” The harmful effects of colorism, ranging from discrimination to mental health issues or the use of toxic skin-lightening products, are found across races and cultures.
The article lists a slew of different ways in which beauty filters available on popular platforms such as Snapchat, Instagram, TikTok, Facebook or the Chinese Douyin perpetuate these harmful standards of beauty. To start, as we wrote earlier, many cameras are still incapable of capturing the real range of skin tones. Further, many of these filters whiten faces when ‘beautifying’ them. This ingrains and perpetuates colorist ideas of beauty. On top of that, there are myriad examples of social media promoting whiter faces over darker ones (see here and here), contributing even further to colorism and to narrowing beauty standards. As a result, forms of colorism have become part of the social media environment:
Recommendations based on user preferences often reflect the biases of the world—in this case, the diversity problems that have long been apparent in media and modeling. Those biases have in turn shaped the world of online influencers, so that many of the most popular images are, by default, of people with lighter skin.
How beauty filters perpetuate colorism against people with darker skin
Racist Technology in Action: White preference in mortgage-approval algorithms
A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparities. In national data from the United States in 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
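As a back-of-the-envelope illustration of what “40%–80% more likely to be denied” means, the figure is the relative difference between the groups’ denial rates; the counts below are invented for the example and are not The Markup’s data:

# Hypothetical application counts, for illustration only (not The Markup's data).
white_applicants, white_denied = 10_000, 800
applicants_of_color, denied_of_color = 10_000, 1_360

white_denial_rate = white_denied / white_applicants            # 0.08
denial_rate_of_color = denied_of_color / applicants_of_color   # 0.136

# "X% more likely to be denied" is the relative increase in the denial rate.
relative_increase = (denial_rate_of_color / white_denial_rate - 1) * 100
print(f"{relative_increase:.0f}% more likely to be denied")    # prints "70% more likely to be denied"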
In previous iterations of this type of research, lenders criticised the data for not including crucial metrics that would make the racial disparities go away. However, many of these specific metrics (such as US credit scores or debt ratio) are kept secret and unavailable for scrutiny.
One explanation for the racial disparity in the data is the use of outdated models that rely on specific metrics that function as proxies for race. As Aracely Panameño, director of Latino affairs for the Center for Responsible Lending, explains in the article, the racism embedded in these automated lending algorithms is directly connected to the data they are trained on: “The quality of the data that you’re putting into the underwriting algorithm is crucial […] If the data that you’re putting in is based on historical discrimination, then you’re basically cementing the discrimination at the other end.” These classic problems are exacerbated by the specific US situation, where there is insufficient regulation and mandated transparency of the algorithms used.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create an understanding of how racism is manifested in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.