
Racist and classist predictive policing, the right to repair as a social justice issue, and being at the mercy of the TikTok algorithm


Racism and Technology Center

May 15 · Issue #5

Curated news, research and activism on the intersection between racism and technology.


Racist and classist predictive policing exists in Europe too
The enduring idea that technology can solve many of society’s existing problems continues to permeate governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.
They highlight how these systems have racist and classist undertones, often obscured by the idea that technology and data are “neutral” and “objective”. In the same vein as Sarah Brayne’s research on the LAPD, the article elaborates that the data underlying law enforcement systems are predicated on historical practices and patterns of policing that are racialised and classed. Amnesty revealed, for example, that the UK’s Gangs Matrix, a secretive database of suspected criminals in London used by the Metropolitan Police, contains a disproportionate number of black people, despite only a small fraction of gang-related crimes being committed by black people.
As the authors argue, the use of technologies that have discriminatory outcomes – often affecting marginalised communities – is not unintentional. Such uses are the norm, not the exception, amongst national governments. The Dutch social welfare benefit scandal and increasing state surveillance in France are testament to that.
On the EU level, the current proposal to regulate AI – which will govern predictive policing, biometric mass surveillance and other applications – continues to largely benefit public authorities and private companies rather than people. The prevalence of techno-solutionism across policymaking, combined with a disregard for fundamental rights, continues to be a worrying trend against the backdrop of the increasing digitisation of our society.
Why EU needs to be wary that AI will increase racial profiling
The right to repair our devices is also a social justice issue
Over the past couple of years, devices like our phones have become much harder to repair, and unauthorized repair often leads to a loss of warranty. This is partially driven by our manufactured need for devices that are slimmer and slicker, but it is mostly an explicit strategy to make us throw away our old devices and buy new ones.
This doesn’t just have a negative impact on the climate. A new US Federal Trade Commission (FTC) report shows that repair restrictions “may fall more heavily on communities of color and low-income communities.” This is because (in the US) “many Black-owned small businesses are in the repair and maintenance industries,” and because “Black and Hispanic Americans are about twice as likely as White Americans to have smartphones, but no broadband access at home” so “this smartphone dependency makes repair restrictions on smartphones more likely to affect these communities adversely.”
It is therefore encouraging to see that the EU is moving towards a comprehensive right to repair, both in the Commission’s Green Deal action plan and in the ambitions of the Parliament. The FTC report shows that it behoves our EU institutions to also take matters of racial justice into account when thinking about right to repair regulation.
FTC Report Finds Manufacturers’ Repair Restrictions Unwarranted
At the mercy of the TikTok algorithm?
In this article for The Markup, Dara Kerr offers an interesting insight into the plight of TikTok’ers who try to earn a living on the platform. TikTok’s algorithm, or how it decides what content gets a lot of exposure, is notoriously vague. With ever-changing policies and metrics, Kerr recounts how difficult it is to build up and retain a following on the platform. This vagueness not only creates difficulty for creators trying to monetize their content, but also leaves more room for TikTok to suppress or spread content at will.
TikTok has often been accused of intervening more directly in users’ content, targeting specific groups. Notably, there are many examples of TikTok actively censoring anti-Chinese content. Further, Black content was suppressed during the Black Lives Matter protests in the summer of 2020, for which TikTok apologised. In another instance, TikTok moderators were told to suppress content by ‘ugly people’ in order to attract new users. Kerr explains that one of the main ways in which the platform can suppress content is ‘shadowbanning’, where users do not see anything different but their content simply has no reach. Despite all these controversies, Kerr shows how people keep chasing the fame the platform promises. As TikTok’er Oyelowo explains: “With algorithms, in theory, there is a potential solution, there is a way to figure it out—everybody is chasing that pot of gold in some way. But it’s a moving target.”
Shadow Bans, Dopamine Hits, and Viral Videos, All in the Life of TikTok Creators
Racist Technology in Action: Speech recognition systems by major tech companies are biased
From Siri to Alexa to Google Now, voice-based virtual assistants have become ubiquitous in our daily lives. So it is unsurprising that yet another AI technology – speech recognition systems – has been reported to be biased against black people.
A scientific study has found that the speech-to-text tools developed by the five major tech companies – Google, Apple, Amazon, Microsoft and IBM – suffer from large racial disparities. In other words, these technologies recognise white voices better than others. For instance, the rate of misidentified words is 19% for white people but rises to 35% for black people. As for audio snippets that these systems cannot transcribe at all, the study shows that only 2% of the snippets from white people were considered unreadable, while this figure rises to 20% for black people. While not all applications run on these five speech recognition systems, it is worrying that this is a pattern across the majority of big tech companies.
Audio clip of a black man speaking (top) in comparison to an audio clip of a white man speaking (bottom).
A reason for these errors can likely be found in the lack of diversity in the data sets, and in the false assumption that different ethnic groups are properly represented in them. However, improving diversity in datasets (or in hiring) may not be a priority for these companies despite what they claim, nor is it the main point. It is crucial to remember that technology cannot rid us of bias. Technology could actually exacerbate, or distract us from, the main problems we need to address.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create understanding of how racism is manifested in technology with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue