Automating racism, the double standard in content moderation on social media, and a stereotype reinforcing emoji keyboard

Racism and Technology Center

August 7 · Issue #8

Curated news, research and activism on the intersection between racism and technology.


The use of racist technology is not inevitable, but a choice we make
Last month, we wrote a piece in Lilith Mag that builds on some of the examples we have previously highlighted – the Dutch childcare benefits scandal, the use of online proctoring software, and popular dating app Grindr – to underscore two central ideas.
First, technology itself can be racist. With the online proctoring software used in education, dark-skinned students had to shine a light on their faces to be verified for an exam, because the Proctorio software was otherwise unable to detect them. Facial recognition systems have been proven to have high error rates when used to identify people of colour, as we have previously flagged. Yet these systems are still in place. The point we want to drive home is that it is not enough to say that these technologies produce racist outcomes. Rather, some technologies themselves explicitly reproduce and exacerbate existing racist patterns within society and should simply be banned.
Second, regardless of intent and outcome, the creation and use of such racist technologies are always choices actively made by people in positions of power. These individuals therefore have to be held responsible for the tangible harms that have been, and can be, inflicted on individuals and communities. Despite the prevailing notion that technology can solve bias and racism, let us not forget that institutional and structural racism is rife in our societies, with or without the use of technology. We need to demand that our governments, employers, and communities take responsibility, rather than use technology as a veneer over, or a supposed solution to, existing racist practices and behaviours.
Technology can be racist and we should talk about that
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?” She and her team made a brilliant video (racking up close to two million views) to find answers to these questions:
Watch “Are We Automating Racism?” on YouTube
In it, she tests Twitter’s cropping algorithm with two of her colleagues, talks to Ruha Benjamin (author of Race After Technology) about the racial disparity in healthcare priority scores, puts a bunch of racist technology examples on a timeline, and shows how non-diverse the datasets that are used to train image recognition algorithms can be.
Fong ends the video by asking not just how to get rid of bias in algorithms, but whether these algorithms should be used at all:
Do we need a predictive model to be cropping our photos? Do we want facial recognition in our communities? Many would say “No”, whether it is biased or not.
Social media content moderation has double standards
The policies social media platforms use to decide what content they allow are rife with double standards, further marginalising vulnerable groups. Ángel Díaz and Laura Hecht-Felella demonstrate this in their report for the Brennan Center for Justice, which focuses on the content moderation policies of Facebook, YouTube and Twitter and how they are enforced. Taking a close look at how these platforms operate, the report concludes that:
Platform rules often subject marginalized communities to heightened scrutiny while providing them with too little protection from harm.
Looking at these platforms’ rules for hate speech, terrorist content and harassment, the researchers clearly demonstrate how the viewpoints of communities of colour, women, LGBTQ+ communities, and religious minorities are at risk of over-enforcement, while harms targeting them often remain unaddressed.
For example, vague rules targeting terrorist content lead to the disproportionate over-removal of content from Muslims or Arabic speakers, while rules on white supremacist groups are much narrower and lead to far fewer removals. The report proposes several recommendations, such as increased transparency, appeals procedures and oversight, aimed at both the platforms themselves and legislators.
Double Standards in Social Media Content Moderation
Racist Technology in Action: Apple’s emoji keyboard reinforces Western stereotypes
Time and time again, big tech companies have shown their ability and power to (mis)represent and (re)shape our digital world. From speech to images and, most recently, the emojis that we use every day.
This Rest of World article highlights how Apple’s emoji keyboard – an emoji search bar in iMessage that lets users find emojis by typing related search terms – tends to reinforce Western stereotypes and misrepresents other parts of the world, such as Africa and China. When a user searches for ‘Africa’, the results return a hut emoji, in contrast to a search for ‘Europe’, which recommends a more diverse set of emoji options. While huts do exist in Africa, reducing an entire continent to a single image is reductive and reinforces Western stereotypes about certain places and cultures.
Image by Katherine Lee for Rest of World
This raises questions about how existing biases get encoded into new technological systems and products – something that isn’t new to Apple (“Type ‘ceo’ into your iPhone keyboard for a sexist surprise”), or to Instagram for that matter (“A Search For “Dog” On Instagram Surfaces An Emoji For A Chinese Takeout Box”) – as well as about the lack of transparency around Apple’s language processing systems and how their recommendations are generated.
Whether a technical fix – adding more types of emoji – is sufficient to address these representation issues remains unclear. The Unicode Consortium, whose members include (but are not limited to) the big tech companies, is responsible for standardising emojis and acts as the gatekeeper that approves or rejects new emoji proposals. The Rest of World article highlights how Afrocentric designers struggle to have their emojis approved.
These kinds of misrepresentation and reductionist views are not new, and have a long history in the media too. They should remind us that encoding biases into new tech products can reinforce or shape our worldview in subtle ways, and that broader representation requires moving beyond technical fixes.
Events, exhibits and other things to do
Summer is a slow time for events. We have nothing on our radar. Do reply to this email with your tips if you have any!
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations, helping them understand how racism manifests in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.