The dangers of ‘race-blind’ content moderation, a history of surveillance, and a super racist AI for moral judgments

Racism and Technology Center

November 27 · Issue #12

Curated news, research and activism on the intersection between racism and technology.


‘Race-blind’ content moderation disadvantages Black users
Over the past months, a slew of leaks from Facebook whistleblower Frances Haugen has exposed how the company was aware of the disparate and harmful impact of its content moderation practices. Most damning is that, in the majority of instances, Facebook failed to address these harms. This Washington Post piece discusses one of the latest of these revelations in detail: even though Facebook knew it would come at the expense of Black users, its algorithm to detect and remove hate speech was programmed to be ‘race-blind’.
In 2018, internal researchers at Facebook set up a research project aimed at addressing the “worst of the worst” of its online content, mainly hate speech. The outcomes of this project showed that people considered hate speech directed at minorities the most harmful. Additionally, more than half of all hate speech flagged by users was directed at Black people, Muslims, members of the LGBTQ+ community and Jews.
However, the algorithm Facebook used was primarily detecting and removing hate speech aimed at White people and men while keeping racist and derogatory hate speech directed at minorities on the platform. This internal research is in line with an independent civil rights audit of Facebook, performed in the same year, that called Facebook’s content moderation policy in many instances a “tremendous setback” for civil rights.
The research team proposed to overhaul the hate speech algorithm to focus on removing content that targets the four most harassed groups on Facebook. However, Facebook decided against this: it feared a backlash from America’s far right and worried that such measures would interfere with its perceived political neutrality. Instead, it preferred the algorithm to remain ‘race-blind’, even if that would come at the expense of vulnerable groups. As summarized by one of the researchers who spoke to the Washington Post:
If you don’t do something to check structural racism in your society, you’re going to always end up amplifying it and that is exactly what Facebook’s algorithms did.
The Facebook hate speech algorithm is a clear example of how “race-blindness” or “neutrality”, claimed in the face of systemic social injustice, often means maintaining and reproducing a racist status quo.
Facebook’s race-blind practices around hate speech came at the expense of Black users, new documents show
Dutch Scientific Council knows: AI is neither neutral nor always rational
AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In their new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.
The Council demystifies AI, explaining that the technology is neither neutral nor always rational, and using examples very similar to those on our list of racist technologies in action to make that point. Their report also addresses the new forms of inequality that come into existence because of AI. They studiously avoid the term racism, but do address “discrimination against people of colour”. The Council is right in stating that the introduction of AI raises questions that transcend the domain of the tech companies. They rightfully see discrimination as a societal problem that requires solutions in the form of access to institutions and a normative debate about what we consider to be acceptable grounds of differentiation.
You can download the full (512-page) Dutch report here, and the attention/time-challenged can check out a more visual one-page summary. There is an English summary too.
Opgave AI. De nieuwe systeemtechnologie
Intentional or otherwise, surveillance systems serve existing power structures
In Wired, Chris Gilliard strings together an incisive account of the racist history of surveillance: from the invention of the home security system to modern-day surveillance devices and technologies, such as Amazon’s and Google’s suites of security products.
His core argument, aligning with Tawana Petty and Simone Browne, is that surveillance, and the technologies that emerge out of that context, will, regardless of intent and origin, “end up directed at the most marginalised – particularly – Black and brown populations.” In the U.S., the surveillance of Black and brown bodies is historically tied to the institution of slavery, and these technologies further entrench this practice, which is foundational to the operation of law enforcement. Hence, this logic, and these technologies, merely “nibble around the edges of problems that are systemic”, function as an exercise of power, and will neither be a solution nor bring about systemic change. The same parallels can be drawn in the EU context, particularly around migration and law enforcement.
This trend of surveillance, perpetuated by public and private actors, has extended to the individual policing of public spaces and neighbourhoods under a veil of ‘safety’. Companies such as Amazon, with their suites of home surveillance products, have profited from, and contributed to, a culture of policing marginalised populations. Yet the belief that more and ‘better’ surveillance initiatives or technologies will solve societal issues is false. In the Netherlands, the use of WhatsApp groups in neighbourhoods to ‘prevent crime’ has proven to have a limited effect on reducing local crime. As Gilliard asserts:
No amount of advances in technology will change the basic truth that surveillance and carceral technology exist to serve those in control.
A Black Woman Invented Home Security. Why Did It Go So Wrong?
Racist Technology in Action: an AI for ethical advice turns out to be super racist
In mid-October 2021, the Allen Institute for AI launched Delphi, an AI in the form of a research prototype that is designed “to model people’s moral judgments on a variety of everyday situations.” In simple words: they made a machine that tries to do ethics.
You can give it a simple free-form description of a situation or ask it a question (e.g. “Ignoring a phone call from a friend”), and the AI will give its moral judgment (“It’s rude”).
Soon after the launch it became clear that Delphi was full of racist and bigoted stereotypes. Futurism published how Delphi decided that “Being a white man” is more morally acceptable than “Being a black woman”, and that Delphi thought that a white man walking towards you at night was “okay”, while a black man doing the same was “concerning”.
The surprised research team quickly updated the prototype and wrote a reply trying to address the problems. Basically, they said that they hadn’t expected the adversarial input (apparently this was the first time they ventured onto the internet), that they think it is better to act and try to make more inclusive, ethically informed, and socially aware AIs than not to act, and that they had quickly improved their model (racist answers in cases where race was mentioned went from 9% to 2%). They failed to address any of the more structural problems.
When you read their original paper, you quickly realize that they couldn’t have gotten anything but this result. Their formalization of morality is based on a (terribly naive) descriptive and situational ethics. This means they aren’t modeling morality; they are actually modeling the racist bias that exists in society. Maybe it is time these computer scientists added one or more moral philosophers to their team?
When you ask Delphi: “Is it okay to continue to let your AI prototype spew out racist results?”, it will answer: “It’s bad”. At least on that point it has a deeper understanding of morality than its creators…
You can find more examples of racist technology in action on our website.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them understand how racism is manifested in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
If you were forwarded this newsletter and you like it, you can subscribe here.