
Racism and “Smart borders”, corporatespeak and racial injustice, and a year of algorithms behaving badly


Racism and Technology Center

January 24 · Issue #1

Curated news, research and activism on the intersection between racism and technology.

Racism and “Smart Borders”
As many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, a new UN report by Prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement. From the UN working with Palantir to the EU’s Frontex using drones, iris scans and even lie detectors, surveillance technologies are increasingly part of the ‘day-to-day business’ of border and migration enforcement. Prof. Achiume, as the UN Special Rapporteur, strongly calls attention to the racism embedded in these technologies, countering the dominant narrative that they are a more “humane” and “objective” form of border control.
Crucially, Achiume calls for a moratorium on the use of digital surveillance technologies in border enforcement because they are used “to advance the xenophobic and racially discriminatory ideologies”. The report sees this racism as an effect of “widespread perceptions of refugees and migrants as per se threats to national security [or] as a result of the pursuit of bureaucratic and humanitarian efficiency without the necessary human rights safeguards”. Additionally, researcher Petra Molnar emphasizes that the uniquely vulnerable position created in the immigration and border context, combined with a lack of societal attention, allows overwhelmingly black and brown migrants to be used as a testing ground for these surveillance technologies.
Read more about Special Rapporteur Prof. E. Tendayi Achiume’s work here and her previous report on racial discrimination and emerging digital technologies here. See the EDRi report by Petra Molnar on migrants as a testing ground for technology here.
UN warns of impact of smart borders on refugees: ‘Data collection isn't apolitical’
Corporatespeak and racial injustice
In light of the Black Lives Matter protests in the U.S. and protests against police brutality in Europe, technology companies have been quick to release corporate statements, commitments, campaigns and initiatives to tackle discrimination and racial injustice. Amber Hamilton evaluated 63 public-facing documents from major technology companies such as Facebook, Instagram, Twitter, YouTube, Airbnb and TikTok. Her work shows that language and the choice of words matter when it comes to racism, as they shape how issues are framed and the type of solutions that can be proposed. The tech industry’s statements about diversity and inclusion obscure the problems of racism and whiteness, fundamentally failing to address the power asymmetries and systemic inequality within the industry. Hamilton argues that by engaging in ‘racecraft’, focusing on race as a category rather than on racism as the cause of discrimination, these companies absolve themselves of responsibility for racism. Complex issues require moving beyond quick, shiny fixes, and companies “need to understand that fighting racism cannot start and end with statements of solidarity and technical fixes”.
In order to effect real change, Hamilton asserts that technology companies have to hire, retain and promote people from underrepresented groups into executive positions. This, however, continues to be an uphill battle. The lack of representation in the industry also has significant ramifications for other areas of technology development, such as artificial intelligence systems that are utilised in sectors like education and healthcare. The recent controversy between Google and its AI ethics researcher Timnit Gebru has shown that actual change continues to prove challenging, reinforcing the need for resistance and for organising both inside and outside of tech companies.
What’s missing from corporate statements on racial injustice? The real cause of racism.
Google fires AI researcher Timnit Gebru
Google fired AI researcher and ethicist Timnit Gebru after she wrote an email criticising Google’s policies around diversity, while she was also struggling with her leadership to get a critical paper on AI published. This angered thousands of her former colleagues and academics. They pointed at the unequal treatment that Gebru received as a black woman and worried about the integrity of Google’s research.
For Alex Hanna (a research scientist at Google) and Meredith Whittaker (a former organizer at Google) this story is another example of how “corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.” They are concerned with the way that the people who are benefiting from the proliferation of AI are shaping the understanding of these systems and how this inequity follows racial, gender and class lines. They suggest a few ways to mitigate this problem: tech workers need a union, critical research needs to be funded outside of the corporate environment (while being free of corporate influence), and the AI field needs regulation.
Timnit Gebru’s Exit From Google Exposes a Crisis in AI
A year of algorithms behaving badly
The Markup has published an overview of the ways in which algorithms that were given decisional powers in 2020 took a wrong turn. They list some pretty horrid stories in categories like ‘Workplace Surveillance’ and ‘Policing and Persecution’. Under ‘The Racism Problem’ they write about how Google AdWords connected the keywords “Black girls”, “Asian girls” and “Latina girls”, but not “White girls”, to porn, and about how ‘race adjustments’ in the algorithms that doctors use to determine who gets to receive treatment can have a negative impact on people of color (for example, not being eligible for a kidney transplant).
Algorithms Behaving Badly: 2020 Edition
Racist technology in action: Cropping out the non-white
A recent, yet already classic, example of racist technology is Twitter’s photo-cropping machine learning algorithm. The algorithm was shown to consistently favour white faces in the cropped previews of pictures. As the algorithm automatically put the white faces in an image front and centre, black, brown or Asian faces were hidden in the cropped versions of the pictures. The racism accidentally came to light after tweets by student Colin Madland went viral in September 2020. Madland, a white man, posted several pictures of himself and a black colleague, with the black colleague consistently being cropped out of the preview. After his tweets went viral, many people replicated the same racist result; notably, even cartoon characters did not escape the racist treatment. Most popular were pictures of Mitch McConnell and Barack Obama in which Obama’s image was consistently erased.
Both versions of the image get cropped by Twitter to McConnell
Responding to the controversy, Twitter emphasized that it ‘tested for bias’ before implementing the system and vowed to address the apparent ‘bias’ in its algorithm. One possible explanation given was the model’s reliance on contrast in its learning patterns. However, Twitter struggled to explain how the model was trained and how it currently functions, and, as of last December, the algorithm still seemed to display the same racist behaviour.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create understanding of how racism is manifested in technology with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.
In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.
Powered by Revue