May 14 · Issue #18
Curated news, research and activism on the intersection between racism and technology.
Investigative journalism by NRC brought to light that the Dutch NCTV (the National Coordinator for Counterterrorism and Security) uses fake social media accounts to track Dutch activists. The agency also targets activists working in the social justice and anti-discrimination space, tracking their work, sentiments and movements through their social media accounts. This is a clear example of how digital communication allows governments to intensify their surveillance and criminalisation of political opinions outside the mainstream. Even though it falls outside its legal mandate, the agency has been secretly tracking and keeping tabs on a broad range of activists in the Netherlands. Since these revelations last year, the Ministry of Justice and Security, responsible for the NCTV, has responded by attempting to expand this legal mandate through a new law. This law, currently up for debate in parliament, would considerably extend the competencies of the NCTV. The government is pushing forward even though both the Autoriteit Persoonsgegevens (Dutch Data Protection Authority) and the Raad van State (Council of State) have strongly advised against doing so. Civil society organisations such as Amnesty International and Bits of Freedom have been campaigning against the law, with Amnesty recently starting a petition directed at parliament. Recently, Merel Koning, Senior Advisor Technology and Human Rights at Amnesty International in the Netherlands, spoke (in Dutch) on Radio 2 about this situation.
Amnesty International on the spying on online activists: 'We must call a halt to the government'
In a new episode of the podcast Op Z'n Kop! with Rick van Velthuysen and Marianne Zwagerman, we hear Merel Koning, Senior Advisor Technology and Human Rights at Amnesty International. Amnesty has started a petition to move politicians to call a halt to the government.
In this op-ed, Sydette Harry unpacks how the tech sector, and tech journalism in particular, has largely failed to meaningfully listen to and account for the experiences of Black women, the group that most often bears the brunt of the harmful and racist effects of technological “innovations”. While the role of tech journalism is supposedly to hold the tech industry accountable through access and insight, it has repeatedly failed to include Black people in its reporting, neither hiring Black writers nor addressing Black readers seriously as an audience. Instead, their experiences and culture are often co-opted, silenced, left unreported, and pushed out of newsrooms.
Image by Sam Whitney from the original WIRED article.
“Harmful behavior toward Black women isn’t enough to inspire change until others are harmed, but the original harms are often lost by journalists tasked with covering tech. […] Defining the systemic abuse becomes a frustrating exercise of describing an empty space that no one believes is there.” Tech journalism fails to acknowledge that its critique of big tech must be situated within a broader, historical context of racism in our societies. The firing of Timnit Gebru from Google, as Harry aptly puts it, is not an aberration but the norm: silencing Black women is not new. Despite these conditions, Black women continue their pursuit of liberation, imagining and reimagining futures and cybercultures.
Listening to Black Women: The Innovation Tech Can't Crack
Tech creators and journalists ignore the insights and experiences of Black women—and fail to see the harm of their “innovations.”
The MIT Technology Review has written a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to directly equate the current situation with the colonial capture of land, extraction of resources, and exploitation of people. Yet the series clearly shows that AI further enriches the wealthy at the tremendous expense of the poor.
Image by Edel Rodriguez from the original MIT Technology Review series.
The four parts travel from privatised surveillance networks in South Africa (terrifying) to an AI data-labeling platform in Venezuela (exploitative), and from ride-hailing drivers in Indonesia (inspiring) to an attempt to revitalise the Māori language with AI (hopeful). Read the four parts here:
Part 1: South Africa's private surveillance machine is fueling a digital apartheid
As firms have dumped their AI technologies into the country, it’s created a blueprint for how to surveil citizens and serves as a warning to the world.
Part 2: How the AI industry profits from catastrophe
As the demand for data labeling exploded, an economic catastrophe turned Venezuela into ground zero for a new model of labor exploitation.
Part 3: The gig workers fighting back against the algorithms
In Jakarta, ride-hailing giant Gojek’s fleet of motorbike taxi drivers are building worker power through worker community.
Part 4: A new vision of artificial intelligence for the people
In a remote rural town in New Zealand, an Indigenous couple is challenging what AI could be and who it should serve.
The Responsible AI Collaborative stewards a database of harms (or near harms) realised in the real world as a result of the use of artificial intelligence: the Artificial Intelligence Incident Database. It uses a taxonomy by the Center for Security and Emerging Technology (CSET) to classify each of the incidents, which allows you to search for incidents where the harm was unevenly distributed along the lines of ‘race’. At the time of writing, the database holds 228 reports of this type of harm (although it does contain quite a few duplicates).
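If you prefer to explore the data programmatically, the sketch below filters a downloaded snapshot of the database in Python. It is a minimal illustration only: the file name incidents.csv and the column names 'Harm Distribution Basis', 'Incident ID' and 'Title' are our assumptions about the export format, not a documented interface of the database.

    # Minimal sketch, assuming a CSV snapshot of the AI Incident Database
    # with hypothetical column names: list incidents classified (per the
    # CSET taxonomy) as harm unevenly distributed along the lines of 'race'.
    import csv

    def race_related_incidents(path):
        """Yield (incident id, title) pairs whose harm distribution
        basis includes 'Race'. Column names are assumptions."""
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if "Race" in row.get("Harm Distribution Basis", ""):
                    yield row.get("Incident ID"), row.get("Title")

    for incident_id, title in race_related_incidents("incidents.csv"):
        print(incident_id, title)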
📨 Please tell us which example you think we should feature in our next edition of the newsletter (a reply to this email with a link to the example will do fine…).
Artificial Intelligence Incident Database
Examples of race-related harm through the use of AI
- Prinsenhof Conferentie 2022, Monday, May 16, 10:00 CET, free event at Pakhuis de Zwijger, Amsterdam.
- Dilara Bilgiç – De gelabelde: Pleidooi voor een genuanceerde wereldbeschouwing (The Labelled: a plea for a nuanced worldview), Monday, May 16, 20:00 CET, paid event at De Rode Hoed, Amsterdam.
- Realising Environmental Justice: The Complexity of Inequality, Monday, May 16, 20:00 CET, free event and livecast at Studio Pakhuis de Zwijger, Amsterdam.
- The time is now! Public Spaces Conference, May 17-18, free to register, at Pakhuis de Zwijger, Amsterdam.
- Deadline for submitting a proposal for the Colour of Surveillance Europe Conference, May 19.
- The Hmm @ Responsible AI, Wednesday, May 25, 20:00 CET, paid event at the Benno Premselahuis, Amsterdam.
- “Freedom Dreams”: Imagining Inclusive Technology Futures through Co-Design with Black Americans, Wednesday, June 8, 19:00 CET, free online event in the Race and Technology Research Lecture Series by Microsoft.
- The Colour of Surveillance Europe Conference, September 15-16, Amsterdam.
- Zwartheid onder ogen zien / Facing Blackness, until December 23, exhibition at The Black Archives, Amsterdam.
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here. Did you read something that would be relevant for this newsletter? Then share it with us.
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them build understanding of how racism is manifested in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center. This is a monthly newsletter.
In order to unsubscribe, click here.
If you were forwarded this newsletter and you like it, you can subscribe here.