
State algorithms failing the most basic requirements, techno solutionism in the world of soccer, and a racist AI judging a beauty contest


Racism and Technology Center

June 11 · Issue #19

Curated news, research and activism on the intersection between racism and technology.


Shocking report by the Algemene Rekenkamer: state algorithms are a shit show
The Algemene Rekenkamer (Netherlands Court of Audit) looked into nine different algorithms used by the Dutch state. It found that only three of them fulfilled the most basic of requirements.
The Rekenkamer selected algorithms that have an impact on Dutch citizens, that carry a risk of incorrect use, and that are in actual use. They also made sure to look at different types of algorithms across different domains. They then used an assessment framework to evaluate the algorithms on governance, privacy, data and model, and IT management. This resulted in the following table:
The tone of the report is very matter-of-fact, but the results are truly shocking. When it comes to bias, for example, organizations like the Dutch Police, the Migration Directorate at the Ministry of Justice, and the National Office for Identity Data have not even properly checked whether the models behind their algorithms contain bias.
The predictive policing system that is in use by the police does not get a single passing score for any of the elements in the assessment framework. Unbelievably, the police have chosen to dig in and dispute the report. In their letter to the government, they write that the ‘Crime Anticipation System’ is low risk in their book, and so they see no problem in continuing to use the system. The Rekenkamer calls their reaction ‘worrying’, as the algorithm could negatively impact real people.
Let’s hope that other organisations are less foolhardy and will follow the Algemene Rekenkamer’s recommendation to continuously (and perpetually) check for bias in algorithms during both the design and the implementation phase, making sure that there are no unwanted systemic deviations for specific individuals or groups.
Diverse algoritmes Rijk voldoen niet aan basisvereisten (various central government algorithms do not meet basic requirements)
‘Smart’ technologies to detect racist chants at Dutch football matches
The KNVB (Royal Dutch Football Association) is taking a tech approach to tackling racist fan behaviour during matches, an approach that runs a great risk of falling into the techno solutionism trap.
Football players have long been calling out abusive behaviour by fans as well as racism in the clubs themselves, with 14% of professional Dutch football players having personally encountered racism in their work. This latest pilot, aiming to employ ‘smart technologies’ to combat racism, is part of the KNVB’s ‘plan of attack against racism and discrimination’, launched in 2020 and developed in response to racist comments directed towards Excelsior player Ahmad Mendes Moreira. Three different technologies will be tested, although the press statement is as yet quite vague about the exact technologies.
It seems that Feyenoord will use video imaging as well as audio to identify verbal abuse, but will also look into “prevention by technology”. PSV will base measurements of the enthusiasm and mood of the crowd on audio data and will use this both to give live feedback to the crowd (hoping to encourage them to engage positively with the game) and to detect anomalies “through artificial intelligence” and intervene where necessary. PEC Zwolle will test an app that can be used to get access to the stadium and will additionally test how “artificial intelligence and machine learning can improve the fan experience and social safety in the stadium”.
The willingness of the KNVB to take concrete action against racism in football is a welcome, albeit long overdue, development. As ex-player Clarence Seedorf recently remarked, there has been “a lot of talk, but little action”. Nevertheless, presenting technologies (especially much-hyped and vague ones such as ‘smart technologies’ and ‘AI’) as a ‘solution’ for racism transforms this deeply social and political problem into a technological one. This type of techno solutionist thinking could draw attention and resources away from addressing root causes, as well as lay the groundwork for the implementation of even more technologies of surveillance and control. Perhaps the KNVB could listen to Seedorf, who points towards educational measures, media coverage and the lack of people of colour in leadership positions.
Pilot met slimme technologie tegen discriminerende spreekkoren (pilot with smart technology against discriminatory chants)
Centring communities in the fight against injustice
In this interview with OneWorld, Nani Jansen Reventlow reflects on the harmful uses of technology perpetuated by private and public actors, ranging from the Dutch child benefits scandal to the use of proctoring in education and ‘super SyRI’ in public services.
Photo by Mishael Phillip from the OneWorld article
Jansen Reventlow reflects on her experience in the digital rights field since setting up Digital Freedom Fund in 2017. She explains how, for a long time, the field has mostly focused on the issues of privacy, data protection, and freedom of expression. Whilst important, this tends to ignore or downplay socio-economic issues and rights. This has partly been so because the entire technology sector has largely been dominated by white, cisgender, middle-class, able-bodied men. Her positionality as a non-white woman working in the field has made fighting for justice and pushing for change, both internally and externally, particularly challenging. Toni Morrison’s words continue to ring true: “the function, the very serious function of racism is distraction. It keeps you from doing your work. It keeps you explaining, over and over again, your reason for being.”
The lack of reckoning with the marginalised and affected groups that bear the brunt of racist and harmful technologies has been given more attention through DFF’s Decolonising Digital Rights Project, expanding the focus and reach of the digital rights community. Building on this movement remains crucial, as racialised and marginalised people continue to suffer from the increasing use of technologies in our societies. Nani’s fight against injustice has had a long trajectory, and her new organisation, Systemic Justice, continues her efforts. Focused on strategic litigation, it centres communities from the start and lets these communities lead, rather than letting ‘experts’ dictate the agenda and approach. Making sure that communities are at the forefront of our struggle against injustice remains crucial if any meaningful systemic change is to happen.
Racist Technology in Action: Beauty is in the eye of the AI
Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind the system clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”
Still from the Beauty.AI promo video
In 2016, the coalition behind Beauty.AI set out to use this ‘impartial’ system to judge a global beauty contest. Contestants sent in a photo, which the system used to rate their beauty based on criteria such as symmetry, wrinkles, and age group. It did not stop there: skin colour was an explicit criterion, and the system apparently also inferred ethnicity and gender from the photos and used these to judge contestants’ beauty. Jordan Pearson, in a story for Motherboard, described the results:
Out of the 44 people that the algorithms judged to be the most “attractive,” all of the finalists were white except for six who were Asian. Only one finalist had visibly dark skin.
The clear racial preference of the system can be explained by the biased training data and the homogeneous group of system developers, as well as the fact that skin colour was used as a criterion to begin with. One can only imagine how this criterion was concretely weighted in the system. The problem of biased training data is one we have seen many times before in systems using any type of facial recognition software (here, here and here).
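To make the role of biased training data concrete, below is a deliberately simplified sketch – not a reconstruction of Beauty.AI’s actual, unpublished model – of a toy scorer that rates a face purely by how much it resembles the faces it was trained on. The single ‘skin tone’ feature and all numbers are made up for illustration; the point is that when light-skinned faces dominate the training set, the skew in the scores follows directly from the skew in the data.

```python
# Illustrative sketch only: a toy "beauty scorer" that rates a face by how
# similar it is to the faces it was trained on. This is not Beauty.AI's
# actual method; it only demonstrates how a skewed training set, on its
# own, produces racially skewed scores.
import random

random.seed(42)

# Hypothetical training data: each face reduced to a single feature,
# skin tone in [0, 1] (0 = darkest, 1 = lightest), with a 9:1 imbalance.
training_faces = (
    [random.uniform(0.7, 1.0) for _ in range(900)]    # light-skinned majority
    + [random.uniform(0.0, 0.3) for _ in range(100)]  # dark-skinned minority
)

def beauty_score(face_tone: float) -> float:
    """Score a face by its average similarity to the training faces.

    Faces that resemble the (mostly light-skinned) training set score
    higher, even though 'attractiveness' is never explicitly defined.
    """
    similarities = [1.0 - abs(face_tone - t) for t in training_faces]
    return sum(similarities) / len(similarities)

print("score for a light-skinned face:", round(beauty_score(0.9), 3))
print("score for a dark-skinned face: ", round(beauty_score(0.1), 3))
# The light-skinned face scores markedly higher, purely because of who
# was over-represented in the training data.
```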
However, this example of a racist technology also points towards deeper problems. The basic notion that a concept such as beauty can be judged ‘objectively’ if only the right algorithm is found is fundamentally flawed, even if deeply problematic categories such as ethnicity or gender (which cannot be ascertained based on appearance to begin with) are removed from the equation. And even if it were technically possible, these should never factor into a conception of beauty.
Events, exhibits and other things to do
Help us…
Do you know somebody who might think this newsletter is worthwhile too? Please tell them to subscribe here.
Did you read something that would be relevant for this newsletter? Then share it with us.
About the Racism and Technology Center
The Racism and Technology Center is a non-profit foundation based in the Netherlands. It uses technology as a mirror to reflect and make visible existing racist practices in (Dutch) society. As a knowledge center, it provides a platform, resources, knowledge, skills and legitimacy to anti-racism and digital rights organizations to help them create understanding of how racism is manifested in technology, with the goal of dismantling systems of oppression and injustice. Read more about the Center.
This is a monthly newsletter.