The impact of AI systems on marginalised groups is all too often overlooked, which exacerbates systemic problems and inequalities. If you’re interested in learning more about the impact of AI on specific communities and would like to delve right into it, this package is for you!
By the end of this learning package, you will:
- Have a deeper understanding of the impact of AI systems on particular communities;
- Know where to find useful information on specific groups (e.g. gender, race, migrants, people with disabilities, etc.).
The second package brings together existing resources on how AI affects the human rights of various sections of society. Note that we try to add to and update this resource list on an ongoing basis. If there are particular communities you would like us to add, or other resources worth including on this list, feel free to email us at [email protected].
Techno-solutionism.
We see that AI is being used as a buzzword to promote dangerous data-driven technologies disguised as ‘innovation’ and ‘progress’. But there is often no clear vision or understanding of whether such technology is even suited to solving real-life problems.
Harms for those at risk.
There is a marked power imbalance between developers and deployers of AI systems and the communities who use them or in whose spaces they are employed, particularly historically marginalised and underrepresented groups. When considering the potential opportunities offered by AI systems, it is important to begin by analysing the relevant power dynamics and focusing on the needs of the most at-risk communities. ‘Nothing about us without us’ rings true in every situation, including AI design, deployment and governance, given the significant human rights impacts and potential of algorithmic systems.
Testimonials
We need to be actively involved in decision-making concerning the development and implementation of AI legislation and policy. It is more than a plea; it is an obligation that all member states must comply with, as they have ratified the UN Convention on the Rights of Persons with Disabilities.
European institutions that monitor human rights, assess the impact of AI systems, and issue recommendations to the EU on how to build human-centric and trustworthy AI need to decolonise themselves and include a wider spectrum of voices from the communities that are vulnerable and at real risk of technological discrimination.
Want to learn more about the impact of AI on marginalised groups? Watch the video explainer.
When watching the video, consider the following questions:
- How, concretely, does AI for security purposes disproportionately affect those that you work with?
- Mai E'leimat discusses persons most at risk from increased profiling. Have you seen a similar impact trend in your context?
- Human rights impact assessments and evidence-based, meaningful participation can be used to prevent harmful effects of AI. Do you know of any AI system that went through these checks before being deployed?
Resources centred on community-specific impacts of AI
Black, Indigenous, and people of colour
- Blog post: Data Racism, A New Frontier (European Network Against Racism, ENAR). This blog explains what data racism is in the context of an emerging strand of ENAR's work exploring racism in the digital space.
- Book: Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin (2019). From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to show how emerging technologies can reinforce White supremacy and deepen social inequity. In this illuminating guide, Benjamin provides conceptual tools for decoding tech promises by applying sociologically informed scepticism. In doing so, she challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture. Visit the book's free Discussion Guide here.
- Book: Algorithms of Oppression: How search engines reinforce racism by Safiya Umoja Noble (2018). The author challenges the idea that search engines like Google provide a level playing field for the entire spectrum of ideas, identities and activities. Based on an analysis of media searches and extensive research on paid online advertising, Noble exposes a culture of racism and sexism in how discoverability is created online.
- Working group: Indigenous AI. The Indigenous Protocol and AI Working Group develops new conceptual and practical approaches to building the next generation of AI systems. Here you can find the group's position paper and several blog posts from its members.
- Film: Coded Bias by Shalini Kantayya explores the fallout from MIT Media Lab researcher Joy Buolamwini’s discovery that facial recognition does not accurately see darker-skinned faces, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
- The Racism and Technology Center uses technology as a mirror to reflect existing racist practices in society and make them visible. This knowledge centre provides a platform as well as resources, knowledge, skills and legitimacy to anti-racism and digital rights organisations to help them create an understanding of how racism is manifested in technology, the goal being to dismantle systems of oppression and injustice. See for example their collected examples of racist technology.
- Toolkit: Artificial Intelligence in HR (European Network Against Racism, ENAR). The toolkit explores the role of human bias and structural discrimination in AI used for human resource management. It provides an accessible explanation of how structural racism and bias are reproduced and amplified by intelligent systems and the use of key technologies in the field. It also provides clear steps companies can take to address these biases, which disproportionately impact people of colour, women and other marginalised groups.
Women, girls, and non-binary people
- Project: Gender Shades (Timnit Gebru & Joy Buolamwini). The Gender Shades project evaluates the accuracy of AI-powered gender classification products. The website features the research, data set and results, along with a short video explaining the project and its findings.
- Project: The Oracle for Transfeminist Technologies is a space that provides tools for enabling collective brainstorming on alternative imaginaries surrounding technologies.
- Research: My Data Rights, a feminist reading of the right to privacy and data protection in the age of AI.
- Book: Data Feminism by Catherine D’Ignazio and Lauren F. Klein. The book provides a new way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism.
LGBTQIA+
- Report: GLAAD Social Media Safety Index. This report draws on extensive input from leaders at the intersection of tech and LGBTQIA+ advocacy, and it contains a broad literature review that distils other reports, articles, research and journalism. It also reviews platform policies and analyses how they match up (or don’t match up) with actual LGBTQIA+ user experience.
- Campaign: The campaign to ban automated recognition of gender and sexual orientation.
Refugees and migrants
- Report: Technological Testing Grounds by EDRi and the Refugee Law Lab.
People with disabilities
- Journal paper: Artificial intelligence and disability: too much promise, yet too little substance? by Peter Smith and Laura Smith. The authors explore the day-to-day realities of how AI can support, and frustrate, people with disabilities, and from this draw some conclusions on how AI software and technology might best be developed in the future.
Socio-economic inequality
- Book: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. See also this video presentation based on the book.
- A Primer on AI in/from the Majority World by Data & Society is a curated collection of more than 160 thematic pieces, designed to explore the presence of AI and technology in the geographic regions that are home to the majority of the global population.
Children
Reflect on what you have learnt:
- Do you work with any of the communities listed in this package? If so, can you think of examples of the disproportionate impact of AI on these communities? Was the information helpful in understanding that impact further?
- Can you think of ways AI has been deployed in your context where improvements could have been made had civil society been consulted at each implementation stage?