Image
Ten boxes, each containing people: the first shows two men holding hands; the second a man in glasses facing right; the third two women pushing a baby carriage; the fourth a blonde woman facing right; the fifth a woman in a headscarf facing left; the sixth a woman of colour; the seventh a person in a wheelchair; the eighth a woman of colour; the ninth a pregnant woman; and the tenth a man in glasses. People from disadvantaged groups are coloured orange, the rest blue, against a dark blue background.

Community-specific human rights impacts of AI

The impact of AI systems on marginalised groups is all too often overlooked, which exacerbates systemic problems and inequalities. If you’re interested in learning more about the impact of AI on specific communities and would like to delve right into it, this package is for you!

By the end of this learning package, you will:

  • Have a deeper understanding of the impact of AI systems on particular communities;
  • Know where to find useful information on specific groups (e.g. based on gender, race, migration status or disability).


The second package brings together existing resources on how AI affects the human rights of various sections of society. Note that we try to add to and update this resource list on an ongoing basis. If there are particular communities you would like to see added, and/or if you are aware of other resources worth including on this list, feel free to email us at [email protected].

Techno-solutionism

We see that AI is being used as a buzzword to promote dangerous data-driven technologies disguised as ‘innovation’ and ‘progress’, yet there is often no clear vision or understanding of whether such technology is even suited to solving real-life problems.

Harms for those at risk

There is a marked power imbalance between the developers and deployers of AI systems and the communities who use them or in whose spaces they are employed, particularly historically marginalised and underrepresented groups. When considering the potential opportunities offered by AI systems, it is important to begin by analysing the relevant power dynamics and focusing on the needs of the most at-risk communities. ‘Nothing about us without us’ rings true in every situation, including AI design, deployment and governance, given the significant human rights impacts and potential of algorithmic systems.

Testimonials


We need to be actively involved in decision-making concerning the development and implementation of AI legislation and policy. It is more than a plea; it is an obligation that all member states must comply with, as they have ratified the UN Convention on the Rights of Persons with Disabilities.

Mia Ahlgren
Human Rights Policy Officer at the Swedish Disability Rights Federation and member of the European Disability Forum's ICT expert group

European institutions that monitor human rights, assess the impact of AI systems, and issue recommendations to the EU on how to build human-centric and trustworthy AI need to decolonise themselves and include a wider spectrum of voices from the communities that are actually vulnerable and at real risk of technological discrimination.

Benjamin Ignac
Romani technologist, Research Fellow at the Roma Initiatives Office at OSF and Public Policy alumnus from the University of Oxford
Video explainer

Want to learn more about the impact of AI on marginalised groups? Watch the video explainer.

When watching the video, consider the following questions:
  • How, concretely, does AI for security purposes disproportionately affect those that you work with?
  • Mai E'leimat discusses persons most at risk from increased profiling. Have you seen a similar impact trend in your context?
  • Human rights impact assessments and evidence-based meaningful participation can be used to prevent the harmful effects of AI. Do you know of any AI system that underwent these checks before being deployed?

Resources centered on community-specific impacts of AI

Black, Indigenous, and people of colour

  1. Blog post: Data Racism, A New Frontier (European Network Against Racism, ENAR). This blog explains what data racism is in the context of an emerging strand of ENAR's work exploring racism in the digital space. 
  2. Book: Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin (2019). From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to show how emerging technologies can reinforce White supremacy and deepen social inequity. In this illuminating guide, Benjamin provides conceptual tools for decoding tech promises by applying sociologically informed scepticism. In doing so, she challenges us to question not only the technologies we are sold but also the ones we ourselves manufacture. Visit the book's free Discussion Guide here.
  3. Book: Algorithms of Oppression: How search engines reinforce racism by Safiya Umoja Noble (2018). The author challenges the idea that search engines like Google provide a level playing field for the entire spectrum of ideas, identities and activities. Based on an analysis of media searches and extensive research on paid online advertising, Noble exposes a culture of racism and sexism in how discoverability is created online.
  4. Working group: Indigenous AI. The Indigenous Protocol and AI Working Group develops new conceptual and practical approaches to building the next generation of AI systems. Here you can find their position paper and several blogs from its members.  
  5. Film: Coded Bias by Shalini Kantayya explores the fallout from MIT Media Lab researcher Joy Buolamwini’s discovery that facial recognition does not accurately see darker-skinned faces, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
  6. The Racism and Technology Center uses technology as a mirror to reflect existing racist practices in society and make them visible. This knowledge centre provides a platform as well as resources, knowledge, skills and legitimacy to anti-racism and digital rights organisations to help them create an understanding of how racism is manifested in technology, the goal being to dismantle systems of oppression and injustice. See for example their collected examples of racist technology.
  7. Toolkit: Artificial Intelligence in HR (European Network Against Racism, ENAR). The toolkit explores the role of human bias and structural discrimination in AI used for human resource management. It provides an accessible explanation of how structural racism and bias are reproduced and amplified by intelligent systems and the use of key technologies in the field. It also provides clear steps companies can take to address these biases, which disproportionately impact people of colour, women and other marginalised groups.
Image
A dark blue background with network patterns, a black-and-white mountain range and two circles (one light blue, one orange). In the foreground is a woman of colour facing left, her hair in a ponytail.

Women, girls, and non-binary people

  1. Project: Gender Shades (Timnit Gebru & Joy Buolamwini). The Gender Shades project evaluates the accuracy of AI-powered gender classification products. The website features the research, data set, results and a short video explaining the project and its results.
  2. Project: The Oracle for Transfeminist Technologies provides tools for collective brainstorming about alternative imaginaries surrounding technology.
  3. Research: My Data Rights, a feminist reading of the right to privacy and data protection in the age of AI.
  4. Book: Data Feminism by Catherine D’Ignazio and Lauren F. Klein. The book provides a new way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism.

LGBTQIA+

  1. Report: GLAAD Social Media Safety Index. This report draws on extensive input from leaders at the intersection of tech and LGBTQIA+ advocacy, and it contains a broad literature review that distils other reports, articles, research and journalism. It also reviews platform policies and analyses how they match up (or don’t match up) with actual LGBTQIA+ user experience.
  2. Campaign: The campaign to ban automated recognition of gender and sexual orientation.

Refugees and migrants

  1. Report: Technological Testing Grounds by EDRi and the Refugee Law Lab.

People with disabilities

  1. Journal paper: Artificial intelligence and disability: too much promise, yet too little substance? by Peter Smith and Laura Smith explores the day-to-day realities of how AI can both support and frustrate people with disabilities, and draws conclusions on how AI software and technology might best be developed in the future.

Socio-economic inequality

  1. Book: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. See also this video presentation based on the book.
  2. Collection: A Primer on AI in/from the Majority World by Data & Society is a curated collection of more than 160 thematic pieces, designed to explore the presence of AI and technology in the geographic regions that are home to the majority of the global population.

Reflect on what you have learnt:
  • Do you work with any of the communities listed in this package? If so, can you think of examples of the disproportionate impact of AI on these persons? Was the information helpful in understanding that impact further?
  • Can you think of ways AI has been deployed in your context where improvements could have been made had civil society been consulted at each implementation stage?