Image
Four figures working on laptops around a desk, with EU stars, a law paragraph sign, a gavel, a megaphone and binary code around them

Civil society should be aware of the EU AI Act’s limitations, particularly in areas like law enforcement, migration or national security, and advocate for stronger protections for fundamental freedoms in their countries.  

By the end of this learning package, you will: 

  • Understand the key changes that the AI Act brings, including the new tools civil society gets to hold AI developers and deployers accountable in practice; 
  • Know what national-level enforcement will look like; 
  • Have learnt how people can contribute their cases to the broader AI Act implementation. 
Introduction

The European Union adopted the AI Act in 2024 after a three-year period of legislative development. It is globally the most comprehensive attempt to regulate AI technologies. The main mission? To set up clear rules for AI development and use across the EU, all while protecting fundamental rights.   

But has that mission really been achieved? Despite vigorous advocacy from CSOs, which secured some significant improvements, such as prohibitions of the most harmful systems and the requirement for public authorities to assess impacts on fundamental rights before using such systems, the final version still leaves many ambiguities. The AI Act is full of far-reaching exceptions, which lower protection standards, especially in law enforcement, migration and national security. Still, it establishes a mandatory framework that, if properly implemented, could bring more transparency and accountability to how AI is developed and deployed, especially in public services. The Act largely relies on secondary legislation (that is, delegated and implementing acts, codes of practice, templates and technical standards) to translate its requirements into concrete processes and benchmarks. Picture it as a giant jigsaw puzzle where the final pieces are crucial to make sure AI developers and deployers play fair and can be effectively held accountable in practice. 

Finally, the AI Act, while far from being a golden standard for AI regulation, is poised to influence other regulatory developments around the world. Civil society should be aware of the AI Act’s downsides and advocate for stronger protections in their countries.   

How the AI Act works 

Timeline 

The AI Act applies in phases, with almost full application anticipated by August 2026. 

Aug 2024: The AI Act formally enters into force, kicking off the timeline for various prohibitions and obligations enshrined in the law.

Feb 2025: The prohibitions on "unacceptable risk" AI kick in, such as systems that aim to manipulate or deceive people in order to change their behaviour, or seek to evaluate or classify people by "social scoring".

Aug 2025: A range of obligations go into effect on the providers of the so-called "general-purpose AI" models that underpin generative AI tools like ChatGPT or Google Gemini.

Aug 2026: Rules now apply to "high risk" AI systems, including those used in biometrics, critical infrastructure, education and employment.

Understanding risk within the AI Act  

The Act is a risk-based regulation with four levels: unacceptable, high, limited and minimal, plus an additional category for general-purpose AI (GPAI). 

Image
The EU AI Act defines four levels of risk for AI systems: a pyramid with minimal risk at the bottom, then limited risk, high risk and unacceptable risk at the top.
Levels of risk within the EU AI Act

There are prohibitions on applications with unacceptable risks, and high-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments. Limited-risk applications only have transparency obligations, while minimal-risk applications are not regulated. There are separate rules for GPAI, which include transparency requirements and additional evaluations for the most advanced models. 
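
To make the tiered structure concrete, here is a minimal sketch in Python that models the four risk levels and the headline obligations summarised above as a simple lookup. It is purely illustrative, not a legal classification tool: the tier names and one-line obligation summaries are assumptions drawn from this overview, and real classification under the Act turns on detailed legal criteria.

```python
# Illustrative sketch only: a simplified lookup of the AI Act's risk
# tiers and their headline obligations, as summarised in this section.
# Real classification under the Act depends on detailed legal criteria.
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


# Hypothetical one-line obligation summaries per tier (not legal advice).
# Note: general-purpose AI (GPAI) sits outside this pyramid and follows
# its own rules, as described above.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited from sale and use in the EU",
    RiskLevel.HIGH: ("security, transparency and quality obligations, "
                     "plus conformity assessments"),
    RiskLevel.LIMITED: "transparency obligations only",
    RiskLevel.MINIMAL: "not regulated by the AI Act",
}


def obligations_for(level: RiskLevel) -> str:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[level]


if __name__ == "__main__":
    # Walk the pyramid from top (strictest) to bottom (lightest).
    for level in RiskLevel:
        print(f"{level.value}: {obligations_for(level)}")
```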

Intended purpose vs. general-purpose AI 

Some AI systems have a pre-defined purpose or field of application. For example, remote biometric identification systems perform a specific task of comparing people’s biometric features with a reference database. However, some AI systems can be applied for many different purposes and their tasks are not pre-defined. This is what the AI Act refers to as “general-purpose AI”. For example, large language models that process and generate text, such as ChatGPT, can be used for many different purposes. For such types of systems, it is not easy to determine the level of risk, as they do not have a pre-defined area of use, e.g. only in law enforcement. This is why the AI Act creates separate rules for them. 

The AI Act creates distinct obligations for AI providers and AI deployers. These terms should be understood as follows: 

  • AI provider: an individual or company, public authority, agency or other body that develops an AI system or a GPAI model (or that commissions this development) and makes it available on the EU market (for sale or use) under its own name or trademark, whether for payment or free of charge. 
  • AI deployer: an individual or company, public authority, agency or other body using an AI system under its authority. This does not apply to personal, non-professional use.  
What are the key changes the AI Act brings? 
Prohibited AI systems 

From February 2025, the sale and use of some AI systems are banned in the EU. These include, for example: 

  • Real-time remote biometric identification in public spaces (e.g. face recognition) in the area of law enforcement (with exceptions allowing EU countries to authorise facial recognition in some cases); 
  • Biometric categorisation to infer sensitive information about people (e.g. their race or sexuality); 
  • Creating or expanding facial recognition databases through scraping of facial images from the internet or video surveillance footage; 
  • Emotion recognition in education or employment; 
  • Predictive policing when it is based on profiling individuals (as opposed to predicting crime based on criminal statistics from a certain neighbourhood) and only when it is not supporting an assessment by a police officer.  

While the inclusion of prohibitions is a clear victory of civil society advocacy, the AI Act provisions are riddled with loopholes, which calls into question how effective they will be in protecting civic space and fundamental rights in practice. Civil society has a key role to play in ensuring that exceptions are interpreted narrowly, and that existing data protection and anti-discrimination laws are respected. See for example: Civil society statement on European Commission guidelines implementing the prohibitions

More transparency and accountability of “high-risk” AI systems

Most of the AI Act requirements will apply to so-called ‘high-risk’ AI systems, which require close oversight to prevent societal and individual harm. They include, for example: 

  • Systems which rely on biometrics, and which do not fall into prohibited practices (for example, some uses of remote biometric identification in law enforcement and all such uses in other areas or emotion recognition systems outside the areas of employment and education); 
  • Systems used for evaluating eligibility for public benefits; 
  • Systems used by migration authorities to assess or evaluate the risk posed by visa or asylum applicants; 
  • Credit scoring systems; 
  • Systems used in recruitment or for workers’ management; 
  • Systems used for influencing the outcome of an election or voting behaviour.  

Providers of high-risk AI systems will be required to, for example: 

  • Assess and monitor risks to health, safety and fundamental rights; 
  • Ensure the use of high-quality data for training algorithms and prevent bias; 
  • Maintain up-to-date technical documentation and provide accurate and comprehensive information to deployers (e.g. public authorities procuring the system). 

Additionally, deployers of high-risk systems in the public sector, as well as banks and insurance companies, will have to assess and mitigate impacts on fundamental rights prior to using the system. 

Finally, high-risk AI systems will have to be registered in a publicly accessible EU database to enable more public scrutiny. However, this obligation will not apply to systems developed for or used in the law enforcement or migration context, which is a significant loophole given that risks to human rights are arguably the highest in these areas. 

Safeguards for generative AI (for example, ChatGPT) 

The AI Act creates dedicated obligations for providers of generative AI. This includes a policy to comply with copyright rules and publishing a summary of what content they use to train their models. Additionally, the European Commission will designate some of the most powerful generative AI systems as presenting “systemic risks”. These companies will have to assess and mitigate risks, as well as monitor and report serious incidents to the European Commission. In addition, the outputs of generative AI systems will have to be marked and detectable as artificially generated. 

Worrying loopholes 

While the Act requires AI developers to maintain high standards for the technical development of AI systems (e.g. in terms of documentation or data quality), measures intended to protect fundamental rights, including key civic rights and freedoms, are often insufficient. They are riddled with far-reaching exceptions, lowering protection standards.  

Worryingly, a blanket exemption for national security was introduced in the AI Act. The AI Act will automatically exempt AI systems developed or used solely for the purpose of national security from scrutiny, regardless of whether this is done by a public authority or a private company. In practical terms, this means that governments could invoke national security to introduce otherwise prohibited systems, such as mass biometric surveillance. As observed in Member States such as France and Hungary, the justification of protecting national security has already been used to restrict the freedoms of association, assembly and expression, and to expand the surveillance powers of the police. 

It is also concerning that the AI Act introduced a double standard for protection in the areas of law enforcement and migration. The police and border authorities will not have to publish information about the systems that they use, nor the results of fundamental rights impact assessments. Harmful AI systems used against people on the move, e.g. polygraphs, emotion recognition or biometric surveillance, are not expressly prohibited.  

Complaints and redress 

Unlike the GDPR, the AI Act does not create an individual right to complain. Rather, it gives anyone, including civil society organisations, the right to flag infringements to the market surveillance authority, even if they are not directly affected. Such complaints should be taken into account in the authorities’ work. In cases that affect consumers (e.g. credit scoring, insurance premiums, the use of generative AI), consumer groups will be able to use representative action, in line with the EU collective redress directive, to seek redress for AI Act violations. The details of both redress options will be regulated on the national level.  

Additionally, the right to explanation applies to any person subject to a decision taken based on the output from a high-risk AI system which “produces legal effects or similarly significantly affects that person”. This complements the existing right to an explanation under the GDPR and appears to extend it beyond solely automated decisions to those where AI has supported a human decision.  

Who is going to implement and enforce the AI Act? 

The AI Act will be operationalised by a multifaceted governance framework involving various entities, including the European AI Board, the European AI Office and national authorities, as well as advisory and expert bodies. By August 2025, each Member State must set up or appoint authorities with the power to enforce the AI Act and issue fines for non-compliance. National human rights institutions will also be able to access documentation developed for AI Act compliance. At the EU level, the European Data Protection Supervisor (EDPS) and the new AI Office under the European Commission will take charge. The AI Board will offer support and advice, and ensure consistent enforcement across all countries. Moving forward, it is essential for civil society to participate at all levels where possible. 

Formally, civil society will be able to join the advisory forum, a new body tasked with advising the Commission and the AI Board regarding the AI Act implementation and enforcement. Members of the forum should represent, in a balanced way, commercial and non-commercial interests. The Commission is yet to announce a call for expression of interest to join the forum. It will be crucial for the Commission to actively facilitate and encourage meaningful civil society participation and to ensure proper representation of fundamental rights expertise. 

Image
Visual showcasing overview of AI Act enforcement and oversight institutions
Overview of AI Act enforcement and oversight institutions
Role of civil society

How can civil society influence the implementation and enforcement of the AI Act?

CSOs, particularly those focused on social justice and representing marginalised communities most affected by AI, are essential in shaping the response to the AI Act. It is crucial to recognise that the negative impacts of AI often hit marginalised communities the hardest, especially in areas like biometric surveillance, migration, and national security. These technologies can threaten fundamental rights, making it even more important for civil society to advocate for strong protections and ensure that AI systems are used responsibly and fairly. It is important for the EU and Member State institutions to actively engage with a wide range of civil society groups, especially those focused on fundamental rights. This engagement is crucial for upholding the spirit of the AI Act and adhering to the founding values of the European Union.

With this in mind, the video below explains how civil society can put the AI Act to use. 

Video explainer

ECNL's Karolina Iwanska, Ella Jakubowska of European Digital Rights (EDRi), Caterina Rodelli of Access Now and Nikolett Aszódi of AlgorithmWatch unpack the EU AI Act through the lens of biometrics, migration and new transparency tools, and outline how civil society can use the AI Act to keep AI use to account.  


How can your organisation use the AI Act for good:

  • Advocate for a full ban of biometric surveillance on the national level.
  • Apply to join the advisory forum on the EU level, or advocate for the creation of a similar body on the national level.
  • Use the new public database of high-risk AI systems to investigate potentially harmful uses of AI in the public sector.
  • Use the complaint procedure to signal infringements to supervisory authorities.
  • Establish cooperation with national human rights institutions and equality bodies, which will be able to access documentation produced under the AI Act to fulfil their tasks.
  • Advocate for extending AI Act requirements and limits to AI used in national security.
  • Establish or join national and EU-level coalitions of civil society actors to coordinate activities and exchange knowledge and skills.
Reflect on what you have learnt:
  • Which of the listed advocacy or engagement opportunities align most closely with my organisation’s work, and what concrete steps can we take to contribute to AI Act enforcement?
  • Who are the key allies I can collaborate with to advocate for a full ban of biometric surveillance in my country?
  • Are there any cases or concerns from my community that should be highlighted in the AI Act’s implementation, and how can I contribute them?
  • Who in my country is responsible for enforcing the AI Act, and how can I engage with them to ensure civil society perspectives are considered?
Quiz
Test your knowledge
Document
Towards an AI Act that serves people and society
Strategic actions for civil society and funders to shape the outcomes of the AI Act.