EU AI Act Explained: What Organizations Need to Know

Author: Michał Bugajło

Publication Date: March 16, 2026

Introduction

Artificial intelligence has rapidly become one of the most important drivers of digital transformation. AI systems based on machine learning and large-scale data analysis are increasingly used across a wide range of sectors — from automating operational processes to supporting decision-making in finance, healthcare, public administration, and many other industries.

As the use of AI expands, new regulatory challenges are emerging. These challenges primarily concern the safety of AI systems, transparency of automated decision-making, and the protection of fundamental rights of individuals affected by AI technologies.

In response, the European Union adopted the Artificial Intelligence Act (EU AI Act) — the first comprehensive legal framework regulating artificial intelligence at a supranational level.

The EU AI Act establishes rules for the development, placing on the market, and use of AI systems within the European Union.

A central feature of the regulation is its risk-based approach, which differentiates regulatory obligations depending on the potential impact of an AI system on public safety and fundamental rights.

Due to its extraterritorial scope, the EU AI Act may also apply to organizations outside the European Union if their AI systems are placed on the EU market or if the results produced by those systems are used within the EU.

This article explains the key principles of the EU AI Act and outlines the most important implications for organizations developing or using artificial intelligence systems.

1. Key Principles of the EU AI Act

The EU AI Act introduces the first comprehensive supranational framework governing artificial intelligence. The regulation creates new legal obligations for organizations that develop, provide, deploy, or distribute AI systems.

The core principles of the EU AI Act include:

  • introduction of a risk classification framework for AI systems;
  • specific regulatory requirements for high-risk AI systems;
  • regulation of multiple actors in the AI ecosystem, including providers, deployers, importers, and distributors;
  • extraterritorial applicability, meaning the regulation may apply to organizations outside the EU;
  • the possibility of significant administrative penalties for non-compliance.

2. The EU Regulatory Context for Artificial Intelligence

The EU AI Act forms part of a broader European regulatory framework for digital technologies. In recent years, the European Union has developed a comprehensive regulatory ecosystem covering areas such as data protection, cybersecurity, and platform accountability.

Key legal acts in this regulatory landscape include:

  • General Data Protection Regulation (GDPR),
  • NIS2 Directive,
  • Digital Services Act (DSA),
  • Data Act,
  • Cyber Resilience Act.

The EU AI Act complements this framework by addressing risks associated with the development and deployment of artificial intelligence technologies.

The legislative proposal was presented by the European Commission in 2021 as part of the EU strategy for Trustworthy AI.

3. AI Risk Classification Under the EU AI Act

A central element of the regulation is the AI risk classification framework, which determines the level of regulatory obligations applicable to different types of AI systems.

Table 1. Risk Classification of AI Systems Under the EU AI Act

| Risk Category | Characteristics | Examples | Regulatory Consequences |
|---|---|---|---|
| Prohibited AI systems | systems considered unacceptable under EU law | social scoring, manipulative AI systems | complete prohibition |
| High-risk AI systems | systems that may significantly affect fundamental rights or safety | recruitment systems, critical infrastructure | strict regulatory requirements |
| Limited-risk AI systems | systems requiring users to be informed that they interact with AI | chatbots, generative AI | transparency obligations |
| Minimal-risk AI systems | systems with limited impact | spam filters | no specific regulatory requirements |

Each category entails a different set of regulatory obligations.
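As a purely illustrative sketch (not legal advice), the tiered structure above can be modeled in Python. The keyword lists and the `triage` helper below are hypothetical: real classification under the EU AI Act requires legal analysis against the regulation's annexes, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical keyword lists, loosely based on the examples in Table 1.
PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_AREAS = {"recruitment", "credit scoring", "critical infrastructure",
                   "education", "law enforcement"}

def triage(use_case: str) -> RiskTier:
    """Rough first-pass triage of an AI use case into a risk tier."""
    text = use_case.lower()
    if any(p in text for p in PROHIBITED_PRACTICES):
        return RiskTier.PROHIBITED
    if any(a in text for a in HIGH_RISK_AREAS):
        return RiskTier.HIGH
    if "chatbot" in text or "generative" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A triage function like this can only flag candidates for review; borderline systems still need case-by-case assessment against Annex III.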

4. Prohibited AI Systems

The most restrictive category includes AI applications considered incompatible with EU values and the protection of fundamental rights.

Examples include systems that:

  • manipulate user behavior,
  • implement social scoring mechanisms,
  • conduct real-time biometric identification in public spaces (subject to limited law-enforcement exceptions).

5. High-Risk AI Systems

The second category consists of high-risk AI systems, which may significantly affect individual rights or public safety.

Examples include AI systems used in:

  • recruitment and employment decisions,
  • education and student assessment,
  • financial services,
  • public administration,
  • critical infrastructure.

A detailed list of high-risk AI applications is provided in Annex III of the EU AI Act.

6. Regulatory Requirements for High-Risk AI Systems

The EU AI Act establishes a comprehensive framework of obligations governing the design, development, and operation of high-risk AI systems.

Table 2. Key Regulatory Requirements

| Regulatory Area | Scope of Obligations |
|---|---|
| Risk management system | identification, analysis, and mitigation of risks associated with the AI system |
| Data governance | ensuring the quality, representativeness, and adequacy of training and testing datasets |
| Technical documentation | preparation and maintenance of documentation demonstrating regulatory compliance |
| Human oversight | enabling effective human supervision of AI systems |
| Accuracy and robustness | ensuring appropriate accuracy, stability, and resilience against errors or manipulation |
| Post-market monitoring | monitoring system performance after deployment and addressing identified risks |

7. Limited-Risk AI Systems

The third category covers limited-risk AI systems.

For these systems, the EU AI Act focuses mainly on transparency obligations rather than extensive compliance requirements.

In particular, users must be informed when they are interacting with an AI system.

Examples include:

  • customer support chatbots,
  • generative AI tools producing text or images,
  • systems modifying audiovisual content.

8. Minimal-Risk AI Systems

The final category includes minimal-risk AI systems.

In general, the EU AI Act does not impose specific regulatory obligations on these systems.

Examples include commonly used applications such as:

  • spam filtering systems,
  • recommendation algorithms,
  • AI tools supporting data analytics.

9. General-Purpose AI Models (GPAI)

The EU AI Act also introduces a dedicated regulatory framework for general-purpose AI models (GPAI).

These models can be used across a wide range of applications and integrated into multiple systems. This category includes large language models (LLMs) and other generative AI models.

The regulation also introduces the concept of general-purpose AI models with systemic risk.

Due to their scale, computational capabilities, and potential market impact, these models may be subject to additional regulatory requirements.

10. Key Regulatory Roles Under the EU AI Act

The EU AI Act applies to several categories of actors within the AI ecosystem. Regulatory obligations depend on the role an entity plays in the AI value chain.

Table 3. Main Regulatory Roles

| Entity | Role in the AI Ecosystem | Example |
|---|---|---|
| Provider | entity developing or placing an AI system on the market under its own name or trademark | company developing an AI system |
| Deployer | organization using an AI system in its operations | bank using AI for credit scoring |
| Importer | entity placing an AI system from a third country on the EU market | company importing AI solutions |
| Distributor | entity making an AI system available within the supply chain | technology reseller |

A key practical distinction exists between the provider, who introduces the system to the market, and the deployer, who uses it operationally.

11. Extraterritorial Scope of the EU AI Act

A notable feature of the EU AI Act is its extraterritorial reach, meaning the regulation may apply to entities outside the European Union.

This occurs when:

  • an AI system is placed on the EU market, or
  • the outputs of the AI system are used within the EU.

In practice, this means the regulation may affect global technology companies operating internationally.

12. Why the EU AI Act May Become a Global Regulatory Standard

The EU AI Act is widely regarded as a potential global benchmark for AI governance.

As previously observed with GDPR, European regulatory initiatives can significantly influence global regulatory standards.

One reason is the extraterritorial nature of the regulation, which may require multinational companies to adapt their AI systems to EU requirements.

Another factor is the risk-based regulatory approach, which allows obligations to be calibrated according to the societal impact of AI technologies.

13. EU AI Act Implementation Timeline

The EU AI Act entered into force on 1 August 2024. However, most provisions will become applicable gradually according to the following timeline.

Table 4. Implementation Timeline

| Regulatory Stage | Start Date | Scope |
|---|---|---|
| Prohibited AI practices | 2 February 2025 | prohibition of specific AI practices |
| General-purpose AI models | 2 August 2025 | obligations for GPAI models |
| Main provisions of the EU AI Act | 2 August 2026 | most regulatory requirements become applicable |
| Sector-specific rules | 2 August 2027 | full application of selected sector-specific provisions |
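For planning purposes, the staged timeline can be expressed as data. The sketch below encodes the milestone dates from Table 4 and returns which stages are already applicable on a given date; the `applicable_stages` helper is an illustrative assumption, not an official tool.

```python
from datetime import date

# Key application dates from the EU AI Act implementation timeline (Table 4).
MILESTONES = {
    date(2025, 2, 2): "prohibited AI practices",
    date(2025, 8, 2): "general-purpose AI model obligations",
    date(2026, 8, 2): "most regulatory requirements",
    date(2027, 8, 2): "selected sector-specific provisions",
}

def applicable_stages(today: date) -> list[str]:
    """Return the regulatory stages already applicable on a given date,
    in chronological order."""
    return [stage for d, stage in sorted(MILESTONES.items()) if d <= today]
```

For example, in early 2026 only the prohibitions and the GPAI obligations are in effect, while the bulk of the requirements applies from 2 August 2026.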

14. What Should Organizations Do Now?

Organizations should begin preparing for compliance with the EU AI Act well before its full application.

Key steps include:

  • identifying AI systems used within the organization,
  • classifying systems according to the AI Act risk framework,
  • assessing applicable regulatory obligations,
  • implementing AI governance processes,
  • preparing compliance documentation.
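The steps above amount to building and maintaining an AI inventory. A minimal sketch of such an inventory record, assuming hypothetical field names and an `open_actions` helper of our own invention, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization's AI inventory (illustrative fields)."""
    name: str
    role: str                  # "provider", "deployer", "importer", or "distributor"
    risk_tier: str             # "prohibited", "high", "limited", or "minimal"
    obligations: list[str] = field(default_factory=list)
    documented: bool = False   # compliance documentation prepared?

def open_actions(inventory: list[AISystemRecord]) -> list[str]:
    """Names of systems still lacking compliance documentation,
    with high-risk systems listed first."""
    pending = [r for r in inventory if not r.documented]
    pending.sort(key=lambda r: r.risk_tier != "high")  # stable: high-risk first
    return [r.name for r in pending]
```

Even a simple register like this makes it easier to prioritize high-risk systems, which carry the heaviest obligations under the Act.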

Does the EU AI Act apply to companies outside the EU?

Yes. The regulation may apply to organizations outside the EU if their AI systems are placed on the EU market or if the outputs of those systems are used within the EU.

What is a high-risk AI system?

A high-risk AI system is one that may significantly affect individual rights or public safety — for example, AI used in recruitment processes.

Who must comply with the EU AI Act?

Regulatory obligations may apply to providers, deployers, importers, and distributors of AI systems.

What penalties does the EU AI Act introduce?

In the most serious cases, administrative fines may reach up to €35 million or 7% of a company’s global annual turnover.