The European Union Artificial Intelligence Act is a framework developed by the EU to regulate artificial intelligence development and use. This piece of legislation is relatively new, but its phased approach means companies must take action now. If you’re not sure where to begin with the EU AI Act, read on for our guide to this emerging standard and next steps for becoming compliant.
What is the EU AI Act and why is it important?
The EU AI Act is one of the first comprehensive frameworks for regulating AI globally. It defines a risk-based AI classification system and requires companies doing business in the EU to proactively implement compliance measures to avoid legal and operational risks. The act classifies AI applications into four risk categories: unacceptable, high, limited, and minimal, each subject to specific rules or restrictions.
Who does the EU AI Act apply to?
The EU AI Act is far-reaching. The regulation applies to:
- Organizations placing an AI product on the market in the EU
- Users of AI products and services in the EU
- Providers or users of an AI system intended to be used in the EU
It’s important to note that the EU AI Act isn’t limited to organizations or users based in the EU; it covers any organization building, selling, or using AI products in the EU.
What are the risk categories defined by the EU AI Act?
The EU AI Act organizes AI products and uses into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
Unacceptable risk
This is the most severe level of risk defined by the EU AI Act. According to the European Parliament, the act bans several AI applications in the EU, including:
- Cognitive behavioral manipulation of people or specific vulnerable groups. This includes things like voice-activated toys that could encourage dangerous behavior in children
- Social scoring AI: classifying people based on behavior, socio-economic status, or personal characteristics
- Biometric identification and categorization of people
- Real-time and remote biometric identification systems, such as facial recognition in public spaces
High risk
The act considers AI systems that negatively affect safety or fundamental rights to be high risk, and it divides these systems into two categories:
- AI systems used in products covered by the EU’s product safety legislation, such as toys, aviation, cars, medical devices, and lifts
- AI systems in specific areas that must be registered in an EU database, such as education, law enforcement, and critical infrastructure
Limited risk
Some AI products and use cases fall into this category and are subject to transparency requirements; generative AI tools such as ChatGPT are one example. Systems in the limited risk category will need to:
- Disclose that content was generated by AI
- Design the model to prevent it from generating illegal content
- Publish summaries of copyrighted data used for training
Minimal or no risk
Most AI systems will fall into the minimal or no risk category and carry no further legal obligations under the act.
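To see how these four tiers might be applied in practice, here is a minimal sketch of an internal AI inventory tagged by risk category. The tier names mirror the act’s categories, but the system names, fields, and review rule are purely illustrative and not drawn from the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (Article 5 practices)
    HIGH = "high"                  # allowed, but subject to strict obligations
    LIMITED = "limited"            # allowed, subject to transparency rules
    MINIMAL = "minimal"            # no further obligations under the act

# Hypothetical inventory entries; names and tier assignments are
# illustrative only, not legal classifications.
ai_inventory = [
    {"name": "resume-screening-model", "tier": RiskTier.HIGH},
    {"name": "customer-chatbot", "tier": RiskTier.LIMITED},
    {"name": "spam-filter", "tier": RiskTier.MINIMAL},
]

# Flag systems that are banned or carry registration/conformity duties.
for system in ai_inventory:
    if system["tier"] in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        print(f"Compliance review required: {system['name']}")
```

Maintaining an inventory like this is a common first step toward the act’s documentation requirements, since every obligation downstream depends on knowing which tier each system sits in.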
What is the timeline for compliance?
The EU AI Act became legally binding on August 1, 2024. However, its requirements take effect gradually through a phased rollout. Key milestones include:
- February 2, 2025: Prohibitions on certain AI systems and requirements on AI literacy start to apply.
- August 2, 2025: Rules start to apply for notified bodies, general-purpose AI (GPAI) models, governance, confidentiality, and penalties.
- August 2, 2026: The remainder of the AI Act starts to apply, except for some high-risk AI systems with specific qualifications.
- August 2, 2027: All systems, without exception, must meet the obligations of the AI Act.
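For teams tracking these deadlines programmatically, the schedule above reduces to a simple date lookup. The sketch below encodes the milestones just listed; the function name and the one-line summaries are our own shorthand, not official terminology.

```python
from datetime import date

# Milestones from the act's phased rollout (summaries are shorthand).
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions on certain AI systems; AI literacy requirements"),
    (date(2025, 8, 2), "Notified bodies, GPAI models, governance, confidentiality, penalties"),
    (date(2026, 8, 2), "Remainder of the act, except some qualifying high-risk systems"),
    (date(2027, 8, 2), "All obligations apply, without exception"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestone summaries already in force on a given date."""
    return [summary for deadline, summary in MILESTONES if today >= deadline]

# Example: which phases already apply on January 1, 2026?
for summary in obligations_in_force(date(2026, 1, 1)):
    print(summary)
```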
What are the penalties for noncompliance?
According to Article 99 of the EU AI Act, noncompliance with the prohibited AI practices referred to in Article 5 is subject to administrative fines of up to EUR 35,000,000 or up to 7% of worldwide annual turnover, whichever is higher.
Noncompliance with provisions other than those in Article 5 is subject to fines of up to EUR 15,000,000 or up to 3% of worldwide annual turnover, whichever is higher.
The EU AI Act also sets fines for those who supply incomplete, incorrect, or misleading information to notified bodies or national competent authorities when they request information. Those fines can be up to EUR 7,500,000 or up to 1% of worldwide annual turnover, whichever is higher.
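Because each tier caps fines at a flat amount or a percentage of turnover, whichever is higher, actual exposure scales with company size. Here is a quick worked sketch: the caps come from Article 99 as quoted above, while the turnover figures are invented for illustration.

```python
def max_fine_eur(turnover_eur: float, flat_cap_eur: float, turnover_pct: float) -> float:
    """Apply the act's 'whichever is higher' rule for administrative fines."""
    return max(flat_cap_eur, turnover_eur * turnover_pct)

# Article 5 violation, hypothetical EUR 1B turnover:
# 7% (EUR 70M) exceeds the EUR 35M flat cap.
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# Other violation, hypothetical EUR 100M turnover:
# the EUR 15M flat cap is the higher figure.
print(max_fine_eur(100_000_000, 15_000_000, 0.03))    # 15000000.0
```

The takeaway: for large enterprises the percentage cap dominates, so the headline EUR figures understate the real ceiling.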
Why ISO 42001 is essential for EU AI Act compliance
ISO/IEC 42001 mandates an ongoing governance framework for AI risk management, transparency, and compliance. Unlike one-time risk assessments or ad hoc governance policies, ISO 42001 establishes a systematic, repeatable process for AI compliance, ensuring organizations:
- Proactively manage AI risks rather than responding to enforcement actions.
- Align AI governance with business operations using structured risk-management frameworks.
- Demonstrate compliance through audit-ready documentation and performance evaluation.
ISO 42001 provides an adaptable compliance framework that evolves alongside regulatory requirements, making it an ideal foundation for AI governance. Though it is not an approved harmonized standard for AI Act conformity, it lays the groundwork you’ll need when the final quality management system (QMS) conformity standard is released.
Next steps
Companies seeking compliance with the EU AI Act need to act now to avoid penalties and stay ahead of the curve. Enforcement will only intensify over the next two years.
We recommend engaging a qualified auditor who can help your organization become compliant with the EU AI Act before enforcement deadlines arrive. Organizations that take action now will be best positioned to thrive in the new AI regulatory environment.
Reach out to A-LIGN today to learn how our team can get your organization on the path to compliance.