EU AI Act Risk Classifier

Where does your AI project stand within the new EU regulatory framework?

Expert Guide to the EU AI Act (2025/2026)

The European Union AI Act (Regulation (EU) 2024/1689) is a landmark regulation: the first of its kind globally to provide a comprehensive framework for the development and use of artificial intelligence. Its primary objective is to foster "trustworthy AI" within the European internal market by ensuring that systems are safe, transparent, and non-discriminatory. The act follows a **risk-based approach**, where mandates scale with the potential harm a system could cause. With non-compliance fines reaching up to €35 million or 7% of global annual turnover, businesses must urgently classify their AI tools to ensure operational continuity.

What are the four risk levels defined in the AI Act?
The regulation categorizes AI into four tiers of risk:
  • Unacceptable Risk: Systems considered a clear threat to safety, livelihoods, and rights. This includes social scoring, cognitive behavioral manipulation, and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). These systems are prohibited in the EU.
  • High Risk: AI used in critical areas like infrastructure, HR, medical devices, education, and law enforcement. These are permitted but must comply with strict requirements regarding risk management, record-keeping, and human oversight.
  • Limited Risk: Primarily aimed at transparency. Users must be notified when they are interacting with AI (e.g., chatbots, virtual assistants) or when content is AI-generated (deepfakes).
  • Minimal Risk: The vast majority of AI systems (e.g., spam filters, gaming AI). These face no new legal obligations, though adherence to voluntary codes of conduct is encouraged.
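In code, this tiering reads as an ordered rule cascade: prohibited practices first, then Annex III use cases, then transparency triggers, with minimal risk as the default. The TypeScript sketch below is a hypothetical illustration (the field names are invented, and it is not the actual logic of this calculator):

```typescript
// Hypothetical rule-based tier lookup following the Act's four risk tiers.
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

interface SystemProfile {
  usesSocialScoring: boolean;          // Art. 5 prohibited practice
  usesRealtimeBiometricId: boolean;    // Art. 5 prohibited practice (narrow exceptions)
  isAnnexIIIUseCase: boolean;          // e.g. HR screening, education, law enforcement
  interactsWithHumans: boolean;        // chatbots, virtual assistants
  generatesSyntheticContent: boolean;  // deepfakes, generated media
}

function classify(p: SystemProfile): RiskTier {
  // Order matters: prohibitions trump high-risk, which trumps transparency.
  if (p.usesSocialScoring || p.usesRealtimeBiometricId) return "unacceptable";
  if (p.isAnnexIIIUseCase) return "high";
  if (p.interactsWithHumans || p.generatesSyntheticContent) return "limited";
  return "minimal";
}

// Example: an HR screening tool lands in the high-risk tier.
console.log(classify({
  usesSocialScoring: false,
  usesRealtimeBiometricId: false,
  isAnnexIIIUseCase: true,
  interactsWithHumans: false,
  generatesSyntheticContent: false,
})); // "high"
```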
What are the deadlines for AI Act compliance?
The rollout is staggered. The Act entered into force on 1 August 2024. Prohibitions on unacceptable-risk systems have applied since 2 February 2025, and rules for General Purpose AI (GPAI) providers, such as those developing foundation models, since 2 August 2025. The bulk of the requirements for high-risk systems (those most affecting enterprise software) becomes mandatory on 2 August 2026, with high-risk systems embedded in regulated products given until 2 August 2027. Given the technical complexity of establishing audit trails and bias mitigation, firms should begin their compliance audits immediately.
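Because obligations switch on by date, a compliance tracker can reduce the rollout to a sorted milestone table. A minimal sketch, assuming the three headline dates above (the milestone labels are my own shorthand):

```typescript
// Which obligations already apply on a given date?
const MILESTONES: { date: string; obligation: string }[] = [
  { date: "2025-02-02", obligation: "Prohibitions on unacceptable-risk practices" },
  { date: "2025-08-02", obligation: "Transparency duties for GPAI providers" },
  { date: "2026-08-02", obligation: "Bulk of high-risk system requirements" },
];

function applicableObligations(on: Date): string[] {
  return MILESTONES
    .filter(m => new Date(m.date).getTime() <= on.getTime())
    .map(m => m.obligation);
}

console.log(applicableObligations(new Date("2026-01-01")));
// ["Prohibitions on unacceptable-risk practices",
//  "Transparency duties for GPAI providers"]
```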
Does the AI Act apply to companies outside the EU?
Yes. Similar to the GDPR, the AI Act applies to any provider who places an AI system on the market in the EU, regardless of where their headquarters are located. Furthermore, if the output produced by an AI system is used within the EU, the provider and the user (deployer) must comply with the regulation. This "extra-territorial effect" ensures a level playing field and establishes the EU as a global standard-setter in AI ethics.
What is "General Purpose AI" (GPAI) and how is it regulated?
GPAI models are AI models that can perform a wide range of tasks (large language models are the canonical example). The Act imposes transparency obligations on all GPAI providers, who must draw up technical documentation and publish a summary of the content used to train their models. Models that pose a "systemic risk", presumed when cumulative training compute exceeds 10^25 floating-point operations (FLOPs), face additional duties, including model evaluations, adversarial testing, and cybersecurity requirements.
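Note that the compute threshold is a rebuttable presumption, and the Commission can also designate models on capability grounds. A minimal sketch of the compute-based check alone (the model fields are illustrative):

```typescript
// Compute-based presumption for systemic-risk GPAI models.
// The Act presumes systemic risk above 10^25 FLOPs of cumulative training
// compute; capability-based designation (not modeled here) can apply too.
const SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25;

interface GpaiModel {
  name: string;
  trainingComputeFlops: number; // cumulative compute used for training
}

function presumedSystemicRisk(m: GpaiModel): boolean {
  return m.trainingComputeFlops > SYSTEMIC_RISK_FLOP_THRESHOLD;
}

console.log(presumedSystemicRisk({
  name: "example-model", // hypothetical
  trainingComputeFlops: 5e25,
})); // true
```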
What are the penalties for non-compliance?
The penalties are designed to be dissuasive; within each tier, the ceiling is the higher of a fixed amount and a share of total worldwide annual turnover for the preceding financial year. Violating prohibited practices can result in fines of up to €35 million or 7% of turnover. Breaching high-risk obligations can cost up to €15 million or 3%. Even supplying incorrect information to authorities can lead to fines of up to €7.5 million or 1%. These figures reflect the serious nature of the regulation and the importance the EU places on citizen protection.
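For a quick exposure estimate, each tier reduces to "the higher of a fixed amount and a turnover share". A small sketch of that arithmetic (the tier names are my own labels, and the Act's lower-of cap for SMEs is not modeled):

```typescript
// Fine ceilings per violation tier: max(fixed amount, share of turnover).
const TIERS = {
  prohibitedPractice:   { fixedEur: 35_000_000, turnoverShare: 0.07 },
  highRiskObligation:   { fixedEur: 15_000_000, turnoverShare: 0.03 },
  incorrectInformation: { fixedEur: 7_500_000,  turnoverShare: 0.01 },
} as const;

function maxFine(tier: keyof typeof TIERS, annualTurnoverEur: number): number {
  const t = TIERS[tier];
  return Math.max(t.fixedEur, t.turnoverShare * annualTurnoverEur);
}

// A firm with €2bn worldwide turnover: ceiling for a prohibited practice
// is 7% of turnover, since that exceeds the €35m fixed amount.
console.log(maxFine("prohibitedPractice", 2_000_000_000)); // 140000000
```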
How does the Act support innovation and SMEs?
To prevent stifling innovation, the Act includes provisions for "Regulatory Sandboxes," allowing companies to test AI systems under real-world conditions without the full regulatory burden. SMEs (Small and Medium Enterprises) also benefit from reduced fees for conformity assessments and simplified documentation requirements. Startups are encouraged to use these sandboxes to develop compliant-by-design products before scaling across the Union.
Final Checklist for CTOs and Product Owners
1. Inventory all AI models currently in use or development.
2. Determine whether your application falls under Annex III (High Risk).
3. Review your training data for potential biases and quality issues.
4. Implement "human-in-the-loop" mechanisms that allow override.
5. Establish a technical documentation framework for lifecycle tracking.
6. Ensure transparency for any generative AI features in your product.
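As a starting point for step 1, an inventory entry can carry one field per later checklist step, so the gaps are visible at a glance. The record shape below is purely illustrative (all field names are invented):

```typescript
// Hypothetical AI inventory record: one field per checklist step.
interface AiInventoryEntry {
  systemName: string;
  owner: string;                 // accountable team or person
  annexIIICandidate: boolean;    // step 2: high-risk screening
  trainingDataReviewed: boolean; // step 3: bias/quality review done?
  humanOverride: boolean;        // step 4: human-in-the-loop in place?
  docsLocation: string;          // step 5: where technical docs live
  generativeFeatures: boolean;   // step 6: transparency duties triggered?
}

const inventory: AiInventoryEntry[] = [
  {
    systemName: "cv-screening-v2", // hypothetical example system
    owner: "HR Platform Team",
    annexIIICandidate: true,
    trainingDataReviewed: false,   // flagged as an open compliance gap
    humanOverride: true,
    docsLocation: "internal wiki",
    generativeFeatures: false,
  },
];
```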
Legal Disclaimer: This tool is for initial guidance only and does not constitute legal advice. The final classification should be carried out by specialized legal experts. Version: Jan 2026 (EU AI Act Compliance).

Embed this Calculator on Your Website

You can integrate this calculator for free into your own website. Get the embed code on our overview page.

Get Embed Code
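
Embedding a third-party calculator typically comes down to a single iframe inserted into the host page. The sketch below is a generic illustration only: the URL and container id are placeholders, and the actual snippet comes from the overview page.

```typescript
// Generic iframe embed sketch; src and container id are placeholders.
const iframe = document.createElement("iframe");
iframe.src = "https://example.com/ai-act-risk-classifier/embed"; // placeholder URL
iframe.width = "100%";
iframe.height = "600";
iframe.style.border = "none";
iframe.title = "EU AI Act Risk Classifier";

// Append to a container element on the host page (hypothetical id).
document.getElementById("calculator-slot")?.appendChild(iframe);
```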
