AI Risk Categories
aka AI Act risk tiers, AI risk classification
Four-tier risk classification under the EU AI Act: unacceptable-risk (banned), high-risk (regulated), limited-risk (transparency), minimal-risk (unregulated). Determines a system's compliance burden.
Last reviewed April 2026
Definition
The EU AI Act classifies AI systems and models into four risk tiers; each tier dictates the applicable obligations.

Unacceptable-risk practices (Article 5) are prohibited outright: cognitive-behavioural manipulation, social scoring, exploitation of vulnerable groups, untargeted scraping of facial images for biometric databases, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces or schools, and biometric categorisation by sensitive attributes.

High-risk AI systems (Annex III) include AI used in employment (CV screening, performance management), credit scoring, education access, critical infrastructure, biometric identification, the administration of justice, and migration/border control. Providers face full conformity assessment, technical documentation, human-oversight, post-market monitoring, and CE-marking obligations; the businesses that deploy these systems carry deployer obligations of their own.

Limited-risk AI (chatbots, deepfakes, AI-generated content) carries transparency obligations: users must be informed that they are interacting with AI or viewing AI-generated content.

Minimal-risk AI (spam filters, AI in video games, recommender systems below sensitivity thresholds) is left unregulated by the Act.

General-Purpose AI (GPAI) models sit in a parallel framework with their own obligations, which escalate sharply once a model's training compute crosses the systemic-risk threshold of 10^25 FLOPs.
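The GPAI systemic-risk threshold is a concrete number, so it can be checked with simple arithmetic. A minimal sketch, assuming the common "6 × parameters × tokens" rule of thumb for estimating dense-transformer training compute; the function names and the example model size are illustrative, not from any official tooling:

```python
# Presumption threshold for GPAI models with systemic risk under the EU AI Act.
SYSTEMIC_RISK_FLOPS = 1e25  # 10^25 floating-point operations of training compute

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if training compute meets or exceeds the presumption threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)  # 6.3e24
print(presumed_systemic_risk(flops))          # False: below 1e25
```

Note the threshold is a presumption, not a bright line: a provider can rebut it, and the Commission can designate models below it as systemic-risk on other grounds.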
Why it matters for software choice
Most software marketed as 'AI-powered' is minimal-risk or limited-risk and triggers no Act obligations on the buyer. But anything used to make decisions about hiring, credit, education access or essential services likely crosses the high-risk line, and the buyer (deployer) shares the compliance burden with the vendor. Verify the risk tier in writing before adoption.
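The triage logic above can be sketched as a first-pass procurement check. This is a hypothetical helper whose keyword sets merely echo the examples in this entry; a real classification requires legal review of Article 5 and Annex III, not string matching:

```python
# Illustrative first-pass triage of an intended use case into the Act's tiers.
# The category sets below are drawn from the examples in this entry only.
PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition",
                   "untargeted_face_scraping"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "education_access",
                  "critical_infrastructure", "biometric_id",
                  "justice_administration", "border_control"}
LIMITED_RISK_USES = {"chatbot", "deepfake", "ai_generated_content"}

def risk_tier(use_case: str) -> str:
    if use_case in PROHIBITED_USES:
        return "unacceptable-risk (banned)"
    if use_case in HIGH_RISK_USES:
        return "high-risk (conformity assessment + deployer duties)"
    if use_case in LIMITED_RISK_USES:
        return "limited-risk (transparency duties)"
    return "minimal-risk (no Act obligations)"

print(risk_tier("cv_screening"))  # high-risk (conformity assessment + deployer duties)
print(risk_tier("spam_filter"))   # minimal-risk (no Act obligations)
```

The point of the sketch is the ordering: check for prohibited practices first, since a banned use cannot be made compliant by any amount of documentation.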
Authority sources
- EU AI Act Article 5: Prohibited AI practices (artificialintelligenceact.eu)
- EU AI Act Annex III: High-risk AI systems (artificialintelligenceact.eu)
Vendors covered by this term
ChatGPT Enterprise
OpenAI's enterprise AI assistant with advanced reasoning, data analysis, and custom GPTs
Claude for Business
Anthropic's AI assistant with strong safety focus, long context handling, and business-grade data privacy
Microsoft Copilot
AI assistant integrated into Microsoft 365, with EU data boundary for European customers
Gemini Business
Google's AI assistant integrated with Google Workspace, with EU data processing for European customers
Related terms
EU AI Act
EU regulation on artificial intelligence, in force from 1 August 2024. Bans some practices, regulates 'high-risk' AI systems, and imposes transparency obligations on general-purpose AI models.
Data Protection Commission
Ireland's national data protection authority. Lead supervisory authority for many large US tech companies headquartered in Dublin under the GDPR's one-stop-shop mechanism.