EU AI Act

aka AI Act, Regulation (EU) 2024/1689

EU regulation on artificial intelligence, in force from 1 August 2024. Bans certain 'unacceptable-risk' practices, regulates 'high-risk' AI systems, and imposes transparency obligations on general-purpose AI models.

Last reviewed April 2026

Definition

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive horizontal regulation of AI systems. It entered into force on 1 August 2024 with a phased application: the prohibitions on unacceptable-risk practices apply from 2 February 2025; obligations on general-purpose AI (GPAI) model providers from 2 August 2025; and the bulk of the high-risk AI obligations from 2 August 2026, with extended timelines for AI embedded in regulated products until 2 August 2027.

The Act is risk-tiered. Unacceptable-risk practices are banned outright: social scoring, manipulative AI, and real-time remote biometric identification in publicly accessible spaces for law enforcement, with narrow exceptions. High-risk AI (HR/recruitment, access to education, credit scoring, critical infrastructure, biometric identification, justice and border control) faces conformity assessment, technical documentation, human oversight and post-market monitoring. Limited-risk AI (chatbots, deepfakes) faces transparency obligations; minimal-risk AI is unregulated.

GPAI providers face documentation and copyright-transparency duties; GPAI models with 'systemic risk' (training compute above 10^25 FLOPs) face additional safety, evaluation and incident-reporting duties. Fines for prohibited-practice breaches reach EUR 35 million or 7% of global annual turnover, whichever is higher.
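The two numeric rules in the definition above can be sketched in code. This is an illustration only, not legal advice or official tooling; the function and constant names are my own assumptions, but the thresholds (10^25 FLOPs for the systemic-risk presumption, and the EUR 35 million / 7% of turnover fine cap, whichever is higher) come from the Act as summarised above.

```python
# Hedged sketch of the AI Act's numeric thresholds. Names are illustrative
# assumptions; the figures are from the Act as summarised in the definition.

# A GPAI model trained with cumulative compute above 10^25 FLOPs is
# presumed to pose 'systemic risk'.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model's training compute triggers the systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Cap for prohibited-practice breaches: up to EUR 35 million or 7% of
# worldwide annual turnover, whichever is higher.
def max_prohibited_practice_fine(global_turnover_eur: float) -> float:
    """Maximum fine in EUR for a prohibited-practice breach."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)
```

For example, a company with EUR 100 million global turnover faces the flat EUR 35 million cap (7% would be only EUR 7 million), while at EUR 1 billion turnover the 7% rule dominates.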

Why it matters for software choice

Irish SMEs that deploy AI in HR (CV screening, performance evaluation), credit or lending, education, or biometric identification are likely covered as deployers of high-risk AI systems. Vendors that publish their AI Act risk classification, conformity-assessment status, and human-oversight controls can cut a deployer's compliance work from a six-month project to a vendor questionnaire.
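The vendor-questionnaire idea above can be sketched as a simple disclosure check. The field names here are illustrative assumptions drawn from the three disclosures the paragraph mentions, not an official AI Act checklist.

```python
# Hedged sketch: a minimal vendor-disclosure check. Field names are
# illustrative assumptions, not an official AI Act questionnaire.
REQUIRED_DISCLOSURES = [
    "ai_act_risk_classification",    # e.g. 'high-risk' under Annex III
    "conformity_assessment_status",  # e.g. completed, in progress
    "human_oversight_controls",      # how a deployer can monitor and intervene
]

def missing_disclosures(vendor_answers: dict) -> list:
    """Return the disclosure fields the vendor has not answered."""
    return [f for f in REQUIRED_DISCLOSURES if not vendor_answers.get(f)]
```

A deployer could run this over each shortlisted vendor's answers and follow up only on the gaps, rather than assessing every system from scratch.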

Authority sources

Software categories this affects

Vendors covered by this term

Related terms