EU AI Act - Last verified 21 April 2026

EU AI Act for Irish Businesses - Phased Deadlines & Vendor Readiness

The EU AI Act's phased deadlines directly affect Irish businesses using AI tools. The prohibited-AI ban took effect on 2 February 2025. General-Purpose AI rules followed on 2 August 2025. High-risk system obligations apply from 2 August 2026. Here is what each phase requires - and which AI vendors have public compliance statements.

EU AI Act implementation clock

Dated milestones from the official EU AI Act implementation timeline. The next big date for Irish deployers is 2 August 2026, when high-risk AI obligations apply in full.

Days to next milestone: 103 (high-risk AI system obligations apply on 2 August 2026)
  1. 1 August 2024 Past
    EU AI Act enters into force
    Regulation (EU) 2024/1689 officially enters into force across the EU, starting the phased compliance clock.
  2. 2 February 2025 Past
    Prohibited AI practices banned + AI literacy
    Bans on prohibited AI practices (social scoring, emotion inference in workplaces, untargeted facial scraping, etc.) take effect. AI literacy becomes a horizontal obligation for all deployers, including Irish SMEs.
  3. 2 August 2025 Past
    General-Purpose AI rules activate
    Obligations for GPAI model providers (OpenAI, Anthropic, Google, Meta, Mistral) apply. Governance structures, notified bodies, and penalty frameworks activate. Irish deployers are largely unaffected beyond vendor-selection diligence.
  4. 2 August 2026 Next
    High-risk AI system obligations apply
    High-risk AI system obligations apply in full. Member States must establish AI regulatory sandboxes. Irish SMEs using AI in recruitment, credit scoring, education assessment, or essential services fall in scope.
  5. 2 August 2027 Upcoming
    Full provider obligations & GPAI grace-period ends
    Full Article 6(1) obligations activate for high-risk systems. GPAI providers whose models were placed on the market before 2 August 2025 reach their compliance deadline.

Milestone clock calculated from the verified date 21 April 2026. Dates are drawn from official regulator sources linked below. Always verify with your legal counsel or compliance adviser before relying on this for a procurement or compliance decision.
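The "days to next milestone" figure above is plain date arithmetic over the timeline. A minimal Python sketch, using the milestone dates and labels from this page (the function name is our own):

```python
from datetime import date

# Milestone dates from the official EU AI Act implementation timeline
MILESTONES = {
    date(2024, 8, 1): "EU AI Act enters into force",
    date(2025, 2, 2): "Prohibited AI practices banned + AI literacy",
    date(2025, 8, 2): "General-Purpose AI rules activate",
    date(2026, 8, 2): "High-risk AI system obligations apply",
    date(2027, 8, 2): "Full Article 6(1) obligations & GPAI grace period ends",
}

def days_to_next_milestone(today: date) -> tuple[int, str]:
    """Return (days remaining, label) for the next upcoming milestone."""
    upcoming = {d: label for d, label in MILESTONES.items() if d > today}
    next_date = min(upcoming)
    return (next_date - today).days, upcoming[next_date]

days, label = days_to_next_milestone(date(2026, 4, 21))  # this page's verified date
print(days, label)  # 103 High-risk AI system obligations apply
```

Re-running this with today's date reproduces the clock shown at the top of the page.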

EU AI Act implementation timeline

Four dates Irish SMEs need to plan around.

2 February 2025 - Prohibited AI & AI literacy (past)

Prohibitions on certain AI systems took effect, and AI literacy obligations began: staff using AI must understand its capabilities and limitations. This affects all Irish deployers, though most everyday SME uses fall outside the prohibited-practice list.

2 August 2025 - General-Purpose AI (past)

GPAI model obligations, governance structures, notified bodies, and penalty frameworks activated. Obligations sit with the providers - OpenAI, Anthropic, Google, Meta, Mistral - not with Irish deployers. Providers of earlier GPAI models get a grace period until 2 August 2027.

2 August 2026 - High-risk systems (upcoming)

High-risk AI system obligations apply in full. Most everyday SME AI tools are not classified as high-risk, but AI used in recruitment, credit scoring, employment decisions, or essential services can fall in scope. Member States must establish AI regulatory sandboxes by this date.

2 August 2027 - Article 6(1) obligations & GPAI grace period ends

Full Article 6(1) obligations activate for high-risk systems. GPAI providers whose models were placed on the market before 2 August 2025 reach their compliance deadline. Large-scale IT systems under Annex X have until 31 December 2030 for legacy transition.

Status as of 21 April 2026 - public statements only

AI vendor EU AI Act compliance statements

GPAI obligations sit with the model provider, not the Irish deployer. But reading each vendor's published compliance stance is a fast way to gauge EU readiness and EU data residency posture.

Anthropic (Claude)

Announced 21 July 2025 it would sign the EU General-Purpose AI Code of Practice. Publishes an EU DSA transparency report (period 1 May to 31 December 2025) via Anthropic Ireland Limited. Operates an EU data residency option. Strong public compliance posture.

OpenAI (ChatGPT)

Committed in July 2025 to signing the Code of Practice and using it to demonstrate EU AI Act compliance. Publishes an EU AI Act primer on openai.com/global-affairs. Technical documentation for downstream providers and deployers is being prepared on an ongoing basis.

Microsoft (Copilot)

Publishes an EU AI Act page on the Microsoft Trust Center and a formal "Microsoft EU AI Act Overview" document (January 2025). Microsoft Security Copilot includes a dedicated compliance commitment. Cross-functional working groups address the Act's operational requirements.

Google (Gemini)

Google Cloud publishes EU AI Act commitment statements on its blog. Gemini 2.5 Pro is explicitly cited as one of the most advanced models on the EU market subject to GPAI-with-Systemic-Risk obligations. Data residency and VPC Service Controls are offered as deployer-side tooling.

Notion AI

Publishes a security and compliance page with SOC 2 Type 2 and ISO 27001. Enterprise plan uses zero-retention APIs. A dedicated EU AI Act public statement has not been located - general security posture is strong but specific AI Act documentation is lacking at time of writing.

Jasper AI

Data Processing Agreement references GDPR explicitly. A dedicated EU AI Act compliance statement has not been located. Smaller vendor with less visibility on EU AI Act specifics than the major LLM providers.

Microsoft 365 Copilot

Built on OpenAI models and covered by Microsoft's broader EU AI Act commitments. Deployer-side responsibilities (AI literacy, transparency to end-users) still apply in Irish workplaces. Review Microsoft's Responsible AI FAQ for Security Copilot for product-specific detail.

ChatGPT Enterprise

Covered by OpenAI's Code of Practice commitment and EU AI Act primer. Enterprise-tier data handling and retention controls are the main deployer-facing compliance levers. Pair with an internal AI literacy programme to meet the 2 February 2025 horizontal obligation.

Gemini for Business

Covered by Google Cloud's EU AI Act commitments. Data residency and data-processing terms differ from consumer Gemini. Review Google Cloud's Compliance Center for the current EU AI Act documentation set.

What Irish SMEs should do now

Deployer obligations are lighter than provider obligations, but they exist - and 2 August 2026 is less than four months away.

  1. Build a basic AI literacy programme

    AI literacy became a horizontal obligation on 2 February 2025. Staff using AI tools should understand what the tool does, its limitations, and how to spot hallucinations and bias. A single training session and an acceptable-use policy cover most small Irish businesses.

  2. Identify whether you deploy any high-risk AI

    High-risk AI is a narrow list - recruitment scoring, credit decisions, essential services, education assessment, biometric identification, law enforcement, migration. If you use AI in any of these areas, the 2 August 2026 obligations apply and you need to plan now.

  3. Disclose AI use to end-users and customers

    Chatbots must tell users they are AI. AI-generated content should be labelled where relevant. Update your website privacy notice and any customer-facing AI tools to meet the transparency requirements.

  4. Pick AI tools with published EU AI Act statements

    Anthropic, OpenAI, Microsoft, and Google have all issued public compliance statements. Favouring vendors with explicit EU AI Act documentation reduces due-diligence friction and demonstrates to regulators that you selected providers with care.

  5. Align with your GDPR programme

    The EU AI Act and GDPR overlap heavily. Run your AI Act compliance alongside your existing GDPR programme rather than as a separate workstream. Read our GDPR guide for Irish businesses for the full overlap map.

For Irish business owners, HR teams, and IT leads

EU AI Act - frequently asked questions

What is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive AI law in the world. It classifies AI systems by risk - prohibited, high-risk, limited-risk, and minimal-risk - and imposes obligations on providers and deployers accordingly. It entered into force on 1 August 2024 and applies in phases through 2030. Irish businesses using AI tools are deployers under the Act; the vendors who build those tools are providers. Deployer obligations are lighter than provider obligations but still exist - particularly around transparency to end-users and AI literacy.
What are the EU AI Act phased deadlines?
Four key dates matter for Irish SMEs. 2 February 2025: prohibitions on certain AI systems and AI literacy obligations took effect. 2 August 2025: General-Purpose AI (GPAI) model rules, governance structures, notified bodies, and penalty frameworks activated. 2 August 2026: high-risk AI system obligations apply in full. 2 August 2027: full Article 6(1) obligations activate for high-risk systems placed on the market; earlier GPAI providers reach compliance deadline. Member States must establish AI regulatory sandboxes by 2 August 2026.
What AI uses are prohibited under the Act?
The Act prohibits AI systems that deploy subliminal manipulation causing harm, exploit vulnerabilities of specific groups, perform social scoring, build facial-recognition databases by untargeted scraping, infer emotions in workplaces or schools, categorise biometric data by sensitive characteristics, or conduct real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions). Most Irish SMEs using everyday AI tools - ChatGPT, Claude, Gemini, Copilot, Notion AI, Jasper - are not affected by the prohibited-practice list.
What does GPAI mean and how does it affect Irish SMEs?
General-Purpose AI (GPAI) covers large foundation models that can be adapted for many tasks - the big consumer LLMs like GPT-4/5, Claude, Gemini, Llama, and Mistral. From 2 August 2025, GPAI providers must publish technical documentation, a summary of training data, copyright policies, and (for models with systemic risk like Gemini 2.5 Pro) enhanced safety frameworks. The burden sits with the provider, not with Irish deployers. Your obligation as an Irish SME is mostly about using these tools transparently - disclose AI-generated content where relevant, and maintain AI literacy.
Which AI vendors have published EU AI Act compliance statements?
Anthropic announced on 21 July 2025 it would sign the EU General-Purpose AI Code of Practice; its Irish DSA transparency report covers 1 May to 31 December 2025 and is published by Anthropic Ireland Limited. OpenAI publicly committed to signing the Code of Practice in July 2025. Microsoft publishes an EU AI Act compliance page on its Trust Center and a Microsoft EU AI Act Overview document. Google Cloud publishes an EU AI Act commitment statement citing Gemini 2.5 Pro as a GPAI model with systemic risk. Notion publishes a security and compliance page but has not issued a dedicated EU AI Act public statement. Jasper's Data Processing Agreement references GDPR but not the EU AI Act directly.
Does Ireland have its own AI regulator?
Not yet. The EU AI Act requires each Member State to designate national competent authorities. Ireland's approach is still being finalised - the Data Protection Commission (DPC) leads on the GDPR overlap, and Ireland's broader digital regulatory framework includes Coimisiún na Meán (media), ComReg (communications), and the CCPC (competition and consumer protection). A single Irish AI regulator has not been announced at time of writing. The European Commission's AI Office (DG CONNECT) handles GPAI enforcement centrally.
What are the obligations for Irish businesses using AI?
Irish SMEs are generally deployers - not providers - of AI systems. Obligations depend on the risk class. For limited-risk AI like chatbots: disclose that users are interacting with AI. For AI-generated content: label synthetic media. For high-risk AI (only a narrow set of use cases): run human oversight, log system activity, and ensure data quality. From 2 February 2025, AI literacy is a horizontal obligation - staff using AI tools should understand their capabilities and limitations.
What penalties apply under the EU AI Act?
Penalties are tiered. Violating prohibited-practice rules: up to €35 million or 7% of global annual turnover, whichever is higher. Violating most other obligations under the Act: up to €15 million or 3% of turnover. Supplying incorrect, incomplete, or misleading information to notified bodies or national authorities: up to €7.5 million or 1% of turnover. For SMEs and start-ups, penalties are applied proportionately - the lower of the two caps applies.
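The tiered caps reduce to a "higher of" comparison for most actors and a "lower of" comparison for SMEs and start-ups. A minimal sketch, with a function name and signature of our own invention (illustrative only, not a legal calculator):

```python
def penalty_cap_eur(fixed_cap: int, pct: int, global_turnover: int,
                    is_sme: bool = False) -> int:
    """Upper bound of an EU AI Act fine for one penalty tier.

    Standard rule: the HIGHER of the fixed cap and pct% of global annual
    turnover. For SMEs and start-ups, the LOWER of the two applies.
    All amounts in whole euros; integer arithmetic keeps the result exact.
    """
    turnover_cap = global_turnover * pct // 100
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Prohibited-practice tier (EUR 35m / 7%): large provider vs. a small SME
print(penalty_cap_eur(35_000_000, 7, 1_000_000_000))            # 70000000
print(penalty_cap_eur(35_000_000, 7, 10_000_000, is_sme=True))  # 700000
```

The same function covers the €15m/3% and €7.5m/1% tiers by swapping the first two arguments.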
What about the overlap between the EU AI Act and GDPR?
The two regulations complement each other. GDPR governs processing of personal data regardless of whether AI is involved. The EU AI Act governs AI systems regardless of whether personal data is processed. Most AI tools used by Irish SMEs trigger both: GDPR for any personal data in prompts or outputs, and the AI Act for AI literacy and transparency. Align compliance programmes across both. For deeper detail, read our guide to AI tools and GDPR for Irish businesses.

Official sources and vendor statements

Every vendor claim on this page is attributable to a public statement. Links below point to the primary source.

Regulator and framework

Vendor compliance statements

Last verified 21 April 2026. Regulatory-clock pages are reviewed quarterly. Next review due 21 July 2026.

Disclaimer

This page is published for general informational and educational purposes only and is not legal, regulatory, or compliance advice. The EU AI Act is a live regulatory framework and vendor compliance postures change. Verify each vendor statement at the primary source linked above, and consult a qualified Irish solicitor or compliance professional before relying on anything here for a procurement or regulatory decision. See our full legal disclaimer.

Related reading on Vendors.ie

Choose AI tools with verified EU-compliance posture

Vendors.ie flags EU data residency, GDPR stance, and public AI Act statements for every AI tool reviewed.