EU AI Act · March 26, 2026 · 8 min read

EU AI Act Compliance Checklist for 2026 — What Every AI Company Needs Before August

The EU AI Act's high-risk AI obligations kick in August 2026. Use this checklist to understand the timeline, risk tiers, and exact requirements before penalties reach €35M or 7% of global revenue.

The Clock Is Running — August 2026 Is Closer Than You Think

If your company builds, deploys, or uses AI systems that touch EU residents — customers, employees, users — the EU AI Act is already law. The General Data Protection Regulation (GDPR) reshaped how the world handles personal data. The EU AI Act is doing the same for artificial intelligence, and the August 2026 deadline for high-risk AI system obligations is not a soft target.

Penalties for non-compliance reach €35 million or 7% of global annual turnover, whichever is higher. For a $50M ARR company, the fixed €35 million cap, not the percentage, sets the ceiling: a single violation could cost more than half a year's revenue.

This checklist covers the full compliance picture: who is affected, how to classify your systems, what each risk tier demands, and the concrete steps you need to complete before August 2026.

EU AI Act Timeline — Key Dates You Cannot Miss

The EU AI Act entered into force on August 1, 2024. Compliance obligations roll out in phases:

  • February 2, 2025: Prohibited AI practices banned (Article 5)
  • August 2, 2025: GPAI model obligations apply (including systemic-risk models)
  • August 2, 2026: High-risk AI system obligations fully enforced
  • August 2, 2027: High-risk systems embedded in Annex I regulated products must comply

The August 2026 deadline is the most significant for most AI companies. This is when Conformity Assessments, technical documentation, human oversight systems, and registration requirements become mandatory and enforceable.

Who Does the EU AI Act Affect?

Scope is broader than most founders expect. The Act applies to:

  • Providers — companies that develop or place AI systems on the EU market, regardless of where the provider is headquartered
  • Deployers — businesses that use AI systems in their operations within the EU
  • Importers and distributors — entities that bring non-EU AI systems into the EU market
  • Product manufacturers — companies embedding AI into regulated products (medical devices, machinery, vehicles)

The extraterritorial reach is GDPR-level. A US-based SaaS company with EU customers deploying a high-risk AI system must comply, even if it has no EU office.

Risk Classification — The Four Tiers

The Act organizes AI systems into four risk categories. Your compliance obligations depend entirely on which tier your system falls into.

Tier 1 — Unacceptable Risk (Prohibited)

These are banned outright as of February 2025:

  • Social scoring systems by public authorities
  • Real-time remote biometric identification in public spaces (with narrow exceptions)
  • Subliminal manipulation that bypasses conscious decision-making
  • AI that exploits vulnerabilities of specific groups (age, disability)
  • Predictive policing based solely on profiling

Action: If any feature of your product touches these categories, it must be removed.

Tier 2 — High-Risk AI Systems

This is where most compliance work lives. High-risk systems are defined in Annex III of the Act and include AI used in:

  • Biometric identification and categorization
  • Critical infrastructure (energy, water, transport)
  • Education (scoring, admissions, performance evaluation)
  • Employment (CV screening, promotion decisions, work monitoring)
  • Essential private and public services (credit scoring, insurance, social benefits)
  • Law enforcement, migration, asylum, and border control
  • Administration of justice

If your product touches any of these areas, you are almost certainly high-risk.

Tier 3 — Limited Risk

This tier covers chatbots, AI that generates synthetic content, and other systems that interact directly with humans. The key obligation is transparency: users must know they are interacting with AI, and deepfakes must be labeled.

Tier 4 — Minimal Risk

Most AI applications fall here — spam filters, AI in video games, recommendation systems with no significant individual impact. No mandatory obligations, but voluntary codes of practice are encouraged.
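As a rough illustration, the tiering logic reduces to a lookup against category lists. The labels below are simplified, hypothetical shorthand, not the Act's Annex III wording, and a real determination needs legal review:

```python
# Sketch of the four-tier classification using abbreviated, illustrative
# use-case labels. The real analysis is against Annex III's full text.

PROHIBITED = {"social_scoring", "realtime_public_biometric_id",
              "subliminal_manipulation", "predictive_policing_profiling"}
HIGH_RISK = {"biometric_categorization", "critical_infrastructure",
             "education_scoring", "employment_screening",
             "credit_scoring", "law_enforcement"}
LIMITED_RISK = {"chatbot", "synthetic_content_generation", "deepfake"}

def classify_risk_tier(use_case: str) -> str:
    """Map a simplified use-case label to an EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable"   # banned outright since February 2025
    if use_case in HIGH_RISK:
        return "high"           # full high-risk obligations apply
    if use_case in LIMITED_RISK:
        return "limited"        # transparency obligations only
    return "minimal"            # voluntary codes of practice

print(classify_risk_tier("employment_screening"))  # high
print(classify_risk_tier("spam_filter"))           # minimal
```

In practice the hard part is the mapping itself: one product can contain systems in several tiers, so classify per system, not per company.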

General Purpose AI (GPAI) Models — A Separate Track

If you train or fine-tune a general-purpose model, you fall under the GPAI rules that took effect August 2025. Models trained with 10²⁵ FLOPs or more of cumulative compute are additionally presumed to pose systemic risk:

  • Publish technical documentation
  • Provide model cards to downstream deployers
  • Comply with EU copyright law (training data transparency)
  • If your model is designated as systemic risk: adversarial testing, incident reporting to EU AI Office, cybersecurity measures
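The systemic-risk presumption is a simple compute threshold, which can be sketched as (the example FLOP figures are hypothetical, not real models):

```python
# Sketch: the 10^25 FLOP training-compute presumption for systemic-risk
# designation of a GPAI model. Input figures below are illustrative only.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model at or above the threshold is presumed systemic-risk."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(is_presumed_systemic_risk(3e25))  # True: extra GPAI obligations apply
print(is_presumed_systemic_risk(8e23))  # False: baseline GPAI rules only
```

Note the presumption is rebuttable, and the EU AI Office can also designate models below the threshold based on capabilities.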

High-Risk AI Compliance Requirements — The Full List

If you operate a high-risk AI system, these requirements apply before August 2, 2026:

1. Risk Management System

A documented, continuous process covering identification and analysis of known and foreseeable risks, estimation and evaluation of risks that may emerge during use, evaluation of risks based on post-market monitoring data, and adoption of risk mitigation measures.

2. Data and Data Governance

Training, validation, and testing datasets must be subject to appropriate data governance practices, be relevant and representative, address potential biases, and be documented with data lineage and transformation records.

3. Technical Documentation

Must be created before market placement and kept up to date. Includes system description, design specifications, training methodology, performance metrics, and known limitations.

4. Record-Keeping and Logging

High-risk AI systems must automatically log dates and times of operation, reference databases, input data that led to specific outputs, and results of human verification. Logs must be retained for defined periods.
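A minimal sketch of such a log record, with illustrative field names (the Act mandates what must be captured, not a schema):

```python
# One append-only record per inference: UTC timestamp, an input fingerprint,
# the output, and the result of any human verification. Field names are
# illustrative assumptions, not prescribed by the Act.
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def make_log_record(input_data: str, output: str,
                    human_verified: Optional[bool]) -> dict:
    """Build one retention-ready log entry for a high-risk system decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
        "output": output,
        "human_verification": human_verified,  # None = not yet reviewed
    }

record = make_log_record("applicant CV text", "score=0.82", None)
print(json.dumps(record, indent=2))
```

Hashing the input rather than storing it raw is one way to reconcile logging with GDPR data-minimization, at the cost of not being able to replay the exact input later.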

5. Transparency and User Information

Deployers must provide users with a clear description of system capabilities and limitations, accuracy levels, human oversight procedures, and data subjects' rights and redress mechanisms.

6. Human Oversight

Systems must be designed so that a human can fully understand the system's capabilities, monitor for anomalies, and intervene or shut the system down, and so that the system cannot automatically override human decisions.
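The design pattern is essentially "model proposes, human disposes". A minimal sketch, where the reviewer callback and the 0.5 threshold are illustrative assumptions rather than anything the Act specifies:

```python
# Human-oversight gate: the model produces a proposal, a human reviewer
# makes the final call, and the system never overrides the reviewer.

def decide_with_oversight(model_score: float, review_fn,
                          threshold: float = 0.5) -> str:
    """Return a final decision only after explicit human review."""
    proposal = "approve" if model_score >= threshold else "reject"
    return review_fn(proposal, model_score)  # reviewer may accept or override

# Simulated reviewer overriding a borderline automated rejection:
print(decide_with_oversight(0.48, lambda proposal, score: "approve"))
```

Logging the reviewer's decision alongside the model's proposal also feeds the record-keeping requirement above.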

7. Accuracy, Robustness, and Cybersecurity

Document accuracy metrics, demonstrate robustness against errors and adversarial manipulation, implement cybersecurity measures proportionate to the risk.

8. Conformity Assessment

Before market placement, undergo a conformity assessment. Most Annex III systems: self-assessment permitted. Biometric identification: third-party notified body assessment required. Result: EU Declaration of Conformity + CE marking.

9. Registration in EU Database

High-risk AI systems must be registered in the EU database for high-risk AI systems before market placement, with a unique identification number and a reference to the full technical documentation.

10. Post-Market Monitoring

Ongoing obligation — not a one-time check. Active collection and review of real-world performance data, systematic monitoring plan, serious incident reporting to national authorities, corrective action when systems underperform.
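One concrete monitoring check is accuracy drift against the benchmark documented at conformity assessment. A sketch, where the 5-point tolerance is an illustrative assumption your monitoring plan would define:

```python
# Flag a system for the corrective-action path when observed real-world
# accuracy drifts materially below the documented benchmark.

def needs_corrective_action(benchmark_acc: float, observed_acc: float,
                            tolerance: float = 0.05) -> bool:
    """True when performance falls below benchmark minus tolerance."""
    return (benchmark_acc - observed_acc) > tolerance

print(needs_corrective_action(0.91, 0.83))  # True: an 8-point drop
print(needs_corrective_action(0.91, 0.89))  # False: within tolerance
```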

The EU AI Act Compliance Checklist — Print This

Use this as your pre-August 2026 sprint board:

Classification

  • Mapped all AI systems against Annex III categories
  • Determined GPAI applicability (training compute, downstream deployment)
  • Documented risk tier for each system with legal justification

Documentation

  • Technical documentation complete for each high-risk system
  • Data governance policy documented with lineage records
  • Risk management system established and documented
  • Transparency notices drafted for users and affected individuals

Technical Controls

  • Logging and record-keeping system operational
  • Human oversight mechanism designed and tested
  • Accuracy and robustness metrics benchmarked
  • Cybersecurity assessment completed

Legal and Regulatory

  • Conformity assessment pathway confirmed (self vs. notified body)
  • EU Declaration of Conformity drafted
  • CE marking process initiated (if applicable)
  • EU database registration for high-risk systems completed
  • Post-market monitoring plan written

Governance

  • AI governance owner designated (equivalent to a DPO for AI)
  • Employee training on prohibited practices completed
  • Third-party supplier AI assessments in procurement contracts
  • Incident response plan for AI failures documented

The Cost of Getting This Wrong

The EU AI Act creates a three-tier penalty structure:

  • Prohibited practices violations: Up to €35M or 7% of global turnover
  • High-risk requirements violations: Up to €15M or 3% of global turnover
  • Incorrect or misleading information to authorities: Up to €7.5M or 1% of global turnover
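Because each cap is "up to X or Y% of turnover, whichever is higher", maximum exposure depends on company size. A sketch using the caps above, with hypothetical turnover figures:

```python
# Maximum fine exposure per tier: the higher of the fixed cap and the
# turnover percentage. Caps are from the Act; turnover inputs are examples.

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Return the ceiling fine in EUR for a given violation tier."""
    caps = {
        "prohibited": (35e6, 0.07),
        "high_risk": (15e6, 0.03),
        "misleading_info": (7.5e6, 0.01),
    }
    fixed_cap, pct = caps[tier]
    return max(fixed_cap, pct * global_turnover_eur)

print(max_fine_eur("prohibited", 1e9))  # 70,000,000: 7% of €1B beats €35M
print(max_fine_eur("high_risk", 50e6))  # 15,000,000: the fixed cap dominates
```

For small and mid-size companies the fixed caps dominate, which is why the fines can exceed a full year of revenue.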

National market surveillance authorities will have significant enforcement powers, including the right to demand access to training data, algorithms, and testing documentation.

Sources

  • EU AI Act — Regulation (EU) 2024/1689, Official Journal of the EU, July 12, 2024
  • EU AI Office — High-Level Summary of the AI Act (2024)
  • European Commission — AI Act Implementation Timeline, October 2024
  • EU AI Office — GPAI Code of Practice Draft Guidelines (2025)
  • Conformity Assessment Guidance — EU AI Act Article 43 and Annex VII

DingDawg builds automated AI compliance infrastructure. This post is informational and does not constitute legal advice. Consult qualified EU legal counsel for your specific situation.

Get Your Automated EU AI Act Compliance Report

See exactly where you stand across all risk tiers. Prioritized remediation list. Documented evidence trail. In hours, not months.

Get Your Compliance Report →