The wider AI Act framework applies from 2 August 2026. That is four months away. Organizations that cannot produce a credible inventory of where AI appears in their products, what role they play in the regulatory chain, and which use cases might qualify as high risk are already behind.
The AI Act prohibitions and AI literacy requirements have applied since 2 February 2025, and the Commission published guidelines on prohibited AI practices shortly afterwards. GPAI model obligations and governance structure requirements followed on 2 August 2025. The broader framework for high-risk AI systems and general deployer obligations arrives on 2 August 2026, and it covers most of the obligations that affect product teams directly.
THE_INVENTORY_PROBLEM
Most product organizations do not have a reliable list of where AI appears in their systems. They have teams that know about the features they built, procurement records that mention "AI" in varying degrees of specificity, and a general awareness that machine learning runs somewhere in the stack. That is not a sufficient compliance posture under the Act.
The inventory problem is also a role problem. The Act distinguishes between providers (who place an AI system on the market), deployers (who put it into use), importers, and distributors. Most product teams are deployers — they integrate models built by others — but many have not yet asked their suppliers the questions that determine whether those systems were built to Act requirements.
"You cannot prepare for the AI Act if the organisation cannot first find its own AI."
THE_HIGH_RISK_CATEGORY_QUESTION
Annex III of the Act lists the use cases that qualify as high risk, including AI systems used in: education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, administration of justice, and certain safety-critical components. The definition catches more products than teams typically assume at first read — particularly in HR tooling, customer service automation, and financial decision-making products.
High-risk systems carry the heaviest obligations: conformity assessments, technical documentation, human oversight mechanisms, logging capabilities, and registration in the EU database before deployment. The August 2026 milestone is when most of those obligations become enforceable for deployers, not just providers.
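One way to operationalize that first read is a crude screening pass over the inventory. Below is a minimal sketch in Python; the category names are paraphrased shorthand and the keyword map is invented for illustration, so neither should be read as legal text:

```python
from enum import Enum

class AnnexIIIArea(Enum):
    # Paraphrased shorthand for Annex III areas -- not the legal text.
    EDUCATION = "education and vocational training"
    EMPLOYMENT = "employment and worker management"
    ESSENTIAL_SERVICES = "access to essential private and public services"
    LAW_ENFORCEMENT = "law enforcement"
    MIGRATION = "migration and border control"
    JUSTICE = "administration of justice"
    SAFETY_COMPONENT = "safety-critical components"

# Invented keyword map: a first-pass screen only. A hit means
# "route to legal review"; a miss proves nothing.
SCREENING_KEYWORDS: dict[AnnexIIIArea, set[str]] = {
    AnnexIIIArea.EMPLOYMENT: {"hiring", "cv screening", "performance review"},
    AnnexIIIArea.ESSENTIAL_SERVICES: {"credit scoring", "benefits eligibility"},
    AnnexIIIArea.EDUCATION: {"exam proctoring", "admissions scoring"},
}

def flag_for_review(use_case_description: str) -> list[AnnexIIIArea]:
    """Return every screened Annex III area whose keywords appear."""
    text = use_case_description.lower()
    return [
        area
        for area, keywords in SCREENING_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]

print(flag_for_review("automated CV screening for engineering hiring"))
# flags AnnexIIIArea.EMPLOYMENT -> send to legal review
```

The point of the sketch is the routing decision, not the matching: anything flagged gets a human legal read, and the unflagged remainder still needs the role and supplier questions below.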
WHAT_TO_DO_BEFORE_AUGUST
The GPAI code of practice was finalized in July 2025, and further Commission guidance on the high-risk framework is expected before the August 2026 application date. That is useful context, but it does not change the three things product teams should be doing in parallel right now:
- Complete the AI use-case inventory: list every AI-enabled feature, internal tool, and third-party dependency with the model source, intended use, user population, and a failure-impact assessment (a record sketch follows this list).
- Classify each use case by role: provider, deployer, importer, or distributor. The answer determines which obligations attach and which pass through to your upstream supplier.
- Review supplier contracts for AI Act compliance commitments. If a model provider cannot demonstrate compliance, that needs to change before August, not at the next contract renewal.
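As one sketch of what a single inventory row might capture, assuming Python and invented field names (an illustrative schema, not a compliance template), combining the first two items above:

```python
from dataclasses import dataclass, field
from enum import Enum

class ActRole(Enum):
    # Roles the Act distinguishes; one record can carry several
    # (e.g. a team that heavily modifies a vendor model may hold two).
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AIUseCaseRecord:
    """One row in the AI inventory. Field names are illustrative."""
    feature_name: str
    model_source: str            # vendor, open-weights, or in-house
    intended_use: str
    user_population: str
    failure_impact: str          # what happens when the system is wrong
    roles: set[ActRole] = field(default_factory=set)
    annex_iii_candidate: bool = False   # route to legal review if True
    supplier_compliance_evidence: str | None = None

    def gaps(self) -> list[str]:
        """Fields still blocking a credible compliance posture."""
        missing = []
        if not self.roles:
            missing.append("role classification")
        if ActRole.DEPLOYER in self.roles and not self.supplier_compliance_evidence:
            missing.append("supplier compliance evidence")
        return missing

record = AIUseCaseRecord(
    feature_name="resume ranker",
    model_source="third-party API",
    intended_use="shortlist applicants",
    user_population="recruiters, plus applicants as affected persons",
    failure_impact="qualified applicants silently rejected",
    roles={ActRole.DEPLOYER},
    annex_iii_candidate=True,   # employment use: screen against Annex III
)
print(record.gaps())  # ['supplier compliance evidence']
```

The useful part is the gaps() check: run it across the whole inventory and the output is, in effect, the pre-August work queue.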
Four months sounds like enough time. In practice, the inventory alone takes longer than expected, because AI appears in features and tools that are not labeled as AI anywhere in the internal product taxonomy.