Decision Protocol · AI Edition · Documentation Layer

EU AI Act readiness

A documentation layer for organizations preparing for AI governance obligations under Regulation (EU) 2024/1689.

The Decision Protocol produces structured artefacts — classifications, vendor reviews, contemporaneous decision records, literacy attestations — that internal legal and compliance work can reference when preparing for AI Act obligations. It is operational documentation, not legal advice. It does not certify compliance and does not substitute for independent legal review.

Method developed by Lamberto Iezzi — 35+ years in European institutional banking, fiduciary governance, and regulatory risk. Read bio → · LinkedIn · Foundation

Regulatory context — what is in force today

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024. Its obligations apply in phases. The summary below reflects the official timeline as of 3 May 2026; it remains subject to the ongoing legislative process noted below.

  1. 1 Aug 2024
    Entry into force. The Regulation becomes binding EU law; obligations begin to apply on the dates below.
  2. 2 Feb 2025
    Prohibitions and AI literacy obligations applicable. Practices listed in Article 5 are prohibited. Article 4 requires providers and deployers to ensure adequate AI literacy among personnel involved in operating and using AI systems.
  3. 2 Aug 2025
    General-purpose AI rules and governance applicable. Obligations for providers of GPAI models begin. Member States designate national competent authorities. EU-level governance bodies operational.
  4. 2 Aug 2026
    Most remaining obligations applicable. Rules for high-risk AI systems listed in Annex III enter into application. Article 50 transparency obligations toward natural persons (including chatbots and synthetic content) apply. Enforcement begins at national and EU level.
  5. 2 Aug 2027
    High-risk AI in regulated products applicable. Rules covering AI systems that are safety components of products listed in Annex I (with extended transition).

Ongoing legislative process

In November 2025 the European Commission published the Digital Omnibus on AI, a legislative proposal that would, if adopted, defer certain high-risk AI obligations from 2 August 2026 to a later date. The proposal is in trilogue between Parliament, Council, and Commission and has not been adopted as of 3 May 2026. Until adoption, the original 2 August 2026 application date applies as written. Organizations are advised to verify current status with their legal counsel before acting on this summary.

What the AI Act requires — in summary

A non-exhaustive overview of provisions most often relevant to deployers (organizations using AI systems) rather than providers (those who develop and place them on the market). The Act distinguishes the two roles and assigns different obligations to each. Always verify scope and applicability against the original text of the Regulation and with qualified legal counsel.

Article 4

AI literacy

Providers and deployers shall ensure a sufficient level of AI literacy among staff and other persons dealing with the operation and use of AI systems on their behalf.

Article 5

Prohibited practices

Listed AI practices are prohibited — including subliminal manipulation, exploitation of vulnerabilities, social scoring, real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes (subject to narrow exceptions), and others.

Article 9

Risk management system (high-risk)

A risk management system shall be established, implemented, documented, and maintained in relation to high-risk AI systems — covering identification, analysis, evaluation, and mitigation of foreseeable risks.

Article 12

Record-keeping

High-risk AI systems shall technically allow for the automatic recording of events (logs) during their lifetime, supporting traceability of functioning and post-market monitoring.

Article 13

Transparency and information to deployers

High-risk AI systems shall be designed to allow deployers to interpret system output and use it appropriately. Providers must supply instructions for use containing specified information.

Article 14

Human oversight

High-risk AI systems shall be designed to be effectively overseen by natural persons during their period of use — with measures appropriate to the risks, level of autonomy, and context of use.

Article 26

Obligations of deployers

Deployers of high-risk AI systems shall use them in accordance with provider instructions, assign human oversight to qualified persons, ensure relevant input data is appropriate, monitor operation, and keep automatically generated logs for an appropriate period (at least six months, unless applicable Union or national law provides otherwise).

Article 27

Fundamental Rights Impact Assessment

Certain deployers (notably bodies governed by public law and deployers of specified high-risk systems) shall perform an assessment of the impact on fundamental rights before first use, covering process description, affected categories of persons, specific risks, and human oversight measures.

Article 50

Transparency obligations toward natural persons

Natural persons interacting with an AI system (including chatbots) shall be informed of that fact. AI-generated or manipulated content (deepfakes, synthetic media, AI-generated text on matters of public interest) shall be marked as such.

How the Decision Protocol contributes

The Decision Protocol does not satisfy any AI Act obligation by itself. What it does is produce structured documentation — the kind that internal legal, compliance, and audit work routinely needs to evidence that governance was applied contemporaneously, not reconstructed retroactively. Below: where it contributes, and where it explicitly does not.

Article 4 — AI literacy

The AI Literacy Checklist (in the Use & Control Framework) produces an attestation per role: which staff have completed which literacy modules, when, and against what content. Output is a dated record that can be referenced in support of internal compliance evidence on Article 4 obligations.
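
The attestation is delivered as a document template, not software; purely as an illustration of its contents, the record can be sketched as a structure. All field names below are assumptions made for readability, not the Protocol's actual schema:

```typescript
// Hypothetical sketch of an Article 4 literacy attestation record.
// All field names are illustrative assumptions; the Protocol ships
// document templates, not a machine-readable schema.
interface LiteracyAttestation {
  role: string;               // the role being attested (e.g. "credit analyst")
  staffMember: string;        // who completed the training
  modulesCompleted: Array<{
    moduleId: string;         // which literacy module
    contentVersion: string;   // against what content version
    completedOn: string;      // ISO date: when it was completed
  }>;
  attestedBy: string;         // who signed off on the attestation
  attestationDate: string;    // the date that makes the record referenceable
}
```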

Article 5 — prohibited practices

The AI Use Case Triage Tool and Classification Guide support identification of use cases that fall within prohibited or sensitive categories before deployment, producing a documented categorization decision. Whether a specific use is prohibited under Article 5 remains a legal determination.

Article 9 — risk management system

Decision Records produced by the Protocol document risks identified at decision time, alternatives considered, mitigations adopted, and residual risks accepted. They are one input into a broader risk management system; they do not constitute that system in full.

Article 12 — record-keeping

The Protocol's central artefact is a contemporaneous Decision Record covering classification, rationale, approval chain, and trade-offs — recorded before consequences are known. It complements the technical event-logging required by Article 12 with an organizational layer of governance documentation.
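
Purely as an illustration of what "contemporaneous" means in practice, the record's contents can be sketched as a structure; the field names are assumptions for this page, not the Protocol's template:

```typescript
// Hypothetical sketch of a Decision Record, written at decision time.
// Field names are illustrative assumptions, not the Protocol's template.
interface DecisionRecord {
  decisionId: string;
  recordedOn: string;               // dated at decision time, before outcomes are known
  classification: string;           // category assigned to the AI use case
  rationale: string;                // why the decision was taken, stated contemporaneously
  alternativesConsidered: string[]; // options examined and set aside
  mitigationsAdopted: string[];     // controls put in place
  residualRisksAccepted: string[];  // risks knowingly carried (the trade-off element)
  approvalChain: Array<{
    role: string;                   // approving role
    approver: string;               // person acting in that role
    approvedOn: string;             // ISO date of approval
  }>;
}
```

The point of the structure is sequence: every field is completed before deployment, so the record evidences governance applied at the time rather than reconstructed after the fact.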

Article 13 — transparency and information to deployers

Out of scope. Article 13 places obligations on providers, not deployers. The Protocol addresses the deployer perspective.

Article 14 — human oversight

Partial. The Control Ownership and Approval Matrix documents which roles are accountable for oversight of which AI uses. The substantive design of oversight mechanisms (interruption capability, override authority, second-line review) remains a contextual decision the Protocol does not specify.
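
Conceptually, each matrix entry binds one AI use to an accountable role. A minimal sketch, with all names assumed for illustration:

```typescript
// Hypothetical sketch of one row of the Control Ownership and Approval Matrix.
// Names are illustrative assumptions; the Protocol delivers this as a document.
interface OversightAssignment {
  aiUseCase: string;         // the AI use being overseen
  accountableRole: string;   // role accountable for oversight (not a named person)
  approvalAuthority: string; // role that may approve deployment or material change
  reviewCadence: string;     // e.g. "quarterly"; the substantive oversight design
                             // (interruption, override, second-line review) stays
                             // a contextual decision the Protocol does not specify
}
```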

Article 26 — deployer obligations

Several Article 26 expectations align with Protocol outputs: assignment of human oversight to qualified persons (Approval Matrix), monitoring of operation (Decision Records updated upon material change), retention of logs and governance documentation. The Vendor & Data Control Framework adds vendor onboarding and review documentation to the deployer-side evidence base.

Article 27 — Fundamental Rights Impact Assessment

The Protocol's Conscious Trade-Off principle — documenting risks accepted with a stated benefit and proportionality reason — produces material that can feed into the FRIA process. The FRIA itself, where required, remains a distinct legal instrument; the Protocol provides operational substrate, not the assessment.

Article 50 — transparency to natural persons

Out of scope (substantive). Disclosure language and labelling of AI interactions, deepfakes, and synthetic content are product-design choices made by the deployer or provider in the user-facing surface. The Protocol can document the policy decision (when disclosure is required, by which channels) but does not provide the disclosure templates themselves.

For the full mapping across NIST AI RMF, ISO/IEC 42001, EU AI Act, and SR 11-7: see the regulatory crosswalk — free download.

What this does not do

Stated explicitly so it cannot be misread later.

  • × The Decision Protocol does not constitute legal, regulatory, or compliance advice. It is procedural governance documentation. Independent legal review is the responsibility of the adopting organization.
  • × It does not certify that the adopting organization is compliant with the EU AI Act. Certification of conformity is a separate process governed by the Regulation itself.
  • × It does not satisfy the conformity assessment obligations placed on providers of high-risk AI systems and does not produce CE-marking documentation.
  • × It does not include technical model evaluation (testing, evaluation, verification, and validation — TEVV): performance metrics, robustness testing, fairness testing, and bias measurement remain the responsibility of the AI vendor, the provider, or the adopting organization's data-science function.
  • × It does not provide user-facing disclosure templates for Article 50 transparency obligations (chatbot disclosure language, deepfake labelling) — these are product-design artefacts not covered by the Protocol.
  • × It does not include consulting, implementation services, or post-purchase advisory support. All instruments are delivered as a one-time digital download.

Recommended for AI Act readiness

Complete AI Governance Suite

Both frameworks combined — covering internal AI use (Article 4 literacy, Articles 9 and 26 risk management and oversight evidence, classification, triage, Decision Records) and AI vendor governance (vendor reviews, restricted data triage, onboarding, escalation). Plus four exclusive components, including the 20-case Execution Library that turns templates into a working system.

$397

One-time purchase · Immediate digital access · Save $97 vs. buying separately

Get the Complete Suite — $397

Other ways to engage

Individual Frameworks — $247 each

Adopt only the AI Use & Control Framework (internal use governance) or only the AI Vendor & Data Control Framework (third-party AI tooling). Suited for organizations whose exposure is concentrated on one side.

See the frameworks →

Decision Record Toolkit — $29

A single defensible decision record. For documenting one specific AI decision — before scaling up to a full framework. Low-risk entry point to the method.

Get the Toolkit →

Regulatory Crosswalk — free download

Two volumes mapping the Decision Protocol method to NIST AI RMF, ISO/IEC 42001, the EU AI Act, and SR 11-7 Model Risk Management. Honest about coverage. Explicit about gaps. For evaluation before commitment.

Download free →

For advisors and consulting firms

Advising client organizations on AI Act readiness? The method can be applied within your professional engagements with firm-branding permissions and broad-scope deployment terms. See the Agency License →

Common questions

EU AI Act readiness — FAQ

Does adopting the Decision Protocol make us compliant with the EU AI Act?

No. EU AI Act compliance is achieved through your own legal, regulatory, and operational work — including independent legal review, conformity assessment where applicable, and adaptation of internal processes to your specific AI deployment context. The Decision Protocol provides the documentation layer that your work will reference; it does not substitute for that work.

Will auditors or regulators accept the Protocol's artefacts as evidence?

The Protocol produces structured artefacts — Decision Records, classification logs, vendor reviews, literacy attestations — that can be presented as evidence of internal governance. Whether they satisfy a specific regulator depends on the nature of the AI system, the scope of the audit, the standards applied by that regulator, and how the artefacts are used alongside other evidence. The Institute makes no guarantee on audit outcomes.

Is this legal advice?

No. The Decision Protocol is procedural governance documentation. Independent legal review of any adopted policy, classification, or assessment remains the responsibility of the organization. The summaries on this page reflect publicly available information about Regulation (EU) 2024/1689 and are not a substitute for qualified counsel.

How does the Digital Omnibus on AI affect these deadlines?

The Digital Omnibus on AI is a legislative proposal under negotiation as of 3 May 2026. If adopted, it would defer certain high-risk obligations beyond 2 August 2026. Its outcome will affect deadlines, not the substantive obligations: documentation, risk management, record-keeping, oversight, literacy, and prohibited-practice avoidance remain expected of organizations deploying AI in regulated contexts. Preparation now is not invalidated by potential timeline adjustments. We recommend verifying the current status with your legal counsel before relying on any specific deadline.

Does the Act apply to us if we do not deploy high-risk AI systems?

Several obligations apply across risk levels — AI literacy under Article 4 applies to any organization operating AI systems, the prohibitions under Article 5 apply universally, and Article 50 transparency rules apply to AI systems interacting with natural persons regardless of high-risk classification. Organizations also commonly rely on the same documentation discipline (classification, vendor review, decision records) for internal accountability beyond the AI Act itself.

Are we a provider or a deployer under the Act?

The Regulation defines a provider as the entity that develops or has developed an AI system and places it on the EU market or puts it into service. A deployer is an entity using an AI system under its authority in the course of a professional activity. Most organizations integrating commercial AI tools fall on the deployer side. The Decision Protocol is built primarily for deployers; providers face additional obligations (technical documentation, conformity assessment, post-market monitoring) outside the Protocol's scope.

What does the Suite include, compared with the individual frameworks?

The Suite ($397) integrates both frameworks — AI Use & Control plus AI Vendor & Data Control — covering internal AI use governance and AI vendor governance respectively. It adds four bundle-exclusive components: a 20-case Execution Library, an Execution Library Navigator, a 10-page Executive Briefing, and an integrated 30-day Deployment Roadmap. Individual frameworks ($247 each) are appropriate when the organization's exposure is concentrated on one side.

Can we start immediately after purchase?

Yes. All instruments are delivered as a structured digital download immediately after payment. Small teams can begin internal review and adoption within a single working session; mid-sized organizations typically deploy in 2–4 weeks; the Enterprise Deployment Checklist (included) provides a phased plan for organizations with 200+ employees.

How can we evaluate what the Protocol covers before committing?

The free Regulatory Crosswalk publishes the mapping across NIST AI RMF, ISO/IEC 42001, the EU AI Act, and SR 11-7 Model Risk Management — including explicit notation of full coverage, partial coverage, and out-of-scope items.

Page last updated: 3 May 2026. Regulatory content on this page is provided for information only and may not reflect the most recent legislative developments. Verify current obligations with qualified legal counsel before acting.