5 Things Due Before August 2 [EU AI Act Checklist]
Field Guide
The EU AI Act high-risk deadline hits August 2. Five compliance actions you can start this week, with a printable checklist.
Key takeaways
- August 2, 2026 is when EU AI Act high-risk compliance becomes mandatory for most providers.
- Five concrete actions, from risk management systems to conformity assessment, can start this week.
- Preparation now prevents scrambling later. Use the next 19 weeks to build compliance infrastructure, not rush it.
You have 19 weeks. That’s not long.
On August 2, 2026, the EU AI Act high-risk requirements go live. If you’re building or deploying AI systems in the EU—or selling to people who are—this deadline matters. Not as a theoretical risk. As a hard date when things change.
The difference between companies that ship compliant systems in August and companies that ship them in November comes down to what they do between now and June.
Answer-First Summary
The EU AI Act’s high-risk system requirements take effect August 2, 2026. Providers must implement risk management systems, data governance frameworks, technical documentation, conformity assessment procedures, and human oversight mechanisms. You can start all five this week. Delay, and August becomes chaos.
The Calendar Is Not Negotiable
Most people hear “August 2” and think there’s wiggle room. There isn’t.
The EU doesn’t issue soft deadlines. Wilson Sonsini published 2026 compliance guidance last month. Drata’s regulatory tracker lists this date as immovable. The EU AI Act implementation timeline is public. Your sales team probably saw it already. Your legal team definitely did.
Here’s what changes on that date: Article 16 obligations for high-risk AI system providers become enforceable. That means risk management systems, data governance requirements, technical documentation, human oversight protocols, and conformity assessment procedures shift from “recommended” to “legally required.”
If you’re selling a high-risk AI system (biometrics, critical infrastructure, hiring, law enforcement, benefits administration) to the EU, this applies to you.
The Five Things You Need to Build
Not think about. Build.
1. Risk Management System (Start This Week)
High-risk AI systems need documented risk management. That means identifying sources of risk. That means measuring probability and severity. That means defining how you’ll mitigate them.
This isn’t vague. Article 9 specifies the framework: identify risks throughout the system lifecycle, evaluate them, implement controls, monitor them continuously, and document it all. If a compliance officer pulls your file on August 3, they’re looking for evidence that you did this systematically, not optimistically.
What to do now: Audit your current system for failure modes. Where could the model produce a biased decision? Where could it fail silently? Where could someone manipulate its output? Document three to five scenarios. Then assign ownership. Risk management without ownership is fiction.
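The "document three to five scenarios, then assign ownership" step can be made concrete in code. Here is a minimal sketch of a risk register in Python; the field names, scoring scheme, and example risks are illustrative assumptions, not anything mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    probability: int   # 1 (rare) .. 5 (near-certain)
    severity: int      # 1 (negligible) .. 5 (critical)
    mitigation: str
    owner: str         # risk management without ownership is fiction

    @property
    def score(self) -> int:
        # Simple probability x severity rating, used only to triage review order
        return self.probability * self.severity

# Hypothetical entries for a hiring system
register = [
    RiskEntry("R-001", "Model produces a biased hiring recommendation",
              3, 5, "Quarterly fairness audit on held-out demographic slices", "ml-lead"),
    RiskEntry("R-002", "Silent degradation after an upstream data schema change",
              4, 3, "Schema validation and drift alerts in the ingestion pipeline", "data-eng"),
    RiskEntry("R-003", "Adversarial input manipulates the model's output",
              2, 4, "Input sanitization plus an output policy filter", "security"),
]

# Review the highest-scoring risks first
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(entry.risk_id, entry.score, entry.owner)
```

Even a register this small gives an assessor what they are actually looking for: evidence of systematic identification, evaluation, mitigation, and a named owner per risk.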
2. Data Governance Framework (Start This Week)
The EU doesn’t want you guessing about your training data. They want provenance. They want source documentation. They want evidence that you’re not training on scraped copyrighted material without disclosure.
California already moved here. AB 2013 took effect January 1, 2026, requiring generative AI developers to publicly disclose training data sources and dataset composition. The EU’s high-risk requirements follow a similar logic: if you can’t tell me where your data came from, I don’t trust your system.
This ties to a bigger shift. Texas enacted the Responsible AI Governance Act effective January 1, 2026, with prohibitions on certain uses and disclosure requirements. Same pattern: data transparency matters now.
What to do now: Map your training datasets. Source. Size. Preprocessing steps. Retain this documentation. If you’re using publicly available datasets, document which ones and why. If you’re using proprietary data, document the licensing agreements. You’ll need this for the conformity assessment.
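The dataset mapping above can live as machine-readable provenance records kept next to the model artifacts. A minimal sketch, in Python: the schema, dataset names, and license identifiers below are hypothetical examples, not a prescribed format.

```python
import json

# Illustrative provenance records: source, size, preprocessing, licensing
datasets = [
    {
        "name": "resume-corpus-v2",
        "source": "proprietary ATS export",
        "license": "internal data-sharing agreement (hypothetical ref: DSA-17)",
        "records": 412_000,
        "preprocessing": ["PII redaction", "deduplication", "language filter (en)"],
        "collected": "2023-06 to 2024-12",
    },
    {
        "name": "occupation-taxonomy",
        "source": "public (ESCO v1.1)",
        "license": "CC BY 4.0",
        "records": 3_008,
        "preprocessing": ["mapped to internal job codes"],
        "collected": "2024-03",
    },
]

# Serialize and retain alongside model artifacts so it is retrievable at assessment time
provenance = json.dumps(datasets, indent=2)
print(provenance)
```

The point is not the format; it is that every dataset answers "where did this come from, what did we do to it, and under what license" without anyone having to reconstruct the answer in July.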
3. Technical Documentation and Logs (Start This Week)
High-risk systems must maintain logs. Automatically generated logs. The system needs to record what it’s doing so you can audit it later.
This is about accountability. If someone claims your hiring algorithm discriminated against them, you need to be able to retrieve the inputs, the model state at that time, and the outputs. You need to show the work. “It’s a black box” is not acceptable.
Article 19 spells this out: keep the automatically generated logs, retain the documentation, and make both auditable.
What to do now: Audit your current logging infrastructure. What are you capturing? What are you missing? Do you have timestamps? Input records? Output records? Model version information? Set up logging for the blind spots. This isn’t optional after August 2. Start now so it’s automatic by then.
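The blind-spot list above (timestamps, inputs, outputs, model version) maps directly onto a log record. A minimal sketch of per-decision audit logging in Python; the record fields, the version string, and the example payloads are assumptions for illustration.

```python
import io
import json
import time
import uuid

MODEL_VERSION = "hiring-ranker-1.4.2"  # hypothetical version identifier

def log_decision(inputs: dict, output: dict, sink) -> str:
    """Append one auditable, timestamped record per model decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": MODEL_VERSION,
        "inputs": inputs,
        "output": output,
    }
    sink.write(json.dumps(record) + "\n")  # append-only JSONL, one line per event
    return record["event_id"]

# In production the sink would be an append-only file or a log service;
# an in-memory buffer keeps the sketch self-contained.
log = io.StringIO()
event_id = log_decision(
    {"candidate_id": "C-1042", "features": {"years_experience": 7}},
    {"score": 0.81, "recommendation": "advance"},
    log,
)
```

With records shaped like this, "retrieve the inputs, the model state at that time, and the outputs" becomes a query, not an archaeology project.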
4. Conformity Assessment Procedure (Planning Window: Now Through June)
This is the big one. Before you can legally place a high-risk system on the EU market after August 2, you need to complete a conformity assessment confirming your system meets the requirements.
For many Annex III systems you can self-assess under the internal-control procedure, but certain categories, notably remote biometric identification, require review by a notified body, an independent third-party assessor. Either way, the process takes time. If you wait until July, you’ll miss August.
What to do now: Identify notified bodies (the EU’s approved assessors) that work with your system type. Get on their calendar. Schedule a pre-assessment consultation to understand what they’ll need. This is a 12- to 16-week process if you start now. It’s a nightmare if you start in July.
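Working backward from the deadline makes the critical path explicit. A trivial Python calculation, using the article's 12- to 16-week estimate plus an assumed two-week buffer for findings and rework:

```python
from datetime import date, timedelta

DEADLINE = date(2026, 8, 2)   # high-risk obligations become enforceable
ASSESSMENT_WEEKS = 16         # upper end of the 12- to 16-week estimate
BUFFER_WEEKS = 2              # illustrative slack for findings and rework

latest_start = DEADLINE - timedelta(weeks=ASSESSMENT_WEEKS + BUFFER_WEEKS)
print(f"Book the assessment no later than {latest_start}")
```

Run the numbers and the back-calculated start date lands in late March, which is why "I'll deal with it in July" is not a plan.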
5. Human Oversight Mechanism (Design Now, Implement by July)
High-risk systems need humans in the loop. That’s not bureaucratic. That’s operational necessity.
For hiring systems, employment decisions require human review. For systems affecting critical infrastructure, operational decisions require human oversight. The oversight has to be meaningful. Not a checkbox. Not a senior person signing off on a report they didn’t read.
This means designing how decisions flow to humans. Which decisions require manual approval? Who approves them? What information do they see? How do they override the system if it’s wrong?
What to do now: Map your current decision workflows. Find the gaps where a human should be deciding but a system is. Design the override mechanism. Train the team that will do the actual oversight. You can’t bolt this on in August. You have to build it now.
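The routing-and-override design above can be prototyped in a few lines. A minimal sketch in Python; the threshold, field names, and status strings are illustrative assumptions, and a real system would log every routing and override event (see section 3).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject: str
    model_score: float          # e.g. suitability score in [0, 1]
    recommendation: str         # "advance" or "reject"
    status: str = "pending"
    reviewer: Optional[str] = None

BORDERLINE = 0.2  # illustrative: scores near the 0.5 cutoff always go to a human

def route(d: Decision) -> str:
    # Adverse and borderline outcomes require manual approval; the rest pass through
    if d.recommendation == "reject" or abs(d.model_score - 0.5) < BORDERLINE:
        d.status = "needs_human_review"
    else:
        d.status = "auto_approved"
    return d.status

def override(d: Decision, reviewer: str, final: str) -> Decision:
    # A named reviewer sees the inputs and can reverse the system's recommendation
    d.status = f"overridden:{final}"
    d.reviewer = reviewer
    return d
```

The design choice that matters here is that rejections never auto-complete: every adverse outcome reaches a human who is recorded by name, which is what separates meaningful oversight from a checkbox.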
The Human Parallel
Deadlines do two things to people: they clarify and they terrify.
Right now, this August 2 date is abstract. Nineteen weeks away. Easy to deprioritize. Easier to assume your legal team is handling it.
But individuals and organizations follow the same curve. Without external pressure, work expands infinitely. Add a hard deadline, and priorities crystallize instantly.
The best teams are the ones that treat this deadline the way they’d treat any hard constraint: work backward from it. What needs to happen first? What needs to happen before that? What can start today?
Call it preparation. Call it self-preservation. Either way, it works.
Start One Thing This Week
Pick one of the five. Not all of them. One.
If your org is new to this: start with the data governance framework. Know where your training data comes from. Document it. Everything else follows from that foundation.
If you’re further along: start with the conformity assessment. Schedule that pre-assessment call. Get on a notified body’s calendar. That’s the critical path item nobody can accelerate later.
The evidence of preparedness is simple: entries in your project tracker, starting today.
What Comes Next
This is post 7 of 7 for the week. The theme across all of them: “The Evidence Is In.”
We’ve talked about agents without security. We’ve talked about NIST guidance. We’ve talked about the mindset of building like you’ll get it wrong.
This checklist is about the oldest form of evidence: compliance deadlines. They reveal what was real all along. You either built these systems defensively from the start, or you’re rushing to catch up now. August 2 will tell the difference.
Next week, we shift. New theme. Different angle. But the principle stays the same: the work you do between now and then determines what August looks like.
Build now. Breathe in August.