Whistleblowing and the EU AI Act
This page provides an overview of the EU Whistleblowing Directive (2019) and how it relates to the EU AI Act, along with useful resources for potential whistleblowers. This resource was put together by Santeri Koivula, an EU Fellow at the Future of Life Institute, and Karl Koch, founder of the AI […]
What this rule actually says
The EU AI Act requires companies to have a safe way for employees, contractors, and third parties to report problems with AI systems—like if an AI medical scribe is giving dangerous advice or a hiring tool is systematically discriminating. Companies must protect whistleblowers from retaliation. This isn't a new rule; it builds on existing EU whistleblower protections and extends them specifically to AI harms.
Who it applies to
- If you're based in the EU or sell to EU customers: This applies to you, period. If you're US-based but have EU users, the jurisdictional question is murkier, but assume it could apply.
- If you're building "high-risk" AI: Medical devices (including scribes), hiring tools, credit decisions, and safety-critical systems are explicitly high-risk under the EU AI Act. General chatbots and support assistants are lower-risk unless they make consequential decisions about individuals.
- If you have employees, contractors, or external testers: They need a way to report problems. The rule covers your own team and anyone with access to your systems.
- If you process user data: You need to handle whistleblower reports confidentially. User data doesn't trigger the rule—but *how* you handle allegations about your use of user data does.
What founders need to do
- Assess your risk level (1–2 days). Does your product make decisions about people's health, employment, or finances? If yes, treat it as high-risk. If it's a support chatbot that can't deny services, you're lower-risk but still covered.
- Set up a confidential reporting channel (3–5 days). This can be simple: a dedicated email, an anonymous form, or a third-party hotline. It needs to be documented and easy for employees and contractors to find.
- Write a whistleblower protection policy (2–3 days). One page is fine. State that people won't face retaliation for reporting AI harms in good faith. Cover confidentiality and what happens after a report lands.
- Train your team on the policy (1 day). Make sure everyone knows the reporting channel exists. Document that you did this.
- Keep records and respond to reports (ongoing). Log reports, investigate promptly, and take action if harms are real. Don't retaliate against reporters—that's the main enforcement risk.
Bottom line
If you're in the EU or selling high-risk AI to EU customers, implement a basic whistleblower channel and policy now; it's a few days of work and protects you legally.