RegImpact
EU AI Act · Published 11/8/2024

Overview of all AI Act National Implementation Plans

This post gives an overview of the national authorities to be designated under the AI Act and what we know about the national implementation plans.

What this rule actually says

The EU AI Act is now live, and each EU country is setting up its own national authority to enforce it. Rather than one global rulebook, there are now 27+ different national implementation plans—meaning compliance might look slightly different depending on which EU country a user is in. Think of it like GDPR but for AI: centralized rules, decentralized enforcement.

Who it applies to

  • If you sell or operate your AI product in the EU (or to EU users), you need to care about this—even if your company is based elsewhere.
  • If you're building high-risk AI systems (medical scribes, hiring assistants, and some support chatbots that make consequential decisions all qualify), this definitely applies to you.
  • If you process personal data about EU residents, GDPR rules already apply to you; the AI Act layers on top with AI-specific requirements.
  • If you're only serving US or non-EU customers, you can monitor but don't need to act immediately—though some US states are building similar rules.
  • If your AI just summarizes support tickets or drafts templates without making final decisions, you're lower-risk (though still potentially in scope).

What founders need to do

  1. Understand your risk tier (1-2 days). Categorize your product: Is it high-risk (medical, hiring, safety-critical), limited-risk (transparency obligations only, e.g. chatbots, emotion detection), or minimal-risk? Most AI medical scribes and hiring assistants are high-risk.
  2. Map the relevant national authority (1 day). Identify which EU countries your users are in and check their AI Act implementation plans (published by their national authorities). Requirements may vary slightly.
  3. Audit your current practices (2-5 days). Document how you collect data, how your model works, what you log, and how you handle errors. You'll need this for compliance filings.
  4. Implement core safeguards (ongoing, 2-4 weeks initial setup). For high-risk systems: add explainability documentation, human oversight workflows, error logging, and a way to notify users when AI is making decisions about them.
  5. Prepare for registration or notification (1-2 weeks). High-risk systems typically need to register with the national authority. Check your country's specific plan—timelines vary.
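To make step 4 concrete, here is a minimal sketch of what a per-decision audit record might look like for a high-risk system. The schema, field names, and `log_decision` helper are illustrative assumptions, not something prescribed by the AI Act or any national plan—your actual record-keeping requirements will come from the relevant authority's guidance.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One audit-log entry per consequential AI decision (hypothetical schema)."""
    system_name: str
    model_version: str
    decision: str
    rationale: str                # human-readable explanation for reviewers
    user_notified: bool = False   # transparency: was the user told AI was involved?
    human_reviewed: bool = False  # human-oversight checkpoint
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, audit_log: list) -> dict:
    """Serialize the record and append it to an audit log (here, a list)."""
    entry = asdict(record)
    audit_log.append(entry)
    return entry

# Example: a hiring assistant advancing a candidate.
audit_log: list = []
entry = log_decision(
    AIDecisionRecord(
        system_name="resume-screener",
        model_version="2024.11",
        decision="advance_to_interview",
        rationale="Skills match >= 0.8 on required criteria",
        user_notified=True,
    ),
    audit_log,
)
```

In practice you would persist these entries to durable storage rather than a list, but the point is the shape: every consequential decision carries its model version, a reviewable rationale, and explicit notification/oversight flags.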

Bottom line

If you're selling an AI product to EU users, act now; if you're US-only, monitor the state and federal proposals taking shape at home, but you're not immediately blocked.