RegImpact
EU AI Act · Published 5/24/2024

Robust governance for the AI Act: Insights and highlights from Novelli et al. (2024)

In their recent publication on robust European AI governance, Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal, and Luciano Floridi pursue two main objectives: explaining the governance framework of the AI Act and providing recommendations to ensure its uniform and coordinated execution (Novelli et al., 2024). The following provides a selective overview of the publication, […]

What this rule actually says

The EU AI Act requires companies using AI in certain high-risk ways to set up proper governance—basically, someone needs to be in charge of making sure the AI doesn't break things. This isn't about banning AI; it's about having documented processes, testing procedures, and accountability so regulators can audit how decisions got made. Think: medical AI needs a human review process; hiring tools need bias testing; support chatbots need clear disclosure that they're not human.

Who it applies to

  • If you're selling into the EU (or users there can access your product): this applies. UK, Switzerland, and other non-EU countries have their own rules, but EU rules are the strictest baseline.
  • If your AI makes decisions that significantly affect people's rights (medical diagnoses, hiring/firing, credit decisions, content moderation at scale): you're in scope. A chatbot answering routine support questions is lower risk than an automated hiring screener.
  • If you process personal data (user emails, health records, employment history): GDPR already applies; this adds AI-specific governance on top.
  • If you're training on scraped data without permission: governance requirements are tighter.
  • If you're a solo founder or tiny team: the rule doesn't exempt small companies, but enforcement typically targets higher-risk use cases and larger players first.

What founders need to do

  1. Assess your risk level (1-2 days): Does your AI make consequential decisions about individuals? Does it process sensitive data? If "no" to both, you're likely lower-priority for enforcement. Document this.
  2. Document your AI's design and testing (ongoing, ~1-2 weeks initial): Write down what your model does, how you tested it for bias or errors, and how users can contest wrong outputs. This isn't a legal brief—it's a working record.
  3. Assign accountability (1 day): Designate someone (could be you) responsible for AI governance. Have a process for handling complaints or problems.
  4. Be transparent with users (ongoing): Tell people when they're interacting with AI. If decisions are automated, explain how they work.
  5. Monitor compliance guidance (ongoing): EU regulators are still finalizing detailed rules. Subscribe to updates from relevant bodies (your industry's regulator or the European AI Office) and adjust as guidance lands.
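For the bias-testing part of step 2, it helps to have a concrete, repeatable check you can rerun and log. The AI Act does not prescribe any specific statistical test; the sketch below uses the "four-fifths rule" (adverse impact ratio), a widely used convention from US employment-selection practice, purely as an illustration of what a documented check might look like. The group labels, data, and 0.8 threshold are assumptions for the example, not requirements from the regulation.

```python
# Illustrative bias check: adverse impact ratio ("four-fifths rule").
# This is one possible documented test for step 2, not a mandated method.

def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of candidates in a group who received a positive outcome."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (between 0 and 1)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes for two demographic groups.
group_a = [True, True, False, True]    # 75% selected
group_b = [True, False, False, False]  # 25% selected

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # conventional four-fifths threshold
    print("Below threshold: flag for review and record the finding.")
```

Running a check like this on a schedule and keeping the outputs (date, data slice, ratio, action taken) is exactly the kind of working record regulators can audit later.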

Bottom line

If you're building AI for the EU market in any "serious" use case (medical, hiring, lending, moderation), start documenting your process now—enforcement is coming, and you want evidence you were trying to do this right.