Modifying AI Under the EU AI Act: Lessons from Practice on Classification and Compliance
This is a guest post written by legal compliance professionals Øystein Endal, Andrea Vcric, Sidsel Nag, Nick Malter and Daylan Araz (see the section about the authors at the end), drawing on their experience running, or consulting for, businesses that integrate AI. For any questions or suggestions, please contact Nick Malter at [email protected]. Disclaimer: Please note that the […]
What this rule actually says
The EU AI Act lets you modify (update, fine-tune, or customize) an AI system you're using—but you have to keep it compliant with the Act's rules. If you tweak a hiring assistant to work better for your clients, or retrain a medical scribe on new data, that modified system still needs to meet the same safety and transparency requirements as the original. Basically: don't let customization become a loophole for dodging regulations.
Who it applies to
- Geography: You're covered if your AI product reaches users in the EU, even if you're based elsewhere.
- Use cases that trigger this: Medical AI (like diagnostic or scribe tools), hiring/employment decisions, credit/loan decisions, law enforcement tools, and systems that could affect fundamental rights.
- Customization that counts: Fine-tuning on new datasets, retraining, prompt engineering at scale, adding new features, or integrating third-party models into your product.
- Data scope: Personal data used for training/customization is in scope. Anonymized or synthetic data is generally lower-risk.
- What's probably safe: Running a pre-built commercial model as-is without modification, or using it only for internal non-decision-making tasks.
What founders need to do
- Audit what you're actually modifying (1-2 days). Document every change you make to any AI model—retraining, prompt templates, output filtering. Know whether you're "using" or "modifying."
- Check your AI's risk category (2-3 days). High-risk uses (hiring, medical, credit decisions) require more documentation and testing. Low-risk uses (general chatbots, content generation) require less. Be honest about what your tool actually does.
- Document modifications like you'd document code (ongoing, ~1 hour per release). Keep records of: what data you trained on, how you tested it, what risks you identified, and what safeguards you added. This is your legal evidence you tried to stay compliant.
- Test for bias and failure modes (3-5 days initial, then spot-checks). Run your modified AI on diverse test cases. If you're screening job applicants or analyzing medical images, test that it doesn't discriminate or hallucinate.
- Be transparent with users (1-2 days). Tell customers what you modified, provide clear documentation, and explain how the AI works in plain terms. Don't hide customization.
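The documentation step above can be sketched as a machine-readable change record kept alongside each release. Everything here is illustrative: the `ModificationRecord` structure and its field names are our own invention, not a format prescribed by the AI Act — they simply mirror the evidence the steps above suggest keeping (training data, tests, risks, safeguards).

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ModificationRecord:
    """One entry in a hypothetical AI-modification changelog.

    Fields mirror the evidence suggested above: what data you
    trained on, how you tested, what risks you found, and what
    safeguards you added.
    """
    release: str
    released_on: str                 # ISO date of the release
    change_type: str                 # e.g. "fine-tune", "prompt-template", "output-filter"
    description: str
    training_data: list = field(default_factory=list)
    tests_run: list = field(default_factory=list)
    risks_identified: list = field(default_factory=list)
    safeguards_added: list = field(default_factory=list)

record = ModificationRecord(
    release="v1.4.0",
    released_on=str(date(2025, 3, 1)),
    change_type="fine-tune",
    description="Retrained scribe model on anonymized clinic notes",
    training_data=["clinic-notes-2024-anonymized"],
    tests_run=["hallucination spot-check", "terminology accuracy"],
    risks_identified=["drug-name confusion"],
    safeguards_added=["drug-name allowlist filter"],
)

# Serialize to JSON so every release leaves an auditable artifact
# you can hand to a regulator or customer on request.
print(json.dumps(asdict(record), indent=2))
```

Committing one such record per release to version control gives you exactly the "~1 hour per release" paper trail described above.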
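For the bias-testing step, one concrete check is comparing positive-outcome rates across groups in your test set. A minimal sketch, assuming simple (group, selected) outcome pairs; the 0.8 threshold mentioned in the comment is the US "four-fifths" employment heuristic, not an EU AI Act requirement — it's shown only as one practical trigger for a closer look.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Per-group positive-outcome rates.

    `outcomes` is a list of (group, selected) pairs, where
    `selected` is True if the model advanced/approved that case.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values well below 1.0 flag a disparity worth investigating;
    the US four-fifths rule uses 0.8 as a rough threshold, but
    that is a heuristic, not an EU AI Act rule.
    """
    return min(rates.values()) / max(rates.values())

# Toy example: screening outcomes from a modified hiring model.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)
```

Running this on real screening output (with your own group labels) turns the "spot-checks" above into a number you can track release over release.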
Bottom line
If you're modifying AI systems for EU users in high-stakes domains (medical, hiring, lending), act now and document your modifications. If you're using commercial models unchanged for low-stakes tasks (customer support, content), monitor for regulatory updates, but you're probably fine for now.