RegImpact
EU AI Act · Published 7/3/2024

An Introduction to the Code of Practice for General-Purpose AI

Last updated: 14 August 2025. As AI Act implementation gradually unfolds, it is important to understand the different enforcement mechanisms included in the Regulation. One of the most important is the general-purpose AI Code of Practice, developed by the AI Office together with a wide range of stakeholders. This summary, detailing the Code […]

What this rule actually says

The EU AI Act imposes transparency and safety obligations on makers of general-purpose AI models (like large language models). The Code of Practice is a voluntary way to demonstrate you meet them: it sets expectations around transparency, safety testing, and responsible deployment. In short, it codifies good practices that reduce risks from misuse. Signing on isn't a strict legal mandate with jail time; it's a set of commitments that regulators and the public expect responsible AI builders to follow.

Who it applies to

  • You build a general-purpose AI model (like a foundation model or LLM) that you offer to others—this applies to you, even if you're wrapping it for a specific use case.
  • You only use existing models from others (OpenAI, Anthropic, open-source) to build your medical scribe or hiring tool—this likely doesn't apply to you directly, though your provider's compliance matters.
  • You operate in the EU or serve EU customers—this regulation applies if you're selling to Europe or your users include EU residents.
  • Your model is "general-purpose" (can do multiple tasks, not purpose-built for one narrow job)—if you fine-tuned an existing model for just medical transcription, you're in a grayer zone; custom, single-task AI is less likely to trigger this.
  • User data scope: The Code covers how you document risks, test for misuse, and communicate limitations—not data collection per se. Privacy rules (GDPR) are separate.
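The applicability checklist above can be sketched as a rough triage function. The field names and the simple boolean logic here are illustrative assumptions for self-assessment, not the Act's actual legal test:

```python
from dataclasses import dataclass

@dataclass
class Product:
    """Simplified profile of an AI product (illustrative fields only)."""
    builds_own_model: bool  # you train/offer a model, not just wrap one
    general_purpose: bool   # multi-task, not purpose-built for one narrow job
    serves_eu: bool         # EU operations or EU users

def code_likely_applies(p: Product) -> bool:
    """Rough triage: does the GPAI Code of Practice likely target you?"""
    return p.builds_own_model and p.general_purpose and p.serves_eu

# A startup wrapping a third-party model for EU customers:
wrapper = Product(builds_own_model=False, general_purpose=True, serves_eu=True)
print(code_likely_applies(wrapper))  # False - monitor your provider instead
```

If you land in the gray zone (say, a heavily fine-tuned single-task model), treat the answer as "maybe" and watch AI Office guidance rather than relying on a yes/no check like this.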

What founders need to do

  1. Audit what you actually built (1 day). Do you own a general-purpose model, or are you a user of one? If you're using ChatGPT or Llama 2 and just wrapping it for customers, you're not the Code's primary target.
  2. Document your safety testing and known risks (3–5 days). Write down what you've tested for (bias, hallucinations, misuse scenarios), what gaps you know exist, and how you're mitigating them. This is the core of the Code.
  3. Create a transparency statement for customers (2–3 days). Tell users what your model can and can't do, what its limitations are, and how it should (and shouldn't) be used.
  4. Set up a process for handling misuse reports (1–2 days, then ongoing). Decide how you'll monitor for abuse and respond when users try to misuse your tool.
  5. Monitor EU regulatory updates (ongoing, ~2 hours/quarter). The AI Act is still rolling out; follow EU AI Office guidance to catch new requirements early.
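The documentation and transparency steps above amount to maintaining one structured record you can show customers and regulators. A minimal sketch, using a hypothetical model-card dictionary (the field names are illustrative, not mandated by the Code):

```python
import json

# Hypothetical transparency/model-card record; fields are illustrative.
model_card = {
    "model_name": "acme-scribe-v1",
    "intended_use": "Medical dictation transcription for clinicians",
    "out_of_scope": ["diagnosis", "treatment recommendations"],
    "known_risks": {
        "hallucination": "May invent drug names; output needs human review",
        "bias": "Lower accuracy on some accents in internal tests",
    },
    "mitigations": ["human-in-the-loop review", "accent-diverse eval set"],
    "misuse_contact": "abuse@example.com",
    "last_reviewed": "2025-08-14",
}

# Publish this alongside the product so users see the limits up front.
print(json.dumps(model_card, indent=2))
```

Keeping the record as structured data (rather than a PDF) makes it easy to version-control, diff between releases, and render into a customer-facing page.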

Bottom line

If you're building with someone else's model, monitor but don't panic. If you're releasing your own model into the EU, act now to document and communicate your safety practices—regulators are watching, and the Code is the bar they're using.