If you’re building, buying, or using AI in your business, you’ve probably seen the phrase “EU AI Act” popping up everywhere.
Even though the UK isn’t in the EU anymore, the EU AI Act can still matter a lot to UK startups and SMEs - especially if you sell into the EU, support EU customers, or your product ends up being used there.
This guide breaks the Act down in plain English, with a practical focus on what you should do now to stay commercially and legally safe as you scale.
What Is The EU AI Act (And Why Should A UK Business Care)?
The EU AI Act is a major piece of EU regulation designed to set rules for how AI systems are developed, placed on the market, put into service, and used.
The big idea is simple: the higher the risk, the stricter the rules.
Does The AI Act Apply To UK Businesses?
Potentially, yes. The AI Act is designed to have extra-territorial reach (meaning it can apply outside the EU in some circumstances), similar in spirit to how EU GDPR affected businesses globally.
You should treat the AI Act as relevant if, for example:
- You sell an AI product or AI-enabled service to customers in the EU (including B2B sales).
- You provide an AI system that is used in the EU, even if you’re based in the UK.
- You run a UK platform and your AI features are accessible by EU users.
- You’re a UK business acting as an EU client’s “AI supplier” (for example, you customise, integrate, or fine-tune AI models that they deploy).
Even if you’re not directly in scope today, the AI Act is already shaping procurement, investor due diligence, and partner expectations. In practice, UK SMEs may be asked to prove AI governance maturity to win deals.
AI Act vs UK AI Rules: Are We Getting A UK Version?
The UK’s approach to AI regulation (so far) is more “principles-based” and spread across regulators, rather than a single AI law like the EU AI Act.
But UK businesses still have existing legal duties that overlap heavily with AI risk, including:
- UK GDPR and the Data Protection Act 2018 (especially if AI touches personal data).
- Consumer protection rules (if AI affects advertising, pricing, refunds, or product claims).
- Equality and discrimination law (if AI influences recruitment, promotions, or customer decisions).
- Contract and negligence risk (if AI outputs cause loss and your terms don’t handle it).
So, it’s not just about “EU compliance” - it’s also about protecting your business from real operational and legal blow-ups.
How The AI Act Classifies AI Risk (In Plain English)
The AI Act is built around a risk-based framework. That matters because your obligations depend on where your AI use-case sits.
1) Prohibited AI (Unacceptable Risk)
Some AI practices are essentially “off-limits” under the AI Act (subject to limited exceptions). While you shouldn’t rely on this list as a complete compliance test, it’s a useful red flag category.
Examples can include AI used for certain manipulative or exploitative purposes, and specific forms of sensitive biometric categorisation or social scoring-style systems.
Business takeaway: If your product roadmap includes behavioural manipulation, biometric profiling, or highly intrusive surveillance-type features, you’ll want advice early - because the compliance risk can become existential.
2) High-Risk AI
“High-risk” is the core category where the AI Act becomes very compliance-heavy.
High-risk systems include AI used in areas like:
- Recruitment and workplace decisions (screening candidates, performance management, termination recommendations).
- Education (assessment, admissions).
- Creditworthiness and access to essential services.
- Critical infrastructure and certain safety components.
- Law enforcement / migration / justice-related contexts (generally less relevant for most SMEs, but important if you sell into these sectors).
Business takeaway: If your AI makes, recommends, or strongly shapes decisions about people’s opportunities (jobs, housing, credit, education), assume “high-risk” might be in play until proven otherwise.
3) Limited-Risk AI (Transparency Obligations)
Some AI is allowed but triggers transparency requirements.
Common examples include where users might reasonably think they’re dealing with a human (for example, certain chatbots), or where content is generated or altered in a way that could mislead (for example, synthetic audio/video or “deepfake” style edits). In these cases, the AI Act leans towards: label it, explain it, and don’t trick people.
Business takeaway: If you use chatbots, AI customer support, AI-generated marketing content, voice cloning, or deepfake-like editing, you’ll want a transparency playbook.
4) Minimal-Risk AI
This is the category where most AI sits - everyday tools like spam filters, basic recommendation engines, and internal productivity AI features (depending on context).
You may have few or no AI-Act-specific obligations here, but “minimal-risk” doesn’t mean “no compliance work”. You still need to manage:
- Data protection and confidentiality risk
- Customer claims and misleading advertising risk
- Security and IP risk
- Contract risk between supplier and customer
And depending on what you’re doing, other AI Act rules can still apply (for example, transparency duties for certain AI-generated content, and separate obligations for general-purpose AI models - see below).
General-Purpose AI (GPAI) And Foundation Models: Why This Matters For SMEs
A key part of the EU AI Act that many UK startups miss is that it includes specific rules for general-purpose AI (GPAI) - meaning AI models that can be used across many different tasks (often called “foundation models”).
This matters because many SMEs don’t “train models from scratch” - they build products on top of third-party GPAI via API, fine-tuning, or integrations. In practice:
- If you develop and place a GPAI model on the EU market, you may have model-provider obligations (including technical documentation, information for downstream providers, and policies to respect EU copyright rules).
- If you deploy or integrate third-party GPAI, you may still need to meet product/system-level obligations (for example, transparency, data protection, security, and - if your use-case is high-risk - the high-risk system requirements).
Business takeaway: If your product relies on a foundation model (even if a vendor provides it), treat “what does the model provider promise, and what do we need to do downstream?” as a core due diligence item - especially for EU-facing deals.
Are You A Provider Or A Deployer? (Your Role Changes Your Duties)
One of the most important (and most confusing) parts of the AI Act is that your obligations depend on your role.
Provider (Roughly: The Builder / Seller)
You’re likely a provider if you develop an AI system, or have one developed, and then place it on the market under your name (including offering it as a SaaS tool).
If you’re a provider of a high-risk system, your obligations can include:
- Risk management and quality management systems
- Technical documentation and record-keeping
- Appropriate data governance (including training/validation/testing data quality)
- Human oversight design
- Accuracy, robustness, and cybersecurity measures
- Conformity assessment steps before placing it on the market
Deployer (Roughly: The User Inside A Business)
You’re likely a deployer if you use an AI system in your operations (for example, using an AI recruitment tool, or an AI system to assess customer eligibility).
Deployers can have duties around:
- Using the system according to instructions
- Ensuring appropriate human oversight in practice (not just “in theory”)
- Monitoring outcomes and responding to issues
- Keeping certain logs/records (depending on the system)
UK Startup Reality Check: Most Businesses Are Both
Many UK startups and SMEs are both:
- You use third-party AI tools internally (deployer), and
- You also ship AI features to customers (provider), or integrate/fine-tune models (which can push you closer to provider obligations).
This is why a quick “we don’t build models, so we’re fine” approach often doesn’t hold up under EU-style compliance scrutiny.
When Does The AI Act Start Applying? (Rollout And Timing)
The AI Act doesn’t switch on overnight. It entered into force in August 2024, and different parts take effect in phases. The transitional rules can be technical, but from a UK startup/SME perspective the key points are:
- Some obligations started early - the bans on prohibited AI practices applied from February 2025.
- Obligations for general-purpose AI (GPAI) models followed from August 2025.
- Most high-risk system requirements apply from August 2026, with some product-related high-risk categories given until 2027.
Business takeaway: If you sell into the EU (or plan to), you’ll usually want to start your “AI Act readiness” work now - because procurement and enterprise customers won’t wait for the final compliance deadline.
What Does Compliance Look Like In Practice For UK Startups And SMEs?
The AI Act can sound abstract. For a smaller business, what matters is: what do you actually need to put in place without slowing your team to a standstill?
Below is a practical, startup-friendly approach.
1) Map Your AI Use-Cases (Before You Map The Law)
Start with a simple inventory. For each AI system, capture:
- What the system does (and what decisions it influences)
- Who uses it (staff, customers, both)
- Where your users/customers are located (UK only, EU, global)
- What data goes in (personal data? sensitive data? confidential business info?)
- What comes out (recommendations, scores, generated content, automated decisions)
- Who supplies it (in-house, vendor, open-source, API)
This step also helps with broader legal compliance like UK GDPR and cybersecurity risk management.
2) Check If Any Use-Case Could Be “High-Risk”
If your AI touches recruitment, HR, credit decisions, essential services, or safety-related functions, treat it as a priority review item.
A very common SME example: using AI to shortlist job candidates or score them. Even if a human “approves” the final decision, the AI may still be influential enough to trigger high-risk obligations.
If you’re using AI at work, it’s also worth aligning policy expectations with staff. A clear Generative AI Use Policy can prevent accidental confidentiality breaches and set rules on what staff can input into AI tools.
3) Build Transparency Into Your Product (And Your Customer Journey)
Transparency is one of the most achievable “quick wins” for SMEs - and it’s often the difference between looking trustworthy vs looking risky.
Practical transparency measures include:
- Labelling AI chat features clearly (for example, “AI assistant” rather than “agent”).
- Explaining the limits: “AI can make mistakes - please verify before acting.”
- Giving a path to a human where appropriate (especially for complaints or sensitive decisions).
- Documenting what your AI does in product documentation and onboarding.
These steps also reduce consumer-law and misrepresentation risk if customers rely on outputs and things go wrong.
4) Get Your Data Protection House In Order
For most SMEs, the fastest way AI creates legal risk is through data - what goes into the system, where it goes, and whether you’re allowed to use it that way.
If your AI involves personal data (customers, users, leads, employees), you’ll likely need to think about:
- Your lawful basis for processing under UK GDPR
- Data minimisation (only using what you need)
- Security measures and access controls
- Vendor due diligence and processing terms
- Whether a DPIA (Data Protection Impact Assessment) is appropriate
It’s hard to do this properly without clear documentation. A tailored Privacy Policy is a good baseline, but AI-specific workflows may also need additional notices or internal assessments.
And if you’re sharing personal data with AI vendors, a Data Processing Agreement is often a key part of showing you’ve handled supplier risk properly.
5) Lock Down Confidentiality And Security (Because AI Tools Don’t “Forget”)
SMEs often move fast, and team members will naturally want to paste content into AI tools to save time. The problem is that confidential business information and client data can end up being processed in ways you didn’t intend.
If your team uses AI tools day-to-day, it’s worth setting boundaries through an Acceptable Use Policy (for example: what tools are approved, what must never be input, and how outputs can be used).
It’s also sensible to check whether your AI use is actually confidential in practice. The answer can depend on settings, contracts, and how the tool provider processes data - which is why businesses often ask us whether ChatGPT is confidential. The short answer: it depends on the provider’s terms and your own internal rules, so check both.
6) Update Your Customer Terms And Product Contracts
If you provide AI outputs to customers, you should expect questions like:
- “Can we rely on these outputs?”
- “Who owns the outputs?”
- “What happens if the output is wrong and we lose money?”
- “How do you handle bias, errors, and misuse?”
These are commercial questions, but they quickly become legal questions if your contract doesn’t clearly allocate risk.
Your website and platform terms are often where you set those boundaries. Depending on your model, you may need robust Website Terms And Conditions or platform terms that address acceptable use, disclaimers, and limitations of liability (tailored to your actual product and customers).
A Simple AI Act Readiness Checklist For SMEs
If you’re time-poor (and most founders are), here’s a practical checklist you can work through over a couple of focused sessions.
Step 1: Scope And Risk
- Create an AI inventory (tools you use + features you ship).
- Identify EU touchpoints (customers, users, distribution partners).
- Flag any “people-impacting” uses (HR, recruitment, eligibility, pricing, credit, education).
Step 2: Governance And Documentation
- Assign an internal owner for AI compliance and risk.
- Document what your AI is for, what it isn’t for, and its limitations.
- Put approval steps around new AI features (so you don’t ship risk accidentally).
Step 3: Data And Security
- Confirm whether personal data is processed (customer, user, employee data).
- Review vendor terms and processing arrangements.
- Implement rules for staff AI use and protect confidential information.
Step 4: Customer-Facing Transparency
- Label AI-generated content and AI interactions where required/appropriate.
- Provide escalation to a human for sensitive issues.
- Ensure product claims match reality (avoid “magic AI” marketing statements).
Step 5: Contracts
- Update customer terms for AI disclaimers, liability, and acceptable use.
- Ensure supplier agreements cover security, service levels, and data processing.
- Make sure IP and confidentiality clauses reflect how AI outputs are created and used.
Done properly, this isn’t just a compliance exercise - it’s part of building a business that enterprise customers and investors can trust.
Key Takeaways
- The EU AI Act is an EU law, but it can still affect UK startups and SMEs if your AI system (or, in some cases, its outputs) is placed on the EU market or used in the EU.
- The AI Act is risk-based: high-risk AI (especially HR, recruitment, and access-to-services tools) brings heavier compliance duties.
- Your responsibilities depend on whether you’re a provider (building/selling) or a deployer (using AI in your operations) - and many SMEs are both.
- The Act also introduces a separate regime for general-purpose AI (GPAI) and foundation models, which matters if you train models, fine-tune them, or build EU-facing products on top of third-party models.
- Practical readiness usually means: mapping AI use-cases, improving transparency, tightening data protection and security, and updating customer and supplier contracts.
- Even where the AI Act doesn’t apply, UK businesses still need to manage overlapping obligations under UK GDPR, consumer law, equality law, and contract law.
If you’d like help reviewing how the AI Act affects your business model, or you want your policies, privacy documents, and customer terms set up properly from day one, you can reach us at 08081347754 or team@sprintlaw.co.uk for a free, no-obligation chat.


