Alex is Sprintlaw’s co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
Artificial intelligence has quickly become part of everyday life for UK businesses. It sits behind recruitment tools, powers customer-service chatbots, shapes online advertising, highlights suspicious transactions and quietly improves back-office efficiency. Because much of this technology isn’t labelled as “AI”, many organisations underestimate how deeply it runs through their operations.
The EU’s new Artificial Intelligence Act is the first attempt anywhere in the world to regulate AI comprehensively. Although the UK is no longer part of the EU, the law still affects British businesses whenever their AI systems or the outputs they generate are used inside the EU. A UK business can fall within scope even without an EU office, presence or subsidiary. What matters is where the AI is used and where its effects are felt.
This guide explains the EU AI Act in clear, practical terms, focusing on what UK SMEs need to know and how they can prepare without unnecessary complexity.
What the EU AI Act Is Trying to Achieve
The legislation aims to ensure AI is used safely, transparently and in a way that respects fundamental rights. To achieve this without stifling innovation, the Act takes a risk-based approach. It does not treat all AI systems equally. Instead, the obligations placed on a business depend on how likely an AI system is to cause harm and in what context.
Most AI used by UK SMEs will fall into low-risk categories. However, AI used in recruitment, credit decisions, education, healthcare and essential services may fall into regulated categories with additional requirements.
What Counts as an AI System?
The Act defines AI in a specific and modern way. It covers machine-based systems that infer from data how to generate outputs such as predictions, decisions, recommendations or content. These outputs must be capable of influencing a physical or digital environment.
This definition captures systems built using machine learning, logic- or knowledge-based approaches, and statistical or optimisation techniques. It does not cover simple, transparent automation like basic workflow rules. But it does include many tools businesses use every day: applicant-ranking engines, predictive analytics, generative content tools, fraud indicators and document-classification systems.
Understanding which tools qualify as AI is the first step, because the Act’s obligations only apply if a tool meets this definition.
Does the EU AI Act Apply to UK Businesses?
The short answer is yes, if your AI system is used within the EU or its outputs influence decisions there. Geography is not the determining factor; effect is.
A UK business providing AI-powered services to European clients, or allowing EU users to interact with AI tools on its platform, could fall within scope. Even the outputs of an AI system created or run in the UK can bring the business under the Act if those outputs materially affect circumstances in the EU.
In practice, this means a London recruitment consultancy using AI to shortlist candidates for a client in France, or a Manchester fintech offering automated affordability analysis to a lender in Spain, may need to comply. A creative agency using AI to generate synthetic marketing content for EU audiences may also have transparency duties.
Simply having a website accessible in Europe does not trigger the Act. Actual use of AI by EU users or reliance on AI-generated outputs within the EU is what matters.
How the EU AI Act Categorises Risk
The Act creates four risk categories: unacceptable, high, limited and minimal. Understanding them helps clarify your obligations.
| Risk Level | Meaning | What It Requires |
| --- | --- | --- |
| Unacceptable Risk | AI uses that pose an unacceptable threat to people’s rights, safety or democratic values. These systems are banned entirely. | Cannot be sold, provided or used in the EU. Includes practices such as social scoring and certain forms of biometric surveillance. |
| High Risk | AI used in sensitive areas where decisions can significantly affect individuals (such as hiring, credit, education or essential services). | Strict obligations: documented risk management, high-quality data, human oversight, technical documentation, monitoring and (in many cases) conformity assessments. |
| Limited Risk | AI where the main concern is that users may not realise AI is involved. | Transparency duties. Users must be told when they are interacting with AI or receiving AI-generated or AI-altered content. |
| Minimal Risk | Everyday AI tools with very low potential for harm (such as spam filters and simple recommendations). | No additional risk-specific duties. Businesses still need basic responsible-use practices and staff AI literacy. |
The most serious category, unacceptable risk, covers AI systems that the EU considers too harmful to allow at all. Examples include social-scoring systems used by public authorities, tools that manipulate vulnerable individuals, emotion-recognition technology in schools or workplaces and untargeted scraping of facial images to build biometric databases. UK businesses rarely operate in these areas directly, but they can inadvertently rely on third-party tools containing prohibited features, which is why reviewing vendors is important.
Most of the practical obligations sit in the high-risk category. High-risk systems are not banned but are tightly regulated because they can affect people’s rights, access to essential services or personal safety. The law sets out specific contexts where systems are presumed high-risk, including recruitment and worker management, education and exam monitoring, creditworthiness and affordability assessments, certain healthcare applications, and systems influencing public-sector decisions such as migration or justice. Not every tool used in these areas is automatically high-risk; the exact function and purpose matter.
Here it is useful to distinguish between providers and deployers. Providers develop or substantially modify an AI system. Deployers use one. The heaviest obligations apply to providers, who must ensure data quality, maintain detailed documentation, perform ongoing monitoring and build human-oversight features into the system. Deployers, by contrast, must use the system correctly, supervise it appropriately, supply lawful and suitable data and give users the transparency and information required by the provider.
Limited-risk systems require transparency rather than heavy controls. A UK retailer using an AI chatbot for EU customers must tell users they are interacting with AI. A creative agency producing synthetic imagery for EU clients may need to label it where it could be mistaken for authentic content. The aim is not to ban such tools but to ensure people know when AI is involved.
Finally, minimal-risk systems include everyday AI such as spam filters, writing assistants or simple recommendation tools. These systems do not carry additional risk-specific duties. However, all organisations - even those using only minimal-risk systems - must meet the Act’s general obligations, such as ensuring staff have adequate AI literacy and using systems responsibly.
Providers, Deployers and the AI Office
Most UK SMEs will be deployers. They use AI developed by others rather than building or substantially modifying it themselves. But a business becomes a provider if it fine-tunes a model, significantly alters an existing system or integrates AI into a product offered to EU users in a way that changes the system’s intended purpose.
The EU has also established the AI Office, a new body responsible for overseeing general-purpose AI models like GPT, Claude and Gemini and coordinating enforcement across the EU. For many UK businesses, the AI Office will set the tone for how AI is expected to be used safely in global markets.
General-Purpose AI and UK Businesses
General-purpose AI (GPAI) refers to large, flexible models that can be used for multiple tasks. The strongest obligations fall on the developers of these models, but UK businesses may still take on responsibilities if they fine-tune a GPAI model, build a high-risk system on top of one or offer access to a GPAI-enabled service in a manner that changes its risk profile.
Understanding whether your use of GPAI crosses the line from deployment to provision is important, especially if EU customers rely on it.
Timeline: When the Rules Apply
- The EU AI Act came into force in August 2024.
- From February 2025, prohibited practices and certain AI-literacy duties apply.
- From August 2025, obligations for general-purpose AI models begin.
- The more detailed requirements for high-risk systems come into effect gradually from 2026 to 2027.
This phased timeline allows UK businesses time to prepare.
What UK SMEs Can Do Next
A practical first step is simply determining whether any of your AI systems or outputs reach the EU. Once you know this, mapping your AI tools and understanding what they do becomes far easier. From there, assessing the level of human oversight required, strengthening your data governance practices and ensuring transparency for users will help you meet the Act’s expectations.
Many EU clients will increasingly request contractual assurances about AI compliance. Updating your agreements and internal processes early can prevent problems later and even give you a competitive advantage in the European market.
Final Thoughts
The EU AI Act is reshaping global expectations around responsible AI use. For UK businesses, the biggest question is not whether the UK has left the EU, but whether your AI touches EU markets. For many SMEs, the answer is yes.
The reassuring news is that most tools fall into low-risk categories and carry only light obligations. Where more stringent requirements apply - particularly in areas like hiring, finance or education - early preparation goes a long way. Mapping your AI systems, improving oversight and updating your contracts will put your organisation in a strong position to navigate the EU’s new AI landscape. When questions arise, targeted legal support can help you approach compliance efficiently and confidently.
If you would like a consultation on legal compliance with the EU AI Act, you can reach us at 08081347754 or team@sprintlaw.co.uk for a free, no-obligation chat.