- Why the business is often the first party exposed
- What “legally responsible” actually means
- When an AI mistake becomes a consumer law problem
- When an AI mistake becomes a data protection issue
- When an AI mistake becomes a discrimination issue
- The software provider may still matter - but usually behind the scenes
- What should a business do if something has already gone wrong?
- What should a business put in place before anything goes wrong?
- AI may change the process, but not the need for oversight
AI is now part of everyday business. Small businesses are using it to answer customer enquiries, automate admin, draft communications, support internal workflows and speed up decision-making. Used carefully, it can save time and reduce costs.
But when an AI system gets something wrong, the legal risk does not simply disappear into the software.
A chatbot might tell a customer the wrong thing about a refund. An automated billing process might generate the wrong invoice. A staff member might paste personal data into a public AI tool without understanding where that information goes. An AI-assisted screening process might produce an unfair outcome for a job applicant or customer.
In each of those situations, the legal question is not just whether the technology failed. The real question is what legal obligation may have been breached - and who is exposed when that happens.
In many cases, that will be the business using the tool. In the UK, regulators increasingly approach AI in that way: not as a separate legal actor, but as part of a business’s systems, governance and decision-making. The ICO’s guidance makes clear that UK data protection law applies where AI systems process personal data, and the CMA’s current unfair commercial practices guidance under the Digital Markets, Competition and Consumers Act 2024 reflects the same approach in the consumer context.
Why the business is often the first party exposed
The simplest reason is also the most important one: the customer usually deals with the business, not the AI provider.
So if your business chooses to use AI in customer service, marketing, onboarding, billing, recruitment or internal operations, the law will often focus on the role your business played in deploying and relying on that system. That does not mean the business is automatically liable every time an AI output is wrong. But it does mean the business is often the first party exposed where the output affects a customer’s rights, personal data, or access to a service.
That distinction matters. Saying “the software made the mistake” may explain what happened, but it does not usually end the legal analysis. The next question is whether the business made a misleading representation, mishandled personal data, made an unfair decision, or failed to put proper oversight around a system it chose to use. ICO guidance on AI and data protection emphasises exactly that kind of organisational responsibility around fairness, accountability, accuracy and governance.
What “legally responsible” actually means
One reason this topic can be confusing is that “legal responsibility” can mean several different things.
Sometimes it means a customer complaint, refund demand or dispute. Sometimes it means regulatory scrutiny under consumer protection or data protection law. Sometimes it means the business has to assess whether a personal data breach has occurred. And sometimes it means a contractual dispute with the software provider over who ultimately bears the loss.
Not every AI mistake creates the same kind of liability. A clumsy or inaccurate response might simply be a service issue with no real legal consequence. But the position changes when the output causes financial loss, affects legal rights, involves personal data, or contributes to an unfair or discriminatory outcome.
That is why AI liability is rarely a single rule. A business is often the first party exposed, but whether it is actually liable depends on the facts, the harm caused, and the legal framework involved.
When an AI mistake becomes a consumer law problem
A good example is customer-facing information.
If an AI chatbot tells a customer they are not entitled to a refund when the law says they may be, the issue is not just that the bot generated the wrong answer. The legal issue may be that the business communicated misleading information about the customer’s rights.
In the UK, the unfair commercial practices regime is now set out in the Digital Markets, Competition and Consumers Act 2024, which applies to commercial practices occurring on or after 6 April 2025. CMA guidance confirms that those provisions replace and update the old Consumer Protection from Unfair Trading Regulations 2008. Under that regime, businesses must not engage in misleading actions or omissions that affect consumers’ transactional decisions. So if AI-generated responses misstate pricing, cancellation rights, service features, refund rights or subscription terms, those statements may still be treated as representations made by the business itself.
A similar point applies under the Consumer Rights Act 2015, which governs consumers’ statutory rights in relation to goods, services and digital content. If your AI tools are effectively speaking on behalf of the business, the practical question is whether the business can stand behind what is being said.
When an AI mistake becomes a data protection issue
Data protection is another major risk area, particularly where employees use off-the-shelf generative AI tools casually.
The UK GDPR and the Data Protection Act 2018 apply whenever personal data is processed, including where AI systems are used to analyse, generate or handle that data. The ICO’s AI guidance makes clear that organisations remain responsible for complying with core data protection principles such as fairness, transparency, accuracy and security when using AI.
That means if staff paste personal data into a public AI platform, or if an AI system generates or infers personal data that the organisation then relies on, the issue is not merely a technical mistake. It may raise compliance questions under UK data protection law.
Where AI is used to make decisions about individuals, businesses may also need to consider the UK GDPR rules on automated decision-making and profiling. ICO guidance explains that a decision can still count as solely automated where human involvement is only a token step: someone must genuinely weigh up and interpret the result before acting on it, so superficial sign-off is unlikely to be enough.
If an AI-related incident involves unauthorised disclosure of, loss of, or access to personal data, the organisation may also need to assess whether a personal data breach has occurred. The ICO says a breach must be reported where it is likely to result in a risk to people’s rights and freedoms, and that the report should be made without undue delay and, where feasible, within 72 hours of the organisation becoming aware of it. If the risk is high, affected individuals may also need to be told without undue delay.
That is one of the key practical points many businesses overlook: data protection risk is not just about cybersecurity. It also depends on how staff use AI tools, what data is entered into them, and whether the organisation understands how that data is stored, reused or acted on.
When an AI mistake becomes a discrimination issue
Some AI risks are less obvious until you look at the decisions being made.
If AI is used to screen job applicants, rank customers, detect fraud, triage complaints or assess eligibility for services, the legal concern may be less about technical accuracy and more about fairness.
In the UK, the Equality Act 2010 can be engaged if automated tools contribute to discriminatory outcomes. The EHRC has made clear that AI is part of its broader equality regulatory work, and its published guidance on AI and equality highlights the risk that AI systems can reproduce bias if they are not carefully designed and monitored.
For a small business, that means the risk is not only in what the AI says, but also in what the AI does. The more a system influences decisions about people, the more important human oversight, testing and governance become.
The software provider may still matter - but usually behind the scenes
None of this means the software vendor is irrelevant. It simply means the vendor’s role is usually a second question, not the first one.
From the customer’s perspective, the relevant relationship is often with your business. From your perspective, the next question may be whether your contract with the AI provider allows you to recover any losses.
That is where liability caps, warranties, indemnities, data-use clauses, service levels and exclusions become important. Many standard software terms are drafted to protect the provider and may place responsibility for reviewing outputs on the business using the system.
So there are often two separate legal discussions happening at once: one involving the business and the customer or regulator, and another involving the business and the technology provider. Confusing those two conversations is one of the easiest ways to misunderstand AI risk.
What should a business do if something has already gone wrong?
If an AI-related issue has already occurred, the first step is to understand what kind of legal issue it may have created.
Was a customer misled about their rights or the service being provided? Was personal data disclosed inappropriately? Did the AI output influence a hiring or service-access decision? Once the issue is identified, the response becomes more practical.
The business may need to correct the statement, review complaint handling, assess whether customer remedies may apply, investigate a potential personal data breach, or review the supplier contract to understand whether there is any recourse against the AI vendor. Where personal data is involved, the business may also need to consider whether the incident triggers reporting obligations to the ICO and notification obligations to affected individuals.
That is often the point at which legal advice becomes especially useful, because the right next step depends on which legal framework has actually been triggered.
What should a business put in place before anything goes wrong?
The most effective legal protection usually comes before the problem occurs.
If your business is using AI in customer service, internal operations or staff workflows, it is worth reviewing whether your legal documents and processes reflect that reality. In many cases they do not. Privacy notices may have been written before the business started using generative AI. Terms and conditions may assume all customer communications are handled manually. Staff may experiment with AI tools without any clear internal guidance. Supplier terms may have been accepted online without any real legal review.
Addressing those gaps can significantly reduce legal risk. That may involve reviewing customer-facing terms and conditions, checking supplier contracts with AI vendors, updating privacy documentation, preparing an internal AI use policy for staff, and identifying where human review should be built into important workflows.
These are familiar legal and risk-management tasks, but the increasing use of AI makes them more urgent.
AI may change the process, but not the need for oversight
One of the biggest misconceptions about AI in business is that using sophisticated technology shifts responsibility somewhere else. In practice, the opposite is often true.
The more a business relies on AI for communication, data handling and decision-making, the more important it becomes to understand the legal consequences of that reliance.
That does not mean a business will be liable every time an AI output is imperfect. But where an AI system creates misleading information, mishandles personal data, contributes to a discriminatory outcome or causes loss, the organisation using the system will often be the first place customers, regulators or courts look.
The practical takeaway is simple: if your business is using AI, the legal question is not just whether the tool works. It is whether your contracts, data protection practices, internal policies and oversight processes are ready for what happens when it does not.
If you would like a consultation on the legal side of using AI in your business, you can reach us at 08081347754 or team@sprintlaw.co.uk for a free, no-obligation chat.


