Alex is Sprintlaw’s co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
Using tools like ChatGPT can boost productivity across your business - from drafting emails to brainstorming marketing copy and summarising documents.
But if your team is pasting real customer, employee or supplier information into prompts, you’re processing personal data. That means UK data protection rules apply, and you’ll need a clear, compliant ChatGPT privacy policy and internal controls from day one.
In this guide, we’ll walk you through the legal requirements, what your ChatGPT privacy policy should say in plain English, and the practical steps to roll out AI safely at work.
Why Your Business Needs A ChatGPT Privacy Policy
Even if your business already has a general Privacy Policy, the way AI models handle data is different enough that it’s wise to address it specifically.
A ChatGPT privacy policy (either as a dedicated section in your main policy or a standalone notice) helps you:
- Explain when and why you use ChatGPT (or similar large language models) in your operations.
- Be transparent about the types of personal data that may appear in prompts or outputs, and how you minimise that data.
- Set out your lawful basis and retention practices for prompts, outputs and logs.
- Address international data transfers and vendor relationships with OpenAI (or other AI providers).
- Tell people how they can exercise their data protection rights in this context (access, deletion, objection, etc.).
Transparency is a legal requirement under UK GDPR, but it’s also good business. Clear privacy information builds trust with customers and partners - especially when you’re adopting new technology.
If you don’t have a fit-for-purpose notice yet, start by updating your core Privacy Policy and then add an AI-specific addendum you can link across your website and onboarding materials.
Which UK Privacy Laws Apply To ChatGPT Use?
When you use ChatGPT in your business, you need to consider these core frameworks:
- UK GDPR and the Data Protection Act 2018: These set the rules for processing personal data - including fairness, transparency, purpose limitation, data minimisation, security, retention, and individuals’ rights.
- PECR: If you deploy AI features on your site or app that set cookies or track users, the Privacy and Electronic Communications Regulations (PECR) apply to cookies/consent. Make sure your cookie banner and controls are compliant.
- Confidentiality and contract: Client confidentiality obligations and NDAs still apply. You must ensure staff do not share confidential or commercially sensitive information in prompts unless you have strong safeguards in place.
You should also map roles under UK GDPR. Often your business will be the controller (you decide why and how personal data goes into prompts) and the AI vendor will be a processor for certain services (especially via API/enterprise features). In some contexts, the vendor may act as a separate controller. Your privacy policy should describe this at a high level and your contracts with the vendor should reflect the actual roles.
For a broader overview of practical controls, our guide on ChatGPT GDPR steps for UK companies is a helpful companion to this article.
What To Include In A ChatGPT Privacy Policy
Your ChatGPT privacy wording should be written for humans, not robots. Aim for clear, plain-English statements that align with your actual practices. At a minimum, cover the following areas.
1) The AI Tools You Use And Your Purposes
Briefly name the AI models or providers (e.g. “we use large language models such as OpenAI’s ChatGPT”) and explain typical use cases. For example:
- Drafting routine business communications and internal documentation.
- Brainstorming marketing ideas and outlines (without real customer data).
- Summarising long documents for internal review.
- Providing first-draft code snippets or knowledge-base articles.
Be explicit if staff are instructed not to input personal data into prompts. If there are limited exceptions (e.g. redacted case summaries for internal training), explain them and your safeguards.
2) The Types Of Personal Data In Scope
Explain what personal data may be processed in this context. Keep this tight - ideally your internal policy prohibits personal data in prompts wherever possible. If personal data could occur, describe it (e.g. names, contact details, order references, free-text enquiry content) and state that you minimise and redact it wherever feasible.
3) Your Lawful Basis
Under UK GDPR you need a lawful basis. Common options include:
- Legitimate interests for back-office efficiency (balanced against individuals’ rights).
- Contract if processing is necessary to deliver a service someone has requested (used cautiously).
- Consent if you plan to use personal data in prompts for optional features (ensure it’s specific and withdrawable).
Your policy should name the basis you rely on and briefly justify it. If you rely on legitimate interests, note any measures you take to minimise risks (e.g. redaction, strict access controls).
4) Special Category Data And Children
Make it clear that staff must not input special category data (health, biometrics, beliefs, etc.) or children’s data into ChatGPT unless there is a documented lawful basis and additional safeguards. If your business doesn’t handle that data, say so.
5) International Transfers
AI vendors may process data outside the UK. Your policy should state whether data may be transferred internationally and that you use approved transfer mechanisms (such as the UK International Data Transfer Addendum or the UK Extension to the EU-US Data Privacy Framework, where applicable).
6) Retention Practices
Tell people how long prompts, outputs and related logs are kept and why. If you have disabled training or prompt retention with your vendor, say so. If you extract outputs into your systems (e.g. CRM notes), apply your normal retention schedule and reference it.
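As an illustration, a retention sweep over exported prompt logs might look like the sketch below. The directory layout, the `.log` file extension and the 30-day period are assumptions for the example, not recommendations - set the period from your own retention schedule.

```python
import time
from pathlib import Path

RETENTION_DAYS = 30  # example period - take this from your retention schedule

def purge_old_logs(log_dir: str, retention_days: int = RETENTION_DAYS) -> list[str]:
    """Delete prompt-log files whose last modification is past the retention period."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

A scheduled job like this only covers logs in your own systems; retention of prompts held by the vendor is controlled through the vendor's own settings and contract terms.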
7) Security Measures
Set out high-level safeguards, for example:
- Access limited to trained staff only and use of enterprise accounts.
- Redaction and minimisation of personal data in prompts.
- Logging and audit trails for AI usage.
- Vendor due diligence and contractual protections.
It’s also wise to adopt an internal Generative AI Use Policy so your team has clear, practical do’s and don’ts.
8) Sharing And Vendors
Explain that personal data may be shared with AI vendors who help deliver your services, and that those vendors are bound by data protection obligations. If your use cases involve passing data between systems (e.g. CRM to an AI assistant), also reference any integrators or processors in your data map.
Where appropriate, ensure your vendor contracts include a robust Data Processing Agreement and, for two independent controllers exchanging data, a Data Sharing Agreement.
9) Individuals’ Rights
Reaffirm how people can exercise their rights (access, correction, erasure, restriction, objection and portability) in the context of AI. Make your process easy to follow and link to your contact route for requests.
Having a practical process and templates for a Subject Access Request will save you time when requests arrive.
10) Contact, Complaints And DPO Details
Include your contact details for privacy requests and complaints, whether you have a DPO, and how to contact the ICO if someone is unhappy with your response.
Using ChatGPT At Work: Policies, Prompts And Training
Your privacy notice is only one piece of the puzzle. Day-to-day compliance depends on how your team actually uses AI. The following internal controls are essential.
Set Clear Rules For Prompts
- No personal data in prompts unless there is a specific, documented reason and lawful basis.
- Redact or anonymise wherever possible (initials instead of names, remove emails/phone numbers, use generic descriptors).
- Never paste confidential or commercially sensitive information (unreleased IP, pricing, legal advice, API keys) into public AI tools.
- Use enterprise accounts with disabled training/retention settings where available.
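The redaction rule above can be enforced as a simple pre-processing step before any text reaches an AI tool. This is a minimal sketch only - the regex patterns and the `redact` helper are illustrative assumptions, and real deployments should use a dedicated PII-detection tool plus human review rather than regexes alone.

```python
import re

# Illustrative patterns for common UK personal-data formats - not exhaustive.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\b(?:\+44\s?|0)\d(?:[\s-]?\d){8,9}\b"),
    "[CARD]": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace common personal-data patterns with placeholders before prompting."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com (tel 07700 900123)."
print(redact(prompt))  # → Summarise the complaint from [EMAIL] (tel [PHONE]).
```

Routing staff prompts through a wrapper like this gives you a single enforcement point, and the placeholders leave an audit trail showing that personal data was stripped.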
Adopt An Internal AI Policy And Training
Give staff a concise, practical playbook and train them on it. A tailored Generative AI Use Policy should cover approved tools, permitted use cases, prompt hygiene, security, bias and quality checks, and escalation paths for incidents.
It also helps to back this up with an Acceptable Use Policy for systems generally, so employees know the boundaries of your tech stack.
Decide On Output Ownership And Quality Checks
Make it clear who is responsible for reviewing AI outputs and what “human-in-the-loop” checks are mandatory before publishing or sending content externally. This is important for both data protection and IP risk management.
If staff ever ask “is ChatGPT confidential?”, share guidance on context-specific risks and refer them to this explainer: Is ChatGPT Confidential?
Vendor, Data And Security Checklist Before You Deploy ChatGPT
Before you roll out AI across your business, work through a quick governance checklist. This will inform your privacy policy wording and keep you compliant.
1) Map Your Use Cases And Data
- List the AI tools you plan to use and what for.
- Identify whether any personal data will enter prompts or be generated in outputs.
- Remove or redesign any use case that can’t be done safely with redaction/minimisation.
2) Run A DPIA (Data Protection Impact Assessment)
If your AI use is likely to result in high risk to individuals, a DPIA is required. Even where it’s not mandatory, a proportionate privacy assessment helps you spot and mitigate risks early. Document your lawful basis, minimisation steps, vendor roles and security controls.
3) Lock Down Vendor Settings
- Disable training on your data and prompt retention where possible.
- Use enterprise or API options with stronger contractual terms and SSO.
- Enable logging and audit trails for accountability.
4) Put Contracts In Place
Ensure your vendor terms reflect the actual controller/processor roles and include processing instructions, confidentiality, security standards, sub-processor controls, assistance with rights requests and deletion on termination. A solid Data Processing Agreement will help cover these bases.
5) Address International Transfers
Confirm where prompts and outputs may be processed. If data leaves the UK, implement appropriate transfer tools and reference them in your policy. Record your transfer risk assessment.
6) Align Your Public Notices And Internal Policies
Update your Privacy Policy to reflect your AI use. Make sure your internal rules and staff training match what you promise publicly.
7) Prepare For Requests And Incidents
Decide how you’ll search for and export prompt records to respond to data rights requests. Keep an incident runbook and a tested Data Breach Response Plan in case of misdirected prompts or vendor issues.
Handling Data Subject Requests And Incidents
AI doesn’t change people’s rights - it just makes your data map a little more complex. Build the following into your process so you can respond quickly and lawfully.
Access And Deletion Requests
- Be ready to locate prompts and outputs containing someone’s data (logs, admin consoles, ticketing systems).
- Where deletion is requested and lawful, remove personal data from prompts/outputs and any downstream systems.
- If you can’t fully delete a record due to legal obligations, minimise and restrict access instead.
Templates and an internal playbook for a Subject Access Request will keep responses consistent and on time.
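To make prompts searchable for these requests, it helps if your AI usage logs are in a machine-readable format. The sketch below assumes a hypothetical JSONL export with `timestamp`, `user` and `prompt` fields - adapt the field names to whatever your vendor's admin console or API actually exports.

```python
import json
from pathlib import Path

def find_subject_records(log_dir: str, subject_email: str) -> list[dict]:
    """Scan JSONL prompt logs for entries mentioning a data subject.

    Assumes one JSON object per line with 'timestamp', 'user' and
    'prompt' fields - an illustrative format, not a vendor standard.
    """
    matches = []
    needle = subject_email.lower()
    for path in Path(log_dir).glob("*.jsonl"):
        for line in path.read_text().splitlines():
            record = json.loads(line)
            if needle in record.get("prompt", "").lower():
                matches.append(record)
    return matches
```

The matched records can then form the prompt-related portion of a subject access response, alongside extracts from your CRM and ticketing systems.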
Corrections And Objections
Make it easy for people to tell you if AI-generated text about them is inaccurate, and have a process to correct or remove it. If someone objects to processing via AI under legitimate interests, assess and honour the objection where required.
Cookie And Tracking Compliance
If you deploy AI widgets or analytics that use cookies on your site, ensure your cookie controls are compliant. Your banner should offer a genuine choice (including an easy “reject all”) and list the purposes accurately. If you’re unsure, review your cookie approach against best practice and consider tightening it up.
Security Incidents
If personal data is pasted into a prompt by mistake, treat it as an incident. Contain the issue, assess the risk to individuals, decide whether it’s reportable to the ICO, and take steps to prevent a repeat (training, prompt guards, access restrictions). Keeping a prepared Data Breach Response Plan makes a stressful situation much easier to manage.
How To Structure Your ChatGPT Privacy Wording (With Example Phrases)
Here’s a simple structure you can adapt for your public notice. Keep it specific to what you actually do - don’t overpromise or undershare.
- Overview: “We use large language models (such as ChatGPT) to help our team draft routine communications, summarise documents and support internal research.”
- Data minimisation: “We instruct our staff not to include personal or confidential information in prompts unless strictly necessary, and to anonymise data where possible.”
- Lawful basis: “We rely on our legitimate interests in running an efficient business, balanced against your rights. Where appropriate, we may rely on contract or consent.”
- Sharing and transfers: “Your information may be processed by our service providers (including AI vendors) under contract. Where data is transferred outside the UK, we use approved transfer tools.”
- Retention: “We do not use prompts to train public AI models. We restrict prompt retention and apply our standard retention schedule to any content exported to our systems.”
- Security: “We limit access to trained staff, use enterprise controls, and log AI usage for accountability.”
- Your rights: “You can ask us for a copy of your information, to correct or delete it, or to object to our use of AI for your data.”
- Contact: “To exercise your rights or ask a question, contact us at [email address] or [phone number]. You can also complain to the ICO.”
If you’d like a lawyer to draft or review your policy so it aligns with your actual tools and settings, our GDPR Package can bundle the documents and advice you need.
Common Pitfalls To Avoid
- Copying generic wording that doesn’t reflect your real AI use - regulators expect accuracy, not boilerplate.
- Letting staff “experiment” with real personal or confidential data in public tools without guardrails.
- Missing the vendor role analysis - your contracts should match whether the provider is acting as processor or separate controller.
- Ignoring international transfers - you need appropriate safeguards documented.
- Keeping prompts forever - set short retention periods and delete when no longer needed.
- Forgetting about rights requests - ensure you can find and export relevant prompts and outputs.
Key Takeaways
- Be transparent: update your public Privacy Policy with clear, plain-English wording about how your business uses ChatGPT, what data is involved, and people’s rights.
- Minimise risk: adopt a staff-facing Generative AI Use Policy, ban personal/confidential data in prompts by default, and train your team.
- Do your homework: run a proportionate privacy assessment, lock down vendor settings, and put an appropriate Data Processing Agreement (or Data Sharing Agreement) in place.
- Plan for rights and incidents: prepare a practical Subject Access Request process and keep an up-to-date Data Breach Response Plan.
- Align law and reality: make sure the roles (controller/processor), international transfers and retention you describe match how your tools are actually configured.
If you’d like help drafting a ChatGPT privacy policy and the internal policies to go with it, you can reach us at 08081347754 or team@sprintlaw.co.uk for a free, no-obligations chat.


