The businesses we work with are not slow to see the value in AI. A law firm partner wants to search a decade of case files in seconds. An accountant wants to draft reports faster. An HR consultancy wants to stop drowning in candidate documents. The potential is obvious. What stops them isn't enthusiasm — it's a question they can't get a clean answer to: where does our data actually go when we use these things?
For many businesses, that question is the whole ballgame.
Why Some Data Can't Leave the Building
Most of the AI tools you hear about work the same way. You send your text, your documents, your queries to a server somewhere — usually in the United States — and the AI processes it there and sends back a response. That's fine for plenty of tasks. But for a lot of the work that small and medium businesses actually do, it's not fine at all.
Law firms operate under strict professional confidentiality obligations. Sending client communications, case notes, or legal strategy documents to a third-party server — even a reputable one — raises serious questions about privilege and professional conduct. Most bar associations and law societies have not caught up with AI yet, which means the risk sits with the firm.
Medical practices face a harder constraint still. Under GDPR, personal health data is a special category requiring explicit legal grounds to process, and transferring it outside the European Economic Area — which most major US cloud providers do — requires additional safeguards that are frequently not in place. The fine for getting it wrong is not a warning letter.
Accountants and financial advisors sit on data their clients expect to stay private. Employment records, salary data, and tax affairs are confidential both legally and commercially. The idea of that material being ingested by a cloud service — potentially visible to staff at that company under certain legal circumstances, potentially used to improve future models — is not a hypothetical risk. Several major providers have faced scrutiny over exactly this.
The pattern is consistent: the data is too sensitive, too regulated, or too commercially valuable to hand to a service you don't control.
Three Ways to Use AI — and What Each One Means for Your Data
Not all AI deployments work the same way. The three main options sit on a spectrum between convenience and control.
| | Cloud API (e.g. OpenAI, Claude) | Private Cloud LLM | On-Premise LLM |
|---|---|---|---|
| Where the AI runs | Provider's servers, usually in the US | Dedicated server in a data centre of your choice | Your own hardware, in your building |
| Where your data goes | Leaves your network, processed externally | Stays within your chosen geography | Never leaves your premises |
| Who manages the infrastructure | The provider | A hosting provider or you | You (or us) |
| GDPR / data residency | Requires careful review; international transfers often apply | Manageable with the right provider and location | Full control; simplest compliance position |
| Setup complexity | Very low — works immediately | Medium | Medium to high |
| Cost | Pay per use | Fixed monthly infrastructure cost | One-off setup, then low running cost |
| Best for | Non-sensitive tasks, general productivity | Sensitive data, no appetite for on-site hardware | Highly confidential data, maximum control |
The cloud API option — tools like ChatGPT, Claude, or Gemini accessed via a subscription or developer API — is the easiest to start with. You send a request, you get a response. The trade-off is that your data travels to and is processed on someone else's infrastructure. For many tasks, that's fine. For the work described above — client files, health records, commercial negotiations — it usually isn't.
Private cloud sits in the middle. You rent dedicated infrastructure from a hosting provider, typically in a European data centre, and run an open-source AI model on it. The data stays in a jurisdiction you specify, the infrastructure is yours alone, but you're not responsible for physical hardware. It's a good option for businesses that want data residency without an on-site server room.
On-premise means the AI runs on hardware inside your building, on your network. Nothing leaves. This became genuinely practical in 2023 and 2024, when a new generation of capable open models — Llama, Mistral, and others — became available for businesses to run on their own hardware. These are not toy versions. They handle document reading, drafting, and question-answering to a useful professional standard. They run on a server in your office. They do not phone home.
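For the technically curious, here is a minimal sketch of what "the AI runs on your network" means in practice. It assumes a local model served through Ollama, a common open-source tool for running models like Llama or Mistral on your own hardware; the IP address and model name are placeholders for your own setup. The point to notice is the destination: a private LAN address, not a public cloud endpoint.

```python
import json
import urllib.request

# Placeholder: the address of a model server inside your own network.
# 192.168.x.x addresses are private and not routable on the public internet.
OLLAMA_URL = "http://192.168.1.50:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build an HTTP request to the on-premise model server.

    Ollama's generate endpoint accepts a JSON body with the model name
    and the prompt; stream=False asks for a single complete response.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarise the attached case note in two sentences.")
# The request targets a machine in your building; the prompt and any
# documents it contains never cross the network boundary.
print(req.full_url)
```

In a real deployment you would send the request with `urllib.request.urlopen(req)` and read the JSON response; it is omitted here so the sketch runs without a server present. The firewall rule that matters is the one you do not need: nothing in this path requires an outbound connection.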
The European Context
This is not just a question of company policy. It has become a political and legal question across Europe.
In 2020, the Court of Justice of the European Union issued the Schrems II ruling, which invalidated the EU-US Privacy Shield — the framework that had allowed European businesses to transfer personal data to US companies relatively freely. The core problem: US surveillance law gives American authorities the power to compel US companies to hand over data, including data stored on servers in Europe. That makes transfers to US cloud providers legally difficult to justify under GDPR, regardless of where the servers physically sit.
A replacement framework — the EU-US Data Privacy Framework — was adopted in 2023. It is already being challenged in court, and its long-term stability is uncertain. The underlying legal tension has not gone away.
Beyond the legal mechanics, the political climate in Europe has shifted considerably. The phrase "digital sovereignty" — Europe's ability to control its own data and digital infrastructure — has moved from niche policy discussion to mainstream concern. The Gaia-X initiative, launched by France and Germany, is an attempt to build a federated European cloud infrastructure that operates under European rules. European providers like Hetzner, OVHcloud, and IONOS offer infrastructure that is entirely within EU jurisdiction and not subject to US law — an increasingly relevant distinction.
The European Commission's digital strategy frames this as a question of strategic autonomy. The Data Act, the Data Governance Act, and the AI Act all have data sovereignty dimensions. The direction of travel is clear: European institutions and regulators want European data to stay under European control.
For businesses that work with sensitive client data, this context matters. The question of where AI processes your documents is not just a technical preference — it is increasingly a legal and political one. The tools now exist to answer it cleanly.
What You Get
A private AI deployment is not a stripped-down version of what the big cloud tools offer. For the work that professional services firms actually do — reading documents, writing drafts, answering questions, pulling information out of large files — a well-configured local model performs extremely well.
What you gain beyond capability is certainty. You know where your data is. You can tell your clients where their data is. You are not dependent on a pricing model that can change without warning or a service that can be discontinued.
For businesses operating under GDPR, there is a structural benefit: when data never leaves your premises, the obligations around international transfers, third-party processing agreements, and data subject requests become considerably simpler. You don't eliminate compliance complexity, but you reduce the surface area.
For businesses that handle commercially sensitive information — ongoing negotiations, board-level decisions, competitive strategy — the value is different again. The risk isn't just regulatory. It's the possibility that information your clients gave you in confidence ends up somewhere you didn't intend.
Let's Talk
If you work in a field where confidentiality matters — law, medicine, finance, HR, or anywhere else where your clients trust you with information they would not want shared — and you've been watching AI from a distance because you couldn't see a safe path in, we'd be happy to talk.
We can usually tell fairly quickly whether a private deployment makes sense for your situation, what the realistic cost and timeline look like, and what you would actually be able to do with it. There's no pressure and no pitch — just a straightforward conversation about whether this is the right fit.