It’s time to govern your team’s AI use

(Watch the video summary at the end or read the full article below.)
Do you know which AI tools your team is using at work… and what they’re putting into them?
Most senior leaders we speak to think they do. Until we dig a little deeper.
Generative AI tools like ChatGPT and Gemini have slipped into everyday work incredibly fast. They’re great for productivity. Drafting emails. Summarising documents. Brainstorming ideas. Solving problems faster.
The trouble is, they’ve arrived so quickly that governance hasn’t kept up.
A recent report looked at how staff are using GenAI, and the findings are eye-opening.
AI usage in organisations has surged. The number of users tripled in just a year.
People aren’t just trying it out either. They’re relying on it. Prompt usage has exploded, with some organisations sending tens of thousands of prompts every month.
At the very top end, usage runs into the millions.
On the surface, that sounds like efficiency.
Underneath, it’s something else entirely.
Nearly half of people using AI tools at work are doing so through personal accounts or unsanctioned apps.
This is called “shadow AI”. It means staff are uploading text, files, and data into systems the school or business doesn’t control, can’t see, and can’t audit.
That’s where the risk creeps in.
When someone pastes information into an AI tool, they’re not only asking a question. They’re sharing data.
Sometimes that data includes customer details, internal documents, pricing information, intellectual property, or even login credentials. Often without you realising it.
According to the report, incidents where sensitive data is sent to AI tools have doubled over the past year. On average, organisations now face hundreds of these incidents every month.
Personal AI apps, which operate outside of company controls, have become a major insider risk, not because of malicious intent, but because well-meaning people are simply trying to do their jobs more quickly.
This is where many organisations get caught out: they assume AI risks come from external hacking, when in fact the biggest risks often come from within.
It can look like an employee copying and pasting the wrong thing into the wrong box at the wrong time.
There’s also a compliance angle here.
If you operate in a regulated environment or handle sensitive customer data, uncontrolled AI use can put you in breach of your own policies, or someone else's regulations, without anyone noticing until it's too late.
The report's warning is blunt: as sensitive information flows freely into unapproved AI ecosystems, data governance becomes harder and harder to maintain.
At the same time, attackers are getting smarter, using AI themselves to analyse leaked data and tailor more convincing attacks.
So what’s the answer?
It’s not banning AI. That ship has sailed. It’s not pretending it’s harmless either.
The real answer is governance.
That means deciding which AI tools are approved for work use, being clear about what information can and cannot be shared, and putting visibility and controls in place so data stays where it should. It also means helping your team understand the risks in a practical, sensible way, so people use AI confidently and responsibly rather than cautiously or blindly.
AI is already here. Avoiding it isn’t safer. Governing it is.
We can help you put the right policies in place and educate your team on the risks of AI. Get in touch.
☎️ Cambridge: 01223 209920 ☎️ London: 020 3519 0124
☎️ Suffolk: 01440 592163 ☎️ Sheffield: 0114 349 8054
💻 www.breathetechnology.com | 📧 lucy@breathetechnology.com
Watch our short video below:


