Shadow AI: The Hidden Risk in Your Organisation
Shadow AI is the new shadow IT. Employees are using ChatGPT, Gemini, and other AI tools without oversight. Here is how to manage the risk without killing innovation.
Nerdster Team
10 November 2025
Shadow IT was the challenge of the 2010s — employees adopting cloud services, personal devices, and unsanctioned applications outside the oversight of IT departments. Shadow AI is its successor and it is moving faster, with higher stakes. Across London businesses, employees are already using ChatGPT, Google Gemini, Claude, and dozens of specialised AI tools to draft emails, analyse data, summarise documents, and generate code. Most of them have never read the terms of service for the tools they are pasting company data into.
The Scale of the Problem
A 2025 survey by Salesforce found that 55% of UK employees use generative AI tools at work, and of those, 40% use tools that are not approved or provided by their employer. Microsoft’s 2025 Work Trend Index reported that 78% of AI users bring their own AI tools to work, a phenomenon Microsoft termed “BYOAI.”
The implication is stark: the majority of AI use in most organisations is invisible to IT, legal, and compliance teams. Employees are not being malicious. They are being productive. They have found tools that help them work faster, and in the absence of organisational guidance, they are making their own choices.
What Are the Actual Risks?
Data Leakage
This is the primary risk. When an employee pastes client data, financial figures, legal documents, or proprietary strategies into a public AI service, that data may be used to train the AI model, stored in jurisdictions outside your control, or exposed through security vulnerabilities in the AI provider’s infrastructure.
Consider what employees might reasonably paste into ChatGPT:
- Client names and financial details to draft a proposal
- Contract terms for summarisation
- Employee performance data for writing reviews
- Code that contains API keys or database credentials
- Board meeting minutes for action item extraction
Each of these represents a potential data breach, and in a regulated environment, a potential compliance violation.
Regulatory Compliance Failures
For FCA-regulated firms, using AI tools to process client data without appropriate data processing agreements, impact assessments, and governance frameworks may breach multiple regulatory requirements. UK GDPR requires a lawful basis for processing personal data, and sharing that data with a third-party AI provider requires documented safeguards.
The FCA itself has signalled increasing scrutiny of AI use in financial services, particularly around model risk, transparency, and client data protection.
Accuracy and Liability
AI-generated content can be wrong. Employees who use AI to draft regulatory filings, client communications, or financial analyses without adequate review create liability for the firm. A hallucinated statistic in a client report or an incorrect regulatory interpretation in a compliance document could have serious consequences.
Intellectual Property Exposure
Data input into AI tools may not remain confidential, depending on the provider’s terms. Proprietary trading strategies, research methodologies, or client relationship data pasted into a public AI service may lose their confidential status.
The Wrong Response: Banning AI
Some organisations have responded by blocking AI tools entirely. This approach fails for three reasons:
- It drives usage underground. Employees use personal devices, personal accounts, or VPN workarounds. The risk does not disappear; it becomes invisible.
- It creates a competitive disadvantage. Firms that do not use AI effectively will fall behind those that do. The productivity gains from AI are genuine and significant.
- It signals organisational rigidity. In a competitive talent market, blocking productive tools sends the wrong message to the people you most want to retain.
The Right Response: Governed AI Adoption
The goal is to enable AI use that is productive, secure, and compliant. Here is a practical framework:
1. Establish an AI Acceptable Use Policy
Create a clear, concise policy that covers:
- Approved tools: Which AI tools are sanctioned for use, and under what circumstances?
- Data classification: What categories of data can and cannot be input into AI tools?
- Review requirements: What AI-generated outputs require human review before use?
- Prohibited uses: What specific use cases are not permitted (e.g., AI-generated client advice, automated regulatory filings)?
Keep the policy short and practical. A 20-page document that nobody reads is worse than a one-page document that everyone follows.
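To make the data classification rule concrete, the sketch below shows one way to express it as a simple lookup. This is a hypothetical illustration: the classification labels, tool names, and permitted combinations are placeholders rather than a recommended scheme, and the real policy belongs in your policy document and DLP tooling, not in a script.

```python
# Hypothetical data classification matrix: which AI tools, if any, each
# category of data may be entered into. Labels and tool names are
# illustrative placeholders, not a prescribed scheme.
AI_USE_MATRIX = {
    "public":       {"ChatGPT Enterprise", "Microsoft Copilot"},
    "internal":     {"Microsoft Copilot"},
    "confidential": set(),   # no AI tools permitted
    "regulated":    set(),   # client / FCA-relevant data: never permitted
}

def is_permitted(classification: str, tool: str) -> bool:
    """Return True if the named tool is approved for this data class."""
    return tool in AI_USE_MATRIX.get(classification, set())

# Example: drafting from internal notes in Copilot is allowed;
# pasting regulated client data into any tool is not.
assert is_permitted("internal", "Microsoft Copilot")
assert not is_permitted("regulated", "ChatGPT Enterprise")
```

Even a table this small forces the conversation the policy exists to settle: which data classes you recognise, and which tools, if any, each one may touch.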
2. Provide Sanctioned AI Tools
If employees need AI capabilities — and they do — provide approved alternatives that your organisation controls:
- Microsoft Copilot within your Microsoft 365 tenant processes data within your existing data boundaries and compliance framework
- Enterprise versions of ChatGPT or Claude with data processing agreements that prevent training on your data
- Industry-specific AI tools that are designed for regulated environments
When you provide good tools with clear governance, the incentive to use unsanctioned alternatives largely disappears.
3. Implement Technical Controls
Policy alone is insufficient. Back it up with technical measures:
- Cloud access security broker (CASB) policies that monitor and control access to AI services
- Data loss prevention (DLP) rules that detect sensitive data being uploaded to AI platforms
- Network-level controls that block or log access to unapproved AI services
- Browser-based controls that prevent copy-paste of classified content into web-based AI tools
These controls should inform and guide rather than simply block. When an employee tries to upload a confidential document to ChatGPT, the ideal response is a redirect to the approved alternative, not a silent block that frustrates without educating.
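Real DLP engines such as Microsoft Purview or a CASB enforce this at the network and endpoint layer, but the detect-and-redirect pattern is simple enough to sketch. The snippet below is a minimal illustration, assuming a handful of placeholder regexes and a hypothetical internal Copilot URL; it is not a substitute for a proper DLP deployment.

```python
import re

# Illustrative patterns for content that should never reach a public AI tool.
# Real DLP rules are far richer; these regexes are placeholders for the concept.
SENSITIVE_PATTERNS = {
    "API key":            re.compile(r"\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b"),
    "UK NI number":       re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "IBAN":               re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "Classified marking": re.compile(r"\b(confidential|internal only)\b", re.I),
}

APPROVED_TOOL = "https://copilot.example.internal"  # hypothetical internal endpoint

def check_outbound_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def guidance(text: str) -> str:
    """Inform and redirect rather than silently block."""
    hits = check_outbound_text(text)
    if hits:
        return (f"Blocked: detected {', '.join(hits)}. "
                f"Please use the approved tool instead: {APPROVED_TOOL}")
    return "OK to send"

print(guidance("Summarise this contract for client AB123456C"))
```

The design point is the message, not the match: the user is told what was detected and where to go instead, which turns a blocked action into a nudge towards the sanctioned tool.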
4. Train Your Team
Run practical training sessions that cover:
- What AI can and cannot do well
- How to use approved AI tools effectively
- What data should never be entered into any AI tool
- How to verify AI-generated outputs
- The regulatory and legal implications of misuse
Frame the training as enablement, not restriction. “Here is how to use AI productively and safely” is more effective than “here is what you cannot do.”
5. Monitor and Adapt
AI tools and capabilities are evolving rapidly. Your governance framework needs regular review:
- Quarterly review of the approved tools list
- Regular analysis of CASB and DLP logs to understand usage patterns
- Feedback collection from employees on whether the approved tools meet their needs
- Updates to policies as regulatory guidance evolves
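As an illustration of what that log review might look like, the sketch below counts requests to known AI domains in a CSV export of proxy or CASB logs. The column name, file name, and domain list are assumptions; real log schemas vary by product, so treat this as a pattern rather than a recipe.

```python
from collections import Counter
import csv

# Hypothetical export of proxy/CASB logs with a 'destination_domain' column.
AI_DOMAINS = {
    "chat.openai.com":       "ChatGPT (unapproved)",
    "gemini.google.com":     "Gemini (unapproved)",
    "claude.ai":             "Claude (unapproved)",
    "copilot.microsoft.com": "Copilot (approved)",
}

def summarise_ai_usage(log_path: str) -> Counter:
    """Count requests to known AI services, keyed by a friendly label."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            label = AI_DOMAINS.get(row.get("destination_domain", ""))
            if label:
                usage[label] += 1
    return usage

if __name__ == "__main__":
    for label, count in summarise_ai_usage("proxy_export.csv").most_common():
        print(f"{label:30} {count:>6} requests")
```

A report like this, reviewed quarterly, tells you whether employees are actually using the approved tools or quietly routing around them.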
An AI Governance Checklist
Use this to assess your current position:
- Do you have an AI acceptable use policy? Is it communicated to all staff?
- Do you provide sanctioned AI tools that meet employee needs?
- Can you detect when employees use unapproved AI services?
- Do you have DLP controls that prevent sensitive data entering AI tools?
- Have employees received training on responsible AI use?
- Is there a named person or team responsible for AI governance?
- Do you review and update your AI governance framework regularly?
If you answered “no” to more than two of these questions, you have a shadow AI problem that needs addressing.
How Nerdster Helps
We help London businesses implement practical AI governance frameworks that balance productivity with security and compliance. From deploying Microsoft Copilot with proper data governance to implementing CASB and DLP controls, we build AI-ready environments that keep your data safe without holding your team back.
Book a free IT assessment with Nerdster to understand your current AI exposure and build a governance framework that works for your business.