Shadow AI: The Risk Your Employees Are Creating Right Now
Your employees are using ChatGPT, Claude, and Gemini with client data. Here's what that exposure looks like — and how to map it.
Here is something that is almost certainly true about your firm right now: at least one of your employees pasted a client's financial data into ChatGPT this week. Another summarized a confidential deal memo in Claude. A third used Gemini to draft a client deliverable and included personally identifiable information in the prompt. None of them asked for permission. None of them thought they were doing anything wrong. And none of it shows up anywhere in your risk register.
This is Shadow AI — the use of consumer AI tools by employees outside of any sanctioned, monitored, or governed process. It is the fastest-growing source of uncontrolled data exposure in professional services today, and most firms have no idea how deep it runs.
What Shadow AI Actually Is
Shadow AI is a subset of Shadow IT — the broader category of technology employees use without official approval. But it moves faster and carries more data risk than most Shadow IT because the entire value proposition of consumer AI tools is that you give them context. The more context you provide, the better the output. So employees are incentivized to share as much as possible.
The tools driving Shadow AI in professional services are predominantly:
- ChatGPT (OpenAI) — free and Plus tiers, used for drafting, summarizing, and analyzing documents
- Claude (Anthropic) — increasingly popular for long-document analysis and client communications
- Gemini (Google) — integrated into Google Workspace, making it frictionless to use with existing documents
- Microsoft Copilot — embedded in the Microsoft 365 apps, often used without awareness that prompts and documents are processed externally
- Perplexity, Notion AI, Grammarly, and dozens of other AI-assisted productivity tools
Each of these tools has different data retention policies, different terms of service around training data, and different security postures. Most employees using them have read none of this. They are using the free tier of a consumer product to process your most sensitive client information.
What the Exposure Actually Looks Like
Abstract risk is easy to dismiss. Concrete exposure is harder to ignore. Here is what Shadow AI exposure looks like in practice for professional services firms:
The Real Estate Broker
A commercial real estate associate pastes a confidential letter of intent — including buyer identity, purchase price, and deal terms — into ChatGPT to generate a clean summary for internal circulation. By default, the free tier of ChatGPT uses conversation data for model improvement unless the user opts out, and this associate has not. The deal terms are now in OpenAI's training pipeline. The buyer's identity is exposed. The confidentiality clause in the LOI has been breached.
The Accounting Associate
A junior accountant uses Claude to help draft variance analysis commentary for a client's quarterly financial review. To get accurate output, they paste the client's P&L, including revenue figures and employee compensation data. The document contains PII. The firm has no record that this occurred. When the client's CFO later asks about data handling practices, the firm cannot truthfully certify that client financial data has never left its controlled environment.
The Law Firm Associate
A first-year associate uses an AI writing tool to polish a contract. The contract contains personally identifiable information for both parties. The AI tool's privacy policy reserves the right to use inputs to improve its service. Attorney-client privilege may have been waived. Bar associations in several jurisdictions now treat this as a potential ethical violation.
The Consultant
A management consultant uploads a client's organizational chart and internal strategy document to an AI tool to help build a presentation. The document is marked confidential. The consultant's engagement contract prohibits sharing client materials with third parties. The AI vendor qualifies as a third party. The consultant is technically in breach of contract.
In every one of these scenarios, the employee was trying to do their job better. Shadow AI is not a malicious behavior problem. It is a governance vacuum problem.
Why It Keeps Happening
Shadow AI persists for the same reasons Shadow IT always has: the tools are better than the approved alternatives, the friction of the official process is too high, and the consequences feel abstract until they aren't. But AI tools have an additional dynamic that makes them stickier than previous generations of Shadow IT: the productivity lift is immediate and dramatic. An employee who discovers they can produce a first draft in ten minutes instead of two hours is not going to stop using the tool because of a policy they have never been shown.
This means the solution is not simply prohibition. Firms that ban AI tools without providing governed alternatives will lose the productivity benefit and still have the Shadow AI problem — employees will just be more careful about hiding it.
How to Map Your Shadow AI Exposure
A Shadow AI audit does not need to be a months-long project. Here is a practical framework for mapping your exposure in three to four weeks:
Step 1: Network and DNS Traffic Analysis
Your firewall and DNS logs will show you which AI domains your employees are hitting. Look for traffic to chatgpt.com, openai.com, claude.ai, gemini.google.com, and perplexity.ai, along with the corresponding API endpoints such as api.openai.com and api.anthropic.com. This gives you volume and frequency data — who is using what, and how often. It will not tell you what data was sent, but it tells you where to focus.
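If your resolver can export query logs, this first pass can be scripted. The sketch below is a minimal illustration, not a production tool: it assumes a CSV export with timestamp, client_ip, and domain columns (adjust the parsing to whatever your firewall or resolver actually emits), and the domain watchlist is a starting point, not an exhaustive inventory.

```python
# shadow_ai_dns_scan.py -- minimal sketch of step 1, not a production tool.
# Assumes a CSV export of resolver logs with columns: timestamp, client_ip, domain.
import csv
from collections import Counter, defaultdict

# Illustrative watchlist; extend it with whatever your audit targets.
AI_DOMAINS = {
    "chatgpt.com", "openai.com", "api.openai.com",
    "claude.ai", "anthropic.com", "api.anthropic.com",
    "gemini.google.com", "perplexity.ai",
}

def is_watched(domain: str) -> bool:
    # Match the domain or any parent, so chat.openai.com matches openai.com.
    parts = domain.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in AI_DOMAINS for i in range(len(parts)))

def scan(log_path: str) -> dict:
    hits = defaultdict(Counter)  # client_ip -> {domain: query count}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if is_watched(row["domain"]):
                hits[row["client_ip"]][row["domain"]] += 1
    return hits

if __name__ == "__main__":
    for client, domains in sorted(scan("dns_queries.csv").items()):
        print(client, dict(domains))
```

The output is a per-client tally of AI domain lookups: enough to prioritize the cross-reference in step 4, but nothing more, since DNS tells you the destination, not the payload.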
Step 2: SaaS and Browser Extension Audit
Audit your SaaS subscriptions for AI add-ons — Notion AI, Grammarly Business, Otter.ai for meeting transcription, and any others. Then audit browser extensions across managed devices. AI browser extensions are a particularly underexamined attack surface because they often have read access to page content, which means they can capture data from your internal tools.
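Extension audits are usually run through your MDM, but the underlying check is simple enough to sketch. The example below assumes Chrome on Windows with the default profile path (macOS and Linux paths differ, and your MDM will have its own inventory reporting); it flags any installed extension whose manifest requests broad host permissions, which is the signal that it can read page content.

```python
# extension_audit.py -- single-machine sketch of step 2; in practice,
# run the equivalent query through your MDM's extension inventory.
import json
from pathlib import Path

# Default Chrome profile on Windows; adjust for macOS/Linux or other profiles.
EXT_ROOT = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"

# Host-permission patterns that grant read access to (nearly) every page.
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def audit() -> None:
    for manifest in EXT_ROOT.glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8-sig"))
        # Manifest v2 mixes host patterns into "permissions";
        # manifest v3 moves them to "host_permissions". Check both.
        grants = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        if grants & BROAD:
            ext_id = manifest.parts[-3]
            # "name" may be a __MSG_...__ placeholder resolved from locale files.
            print(ext_id, data.get("name", "?"), sorted(grants & BROAD))

if __name__ == "__main__":
    audit()
```

An extension requesting broad host permissions is not necessarily malicious, but combined with AI features it means everything rendered in the browser, including your internal tools, is potentially in scope.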
Step 3: Employee Survey
A short, anonymous survey asking employees which AI tools they use and what types of tasks they use them for will surface tools your technical audit missed — particularly mobile apps and personal device usage. Frame the survey as an effort to find tools the firm should officially support, not as a compliance investigation. You will get more honest responses.
Step 4: Data Classification Cross-Reference
Once you have a map of which tools are being used for which tasks, cross-reference it against your data classification framework. Which tool categories are being used with Tier 1 data — PII, financial records, privileged communications? Those are your immediate remediation priorities. Which tool categories are being used with lower-sensitivity data? Those may be candidates for sanctioned use with appropriate controls.
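Once the usage map and the classification framework are both in hand, the cross-reference itself is mechanical. The toy example below assumes a three-tier scheme and hand-tabulated findings from steps 1 through 3; the tools, tasks, and priority rules are placeholders, but the output maps directly onto the three finding categories discussed in the next section.

```python
# classification_crossref.py -- toy cross-reference for step 4.
# The usage rows and the tier scheme are illustrative placeholders.

USAGE = [  # (tool, task, highest data tier observed with it)
    ("ChatGPT free", "deal memo summaries", 1),
    ("Claude", "variance analysis drafts", 1),
    ("Notion AI", "internal meeting notes", 2),
    ("Grammarly", "marketing copy", 3),
]

def priority(tier: int) -> str:
    if tier == 1:                 # PII, financial records, privileged material
        return "IMMEDIATE REMEDIATION"
    if tier == 2:                 # internal-only, non-client data
        return "SANCTION AND CONTROL"
    return "POLICY GUIDANCE"      # low-sensitivity data

for tool, task, tier in sorted(USAGE, key=lambda row: row[2]):
    print(f"{priority(tier):<24}{tool}: {task} (Tier {tier})")
```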
What to Do With What You Find
A Shadow AI audit typically produces three categories of findings:
- Immediate remediation: Tools being used with sensitive client data that have no acceptable data handling terms. These need to be blocked or restricted, and employees need to be notified with an explanation — not just a block page.
- Sanction and control: Tools that employees are using productively and that can be made compliant with appropriate configuration (enterprise tier, data processing agreement, approved use cases). These are opportunities to capture the productivity benefit while closing the risk.
- Policy gaps: Use cases where the risk is low but no policy exists. These need simple guidance — a one-page acceptable use policy — so employees know what is permitted.
The output of a Shadow AI audit is not a list of things to prohibit. It is a map of your AI risk landscape that tells you where to invest in governance, where to invest in sanctioned alternatives, and where the exposure requires immediate action.
The Longer-Term Answer: Sovereign AI
For professional services firms that handle sensitive client data at scale, the longer-term answer to Shadow AI is not better blocking — it is providing a governed, private AI alternative that delivers the same productivity benefit without the data risk. A private LLM deployment inside your own Azure or AWS cloud tenant means your employees get the AI tools they need, and your client data never leaves your controlled environment.
Shadow AI is a symptom. The underlying condition is that your employees have found a better way to work and your firm has not yet built the infrastructure to let them do it safely. The firms that close that gap will have a sustainable competitive advantage. The firms that do not will eventually face the incident that makes the risk register entry real.
Ready to get governance in place?
Take the free AI Governance Risk Score to understand your firm's current exposure, or talk to BerTech about building a governance program.
