The 15-Point AI Governance Checklist Every Professional Services Firm Needs
A practical, actionable checklist across shadow AI, PII controls, audit trail, access governance, and compliance policy.
AI governance does not have to be complex to be effective. For most professional services firms, the difference between a defensible governance posture and an indefensible one comes down to a handful of concrete, operational decisions — not a multi-year enterprise transformation. This checklist covers the 15 items that matter most, organized across the five areas where AI governance failures most often occur.
Work through each item and mark it as done, in progress, or not started. Where you have gaps, prioritize the shadow AI and PII control sections first — those represent your most immediate and consequential exposures.
Section 1: Shadow AI
Shadow AI — employee use of unsanctioned AI tools — is the most widespread and underreported AI risk in professional services. Before you can govern AI use, you need to know what AI is actually being used.
1. Conduct a Shadow AI inventory
Review your network and DNS logs for traffic to known AI tool domains: openai.com, anthropic.com, gemini.google.com, perplexity.ai, and others. Audit SaaS subscriptions and browser extensions on managed devices for AI components. Cross-reference with an anonymous employee survey asking which AI tools they use and for what tasks. Document the results. Most firms discover three to five times more AI tool usage than leadership expected.
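As an illustration, the DNS-log portion of the inventory can be sketched as a small script that tallies queries against a watch list of AI domains. This is a sketch under assumptions: the log line format shown (timestamp, client IP, queried name) is invented for the example, so adapt the parsing to whatever your resolver or firewall actually exports.

```python
from collections import Counter

# Watch list drawn from the domains named above; extend it as new tools appear.
AI_DOMAINS = {"openai.com", "anthropic.com", "gemini.google.com", "perplexity.ai"}

def classify(queried: str):
    """Return the matching AI domain for a queried name, or None."""
    q = queried.rstrip(".").lower()
    for domain in AI_DOMAINS:
        if q == domain or q.endswith("." + domain):
            return domain
    return None

def tally_ai_traffic(log_lines):
    """Count queries per AI domain from 'timestamp client_ip queried_name' lines."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and (domain := classify(parts[2])):
            hits[domain] += 1
    return hits

sample = [
    "2025-06-01T09:14:02 10.0.0.5 chat.openai.com.",
    "2025-06-01T09:15:10 10.0.0.7 api.anthropic.com.",
    "2025-06-01T09:16:44 10.0.0.5 example.com.",
]
print(tally_ai_traffic(sample))
```

Even a crude tally like this, run weekly, gives you the trend line that an anonymous survey alone cannot.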
2. Establish an approved AI tool list
Identify which AI tools your firm officially sanctions for which use cases and data types. For each approved tool, confirm it has a data processing agreement, enterprise-tier data handling terms, or other contractual protection appropriate to the data being processed. Publish the approved list to all staff. Tools not on the list should be treated as prohibited for client-related work.
3. Communicate clearly — prohibition without alternatives fails
Banning AI tools without providing governed alternatives does not eliminate Shadow AI — it drives it underground. If you restrict a tool employees are actively using, you must simultaneously offer an approved alternative that meets their productivity need. Firms that provide a governed AI capability see faster compliance and lower policy violation rates than firms that issue blanket prohibitions.
Section 2: PII Controls
Personally identifiable information entering an AI tool represents a direct regulatory and contractual exposure. These three controls define the boundary between acceptable and prohibited AI use.
4. Define your data classification tiers
Establish at minimum three tiers: Tier 1 — restricted data that must never enter an AI tool (SSNs, financial account numbers, PHI, privileged communications, deal-specific client data); Tier 2 — sensitive data permissible only in approved tools with a data processing agreement; Tier 3 — general business data permissible in any approved tool. Map your typical work product categories to these tiers. Publish the classification framework to all staff.
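The tier framework becomes enforceable once it is expressed as data rather than prose. The sketch below shows one way to encode the mapping; every category, tool name, and tier assignment here is an illustrative assumption, not a recommended classification.

```python
# Tier constants mirror the three-tier framework described above.
TIER_RESTRICTED, TIER_SENSITIVE, TIER_GENERAL = 1, 2, 3

# Map work product categories to tiers. Substitute your firm's own categories.
CATEGORY_TIER = {
    "privileged_communication": TIER_RESTRICTED,
    "client_financials": TIER_RESTRICTED,
    "engagement_workpapers": TIER_SENSITIVE,
    "internal_training_material": TIER_GENERAL,
}

# Each approved tool records the most sensitive tier it may handle.
# No tool lists Tier 1, because restricted data never enters an AI tool.
TOOL_MIN_TIER = {
    "enterprise_assistant_with_dpa": TIER_SENSITIVE,  # DPA in place: Tiers 2-3
    "general_drafting_tool": TIER_GENERAL,            # Tier 3 only
}

def is_permitted(category: str, tool: str) -> bool:
    """Allow only if the data's tier is no more sensitive than the tool allows.
    Unknown categories default to restricted; unknown tools are denied."""
    tier = CATEGORY_TIER.get(category, TIER_RESTRICTED)
    return tier >= TOOL_MIN_TIER.get(tool, TIER_GENERAL + 1)

print(is_permitted("engagement_workpapers", "enterprise_assistant_with_dpa"))
```

Note the deny-by-default behavior: an uncategorized document or an unapproved tool fails the check, which is the posture you want when the framework and reality drift apart.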
5. Implement technical PII detection
Policy alone is insufficient. Implement a technical control that detects PII in AI prompts before they reach a model — and either blocks the prompt or redacts the sensitive values. This can be accomplished through a governance gateway like BerTechCORE's Redactor module, which strips and tokenizes PII within your own cloud environment. Without a technical control, your PII policy depends entirely on employee vigilance, and vigilance is not a control.
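To make the detect-and-redact step concrete, here is a minimal pattern-matching sketch. This is not how any particular product (including the Redactor module mentioned above) works; it is a toy illustration, and production detection needs far broader coverage — names, addresses, context-aware matching — than two regexes provide.

```python
import re

# Illustrative patterns only: a US SSN format and an email address.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(prompt: str):
    """Replace detected PII with typed placeholders; report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt, found

clean, flags = redact("Client SSN is 123-45-6789, reach me at jane@example.com")
print(clean)
print(flags)
```

The `flags` list is what feeds your audit trail: every redaction event is itself a loggable policy signal.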
6. Address AI tool training data opt-out
Confirm that every AI tool in use has opted out of using your firm's inputs for model training — or that you are using it in a mode where training data use is contractually excluded. Free consumer tiers of most AI products use inputs for model improvement. Enterprise tiers typically do not. Verify this for each tool on your approved list and document the confirmation. This is a specific question SEC examiners and enterprise client procurement teams now ask.
Section 3: Audit Trail
When a regulator, a client, or opposing counsel asks what AI was used and how, your audit trail is your answer. These three items determine whether you can respond.
7. Log AI interactions at the user and matter level
Every AI interaction in a client context should be logged with at minimum: the user, the tool, the timestamp, the task category, and any policy flags triggered. Logs should be tied to the matter or engagement where applicable. This does not require logging full prompt content for every interaction — but you need to know who used what, when, and on which client matter.
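The minimum log record described above can be captured in a small schema. The field names here are assumptions; align them with the identifiers your matter-management system already uses.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIInteractionLog:
    """One AI interaction: who, which tool, when, what task, on which matter."""
    user: str
    tool: str
    matter_id: str
    task_category: str
    policy_flags: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = AIInteractionLog(
    user="jdoe",
    tool="enterprise_assistant",
    matter_id="M-2025-0147",
    task_category="document_summarization",
    policy_flags=["pii_redacted"],
)
print(asdict(entry))
```

Notice that prompt content is not a field: the schema deliberately records metadata only, consistent with the point above that full prompt logging is not required for a defensible trail.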
8. Ensure logs are tamper-evident and retained per policy
AI interaction logs must be stored in a way that cannot be modified after the fact — the same standard that applies to your financial records and legal files. Determine the retention period that applies to your firm (typically aligned with your general matter record retention policy), confirm your logging system meets that standard, and document the confirmation.
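One standard technique for tamper evidence is hash chaining: each record stores the hash of the record before it, so any after-the-fact edit breaks verification for everything downstream. The sketch below illustrates the idea with an in-memory list; a real deployment would use an append-only store or a WORM-capable logging service, which this example does not attempt to model.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def append_record(chain: list, payload: dict) -> None:
    """Append a record whose hash commits to both its payload and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(payload, sort_keys=True)
    record_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"payload": payload, "prev_hash": prev_hash, "hash": record_hash})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any modified payload or reordered record fails."""
    prev_hash = GENESIS
    for rec in chain:
        body = json.dumps(rec["payload"], sort_keys=True)
        if rec["prev_hash"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256((prev_hash + body).encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_record(log, {"user": "jdoe", "tool": "assistant", "matter": "M-0147"})
append_record(log, {"user": "asmith", "tool": "assistant", "matter": "M-0212"})
print(verify_chain(log))
```

Running the verifier on a schedule, and storing the latest chain head somewhere the logging system cannot write to, turns "tamper-evident" from a claim into a checkable property.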
9. Test your audit export capability
Run a drill: simulate a regulatory inquiry requiring you to produce all AI interactions associated with a specific matter or user over a defined time period. Can you produce that export in under an hour? If the answer is no, you have a gap that will be exposed at the worst possible time. Building the audit export capability before you need it is significantly less expensive than reconstructing it during an examination.
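If your interactions are logged with the fields from Section 3, the drill reduces to a filter over records by matter and time window. The record shape below is an assumption carried over from the logging sketch; the point is that the export should be a query, not a reconstruction project.

```python
from datetime import datetime

def export_for_inquiry(records, matter_id, start, end):
    """Return all interactions for one matter within [start, end], inclusive."""
    return [
        r for r in records
        if r["matter_id"] == matter_id
        and start <= datetime.fromisoformat(r["timestamp"]) <= end
    ]

records = [
    {"user": "jdoe",   "matter_id": "M-0147", "timestamp": "2025-03-02T10:00:00"},
    {"user": "asmith", "matter_id": "M-0147", "timestamp": "2025-06-09T14:30:00"},
    {"user": "jdoe",   "matter_id": "M-0212", "timestamp": "2025-03-05T09:00:00"},
]
hits = export_for_inquiry(
    records, "M-0147",
    datetime(2025, 1, 1), datetime(2025, 3, 31),
)
print(len(hits))
```

If producing this filter's output for real takes more than an hour, the gap is usually in where the logs live, not in the query itself.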
Section 4: Access Governance
Access governance defines who can use which AI capabilities, under what conditions. Without it, your AI governance program has no enforcement mechanism.
10. Assign AI governance ownership
Name a specific individual or role responsible for AI governance at your firm. This person owns the approved tool list, reviews new tool requests, investigates policy violations, and is the point of contact for regulatory inquiries about AI use. Without named ownership, AI governance becomes everyone's responsibility and therefore no one's.
11. Implement role-based AI access controls
Not all AI capabilities are appropriate for all users. Define which AI tools and features are available to which roles — by seniority, practice area, or function. A junior associate and a senior partner may warrant different AI tool access; an administrative assistant and a client-facing professional may warrant different data classification permissions. Document the role-based access matrix and implement it technically where possible.
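"Implement it technically" can start as simply as expressing the matrix as data and checking it at the point of use. The roles and tool names below are illustrative assumptions; in practice you would derive the matrix from your directory groups rather than hard-code it.

```python
# Role-based access matrix: which roles may use which AI tools.
ACCESS_MATRIX = {
    "partner":     {"research_assistant", "drafting_tool", "analytics_copilot"},
    "associate":   {"research_assistant", "drafting_tool"},
    "admin_staff": {"drafting_tool"},
}

def can_use(role: str, tool: str) -> bool:
    """Deny by default: unknown roles and unlisted tools are refused."""
    return tool in ACCESS_MATRIX.get(role, set())

print(can_use("associate", "analytics_copilot"))
```

Keeping the matrix in one reviewable structure also gives your governance owner a single artifact to update when roles or tools change.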
12. Establish an AI tool approval process
Create a defined process for evaluating and approving new AI tools before they are used with client data. The process should include: security review, data processing agreement confirmation, data classification tier assignment, and sign-off from your AI governance owner. The process does not need to be slow — a streamlined review can be completed in a week — but it must exist. Ad hoc tool adoption is how Shadow AI starts.
Section 5: Compliance Policy
Policy is the documentation layer that makes governance defensible. Regulators, clients, and courts look for evidence that your firm made good-faith efforts to govern AI use. These three items create that evidence.
13. Write and publish an AI acceptable use policy
Your AI acceptable use policy does not need to be long. Two to three pages covering: approved tools and prohibited tools, data classification rules for AI use, supervision and review requirements for AI-assisted work product, incident reporting obligations, and consequences for policy violation. The policy must be distributed to all staff, acknowledged in writing, and stored where it can be produced during an examination.
14. Conduct annual AI governance training with records
Every person who uses AI tools in connection with client work must receive training on your AI governance policies. Training should cover: what tools are approved and prohibited, what data may and may not be entered into AI tools, how to report a suspected policy violation or data exposure, and the regulatory context for why these rules exist. Training records — who completed it and when — must be maintained and producible on request.
15. Complete impact assessments for consequential AI systems
For any AI system your firm uses to assist in decisions that affect clients — credit assessments, lease approvals, employment decisions, investment recommendations, legal work product — complete and document an impact assessment before deployment. The assessment should address: what the system does, what data it uses, what its known limitations are, what bias risks have been identified and mitigated, and how the system will be monitored. This is required by the Colorado AI Act for high-risk systems and represents best practice for all consequential AI use.
A governance program with honest gaps documented is far more defensible than a governance program that looks complete on paper but has never been operationalized. Start with what you have. Document where you are. Build from there.
Where to Start
If you are working through this checklist for the first time, prioritize in this order: Shadow AI inventory first (you cannot govern what you have not mapped), PII controls second (your most immediate regulatory and contractual exposure), audit trail third (your ability to respond to the first inquiry). Access governance and compliance policy can be built in parallel once you have the first three underway.
The AI Governance Risk Score gives you a scored baseline across all five categories in under two minutes — a useful complement to this checklist for understanding where your firm sits relative to peers and what your highest-priority gaps are.
Ready to get governance in place?
Take the free AI Governance Risk Score to understand your firm's current exposure, or talk to BerTech about building a governance program.
