You Already Have AI Governance. It’s Just Not Working the Way You Think.

With or without a formal AI policy, every CPA firm is already making AI decisions. This article is about recognizing that reality and understanding why unclear governance quietly stalls AI adoption inside CPA firms.

AI governance already exists in CPA firms, whether you planned it or not

Before we go further, a quick clarification. When we say “AI governance,” we do not mean a 20-page policy document. We mean the practical way a firm decides who can use AI, for what work, with which tools and under what level of review. If those decisions are happening inconsistently, governance already exists. It just is not working the way you think.

If a staff accountant asks three different people whether they can use AI for a client memo and gets three different answers, that is AI governance. It may not be written down. It may not be consistent. But it exists.

Some CPA firms have not written an AI policy yet. Others have a document sitting in SharePoint that no one has opened since it was approved. In both cases, governance (or lack thereof) shows up in the same places: how quickly IT says yes or no, what staff believe they are allowed to try and whether people ask questions or quietly work around the rules.

Research from the International Association of Privacy Professionals shows many organizations are already in this state. AI governance exists, but it is fragmented, informal and inconsistently applied. That inconsistency discourages adoption and increases risk at the same time.

AI governance in CPA firms is not a policy problem

When leaders hear “AI governance,” they often picture a policy, a committee or another layer of oversight. That framing misses the point. Governance is not a document. Governance is what happens in the daily operations of the firm; it’s how decisions get made when real work needs to happen.

Most CPA firms already govern AI through assumptions, one-off approvals and hallway conversations. Staff learn what is allowed by watching what gets corrected and what gets ignored. Over time, that becomes the rulebook.

In practical terms, AI governance means deciding who can use AI, for what work, with which tools and under what level of review. When those decisions are unclear, staff default to avoidance or workarounds. Neither outcome helps the firm.

Why practical AI governance starts with decisions, not principles

Most AI policies focus on principles like responsibility and caution. Those matter, but they do not help someone decide what to do at 9 p.m. during busy season. What actually changes behavior are clear answers to a small set of decisions. In practice, governance consistently breaks down around the same questions.

Decision one: Who is allowed to say yes to AI?

If a partner wants to try a new AI tool tomorrow, who can approve it and how fast?

When this decision is unclear or slow, staff and partners find workarounds. Committees feel safe, but they often introduce delays that quietly undermine governance.

The Harvard Law School Forum on Corporate Governance has emphasized that AI governance is a leadership responsibility, not just a technical one. When this decision is clear, AI questions stop bouncing between IT and partners, and usage becomes more consistent.

Decision two: Which AI tools does a CPA firm actually support?

Many CPA firms confuse “approved” with “supported.” An approved tool is one staff are allowed to use; a supported tool is one the firm secures, trains people on and stands behind.

This matters because staff are already using AI, whether it is officially sanctioned or not. Thomson Reuters’ 2025 Generative AI in Professional Services Report shows adoption accelerating across tax, accounting and audit, often faster than formal controls evolve.

When firms do not clearly state which tools they will support, secure and stand behind, AI usage does not stop. It just becomes invisible. Clarity here is often the difference between visible adoption and quiet workarounds.

Decision three: What client and firm data can be used with AI?

Generic data classifications rarely help staff make real decisions.

People want to know whether they can use AI for tax research, drafting client explanations or reviewing contracts. Without clear guidance, they either avoid AI or take risks they do not fully understand.

Academic research shows that contextual, scenario-based governance is more effective than abstract rules. A systematic literature review published by Springer highlights the importance of situational guidance in AI governance. Firms that get this right give staff confidence without forcing them to interpret policy language.

Decision four: What is AI allowed to do and not do?

AI can draft, summarize and identify patterns. It does not replace final technical judgment, review or accountability.

Thomson Reuters’ work on the ethics of AI in professional services reinforces this boundary in regulated environments. Clear boundaries here tend to speed review rather than slow it.

Decision five: How does review and accountability change with AI?

Review still exists, but what reviewers are looking for changes.

When expectations are not stated, teams revert to old habits, and frustration grows. The IAPP AI Governance Profession Report highlights how organizations struggle to operationalize accountability once AI use begins. Without resetting expectations, AI often creates friction instead of leverage.

Decision six: What happens when AI governance rules are ignored?

Zero-tolerance policies sound tough, but they are rarely enforced.

Research on trustworthy AI frameworks emphasizes proportional accountability and consistent enforcement over rigid punishment models. Governance without enforcement is just documentation: the rules exist on paper but not in practice.

Why most CPA firms stall and what actually moves things forward

Many CPA firms can identify where AI governance breaks down, but still struggle to act. The challenge is not awareness. It is turning decisions into guidance that staff can actually use without slowing down work.

That gap is where AI governance usually fails in practice.

The CPA firms seeing value from AI are not the most permissive or the most restrictive. They are the clearest. Whether a firm has a formal AI policy or none at all, AI governance is already happening. Making it practical turns AI from a quiet risk into a usable capability.

The next step is not writing a longer policy. It is pressure-testing the six decisions outlined above against how your firm actually operates today. Where are answers consistent? Where do they depend on who you ask? Where are staff guessing?

Start there.

If your firm is navigating these questions and wants a structured way to think through practical AI governance without overengineering it, let’s talk. A short working session can surface where clarity already exists and where a few targeted adjustments could reduce risk and support more confident adoption.