Enterprise AI Governance Is Still Mostly Theatre


I’ve reviewed AI governance frameworks at over 30 organisations in the past year. Most have impressive documentation. Principles statements. Risk matrices. Approval workflows.

Almost none of them actually govern how AI gets used.

The Theatre of Governance

Typical AI governance consists of:

A policy document that states high-level principles—transparency, fairness, accountability, human oversight. These read well and satisfy board requirements.

An approval committee that reviews proposed AI use cases. They meet monthly, examine slide decks, and approve projects that already have momentum.

Risk assessment templates that project teams complete. The templates capture potential concerns but rarely change project direction.

An ethics board or committee that convenes occasionally for particularly visible applications.

This structure creates the appearance of governance without fundamentally constraining or guiding AI development and deployment.

What Real Governance Would Look Like

Genuine AI governance would:

Affect Actual Decisions

A governance framework that approves 95% of proposals unchanged isn’t governing—it’s documenting. Real governance means some proposals get rejected, modified, or delayed based on legitimate concerns.

If your AI governance hasn’t blocked or significantly altered a project that leadership wanted, it’s not functioning.

Include Technical Enforcement

Policies that exist only as documents are advisory at best. Actual governance requires:

  • Technical controls preventing unauthorised model deployment
  • Monitoring systems that detect policy violations
  • Audit capabilities that track what AI systems actually do

If a developer can deploy an AI model without triggering any governance controls, governance is aspirational.
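To make that concrete, below is a minimal Python sketch of a pre-deployment gate, one way of implementing the first bullet above. The approvals.json registry, the model identifier and the expiry field are assumptions for illustration, not any particular tool's format; the point is simply that deployment fails closed when no current approval record exists.

```python
# Minimal sketch of a pre-deployment gate: a pipeline step refuses to ship a
# model unless a current governance approval record exists.
# The registry file, model id and expiry field are illustrative assumptions.
import json
import sys
from datetime import date, datetime

MODEL_ID = "credit-scoring-v4"        # hypothetical model being deployed
APPROVALS_FILE = "approvals.json"     # registry written by the review process

def deployment_allowed(model_id: str, approvals_path: str) -> bool:
    """Return True only if the model has an unexpired governance approval."""
    try:
        with open(approvals_path) as f:
            # e.g. {"credit-scoring-v4": {"expires": "2026-06-30", "ticket": "GOV-123"}}
            approvals = json.load(f)
    except FileNotFoundError:
        return False                  # no registry at all: fail closed
    record = approvals.get(model_id)
    if record is None:
        return False                  # never reviewed: fail closed
    expires = datetime.strptime(record["expires"], "%Y-%m-%d").date()
    return expires >= date.today()

if __name__ == "__main__":
    if not deployment_allowed(MODEL_ID, APPROVALS_FILE):
        print(f"Blocked: no current governance approval for {MODEL_ID}")
        sys.exit(1)                   # non-zero exit fails the pipeline stage
    print(f"Approved: deploying {MODEL_ID}")
```

The same pattern extends to the other bullets: monitoring means noticing models serving traffic that never passed this gate, and the audit trail is whatever this step records.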

Have Clear Escalation Paths

When frontline teams face unclear situations, what happens? In most organisations, they use their own judgment and hope nothing goes wrong.

Functioning governance provides clear escalation channels—and those channels actually resolve ambiguous cases rather than bouncing them back.

Connect to Consequences

What happens when governance policies are violated? In most organisations, nothing. No project has ever been cancelled, no career has ever suffered, no system has ever been rolled back.

Consequences don’t have to be severe, but they have to exist.

Why Governance Is Hard

The gap between documented governance and effective governance exists for understandable reasons:

Speed Pressure

AI projects move fast. Detailed governance review slows them down. Business leaders pushing for rapid AI deployment resent anything that creates friction.

Governance that significantly slows projects faces constant pressure to become faster—which usually means less thorough.

Technical Complexity

Most governance committee members don’t deeply understand AI systems. They can assess business risk but struggle to evaluate technical concerns like model bias, training data quality, or inference reliability.

This creates deference to technical teams, who have incentives to minimise perceived obstacles.

Ambiguous Accountability

When AI decisions cause harm, who’s responsible? The model developer? The product owner? The governance committee that approved it? The executive sponsor?

Distributed accountability often means no one feels truly responsible for ensuring governance works.

Moving Targets

AI capabilities evolve rapidly. Governance frameworks designed for chatbots don’t clearly apply to autonomous agents. Policies written for text generation don’t address multimodal systems.

Frameworks that try to be comprehensive become outdated quickly. Frameworks that stay general provide insufficient guidance.

Signs of Genuine Progress

Some organisations are making real progress. Signs include:

Governance rejection examples: They can point to specific projects that governance blocked or substantially changed.

Technical integration: Governance controls are built into deployment pipelines, not separate review processes.

Incident learning: When problems occur, they conduct genuine post-mortems that result in governance changes.

Regular policy updates: Governance frameworks evolve alongside the technology rather than on an annual review cycle.

Third-party scrutiny: External audits or red teams evaluate whether governance actually constrains behaviour.

What It Would Take

Effective AI governance requires:

Executive Commitment to Friction

Leadership must accept that good governance slows some things down. If governance is measured purely by how fast it approves projects, it will optimise for speed at the cost of rigour.

Technical Capability

Governance teams need members who understand AI systems technically—not just conceptually. This might mean hiring, training, or restructuring governance functions.

Metrics Beyond Activity

Measure governance by outcomes, not by process completion (a sketch of how these might be tallied follows the list):

  • Incidents prevented (hard to measure but important)
  • Issues caught before deployment
  • User complaints and harms after deployment
  • Regulatory compliance findings
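Assuming a hypothetical record format that your review tooling would actually have to capture, the tally might look something like this:

```python
# Sketch of outcome-oriented governance metrics. The record format and field
# names are illustrative assumptions, not a prescribed schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    project: str
    outcome: str                 # "approved", "approved_with_changes", "rejected"
    issues_caught: int           # concerns raised and fixed before deployment
    post_deploy_incidents: int   # complaints or harms after release

def outcome_metrics(records: list[ReviewRecord]) -> dict:
    """Summarise what governance changed, not how many meetings were held."""
    outcomes = Counter(r.outcome for r in records)
    return {
        "reviews": len(records),
        "approval_rate": outcomes["approved"] / len(records),
        "changed_or_rejected": outcomes["approved_with_changes"] + outcomes["rejected"],
        "issues_caught_pre_deploy": sum(r.issues_caught for r in records),
        "incidents_post_deploy": sum(r.post_deploy_incidents for r in records),
    }

sample = [
    ReviewRecord("support-chatbot", "approved", 0, 2),
    ReviewRecord("cv-screening", "approved_with_changes", 3, 0),
    ReviewRecord("loan-pricing", "rejected", 5, 0),
]
print(outcome_metrics(sample))
```

An approval rate close to 100% with nothing caught before deployment is itself a signal that the process is documenting rather than governing.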

Realistic Scope

Trying to govern all AI use cases identically doesn’t work. Some applications are high-risk and need intensive oversight. Others are low-risk and need lightweight review. Calibrating governance intensity to actual risk makes the system sustainable.
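A tiering rule doesn’t need to be elaborate. Here is a deliberately simple sketch; the attributes and cut-offs are illustrative and would in practice come from your own risk taxonomy and regulatory obligations.

```python
# Sketch of risk-calibrated review tiers. The attributes and thresholds are
# illustrative; real criteria would come from your own risk taxonomy and
# regulatory obligations.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_individuals: bool    # hiring, lending, medical, safety decisions
    automated_decision: bool     # acts without a human in the loop
    externally_facing: bool      # reaches customers or the public

def review_tier(uc: UseCase) -> str:
    """Map a use case to a governance intensity."""
    if uc.affects_individuals and uc.automated_decision:
        return "intensive"       # full committee review, technical audit, sign-off
    if uc.affects_individuals or uc.externally_facing:
        return "standard"        # templated assessment plus targeted technical checks
    return "lightweight"         # self-service checklist, sampled audits

print(review_tier(UseCase("resume screener", True, True, False)))           # intensive
print(review_tier(UseCase("internal code assistant", False, False, False))) # lightweight
```

The specific thresholds matter less than the fact that review intensity is set by an explicit rule rather than negotiated project by project.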

The Uncomfortable Reality

Most organisations have AI governance because they’re supposed to, not because they believe it protects them from harm.

The documents exist for:

  • Board reassurance
  • Regulatory checkbox compliance
  • PR responses if something goes wrong
  • Risk management appearances

These aren’t wrong reasons to have governance. But they don’t produce governance that actually governs.

The question isn’t whether you have AI governance documentation. It’s whether that documentation changes what actually happens.

For most organisations, the honest answer is: not much.

That’s fine as long as AI applications remain relatively low-stakes. But as AI takes on more consequential decisions—hiring, lending, medical recommendations, safety-critical automation—governance theatre becomes genuinely dangerous.

The best time to build real governance was before you deployed AI widely. The second-best time is now.