Key Highlights
- AI governance must be embedded into organizational design, involving cross-functional teams to ensure accountability and oversight from the outset.
- Effective governance extends beyond models to include data architecture, emphasizing the importance of data quality, lineage, and transparency.
- Explainability is critical, especially in regulated industries, to ensure AI decisions are understandable, defensible, and trustworthy.
- Generative AI introduces new risks due to less predictable outputs, requiring clear boundaries and monitoring mechanisms.
- A risk-based approach to governance allows organizations to tailor controls according to the potential impact of AI applications, balancing innovation with safety.
In early 2024, one of the nation’s largest insurers faced mounting scrutiny over how artificial intelligence (AI) was being used in claims processing, with lawsuits alleging that algorithm-driven decisions may have influenced outcomes in ways that were difficult to fully explain or audit. Whether or not those claims ultimately hold up in court, the signal to business leaders is clear: When AI systems move from pilots into production, they don’t just scale efficiency — they scale risk.
As organizations navigate this shift, leaders are being forced to rethink not just how they adopt AI, but how they govern it. According to Bill Devine, managing partner and founder of Naitiv and former senior vice president at Travelers Business Insurance, many companies are moving faster on deployment than on accountability.
This is the moment organizations are now entering. AI is no longer a side experiment tucked inside innovation teams. It is increasingly embedded in pricing models, underwriting decisions, customer interactions and operational workflows. And as that shift happens, the conversation is rapidly moving beyond asking what else AI can do to a more urgent and complex question: How do we govern it?
From experimentation to accountability
For years, many organizations approached AI as a contained initiative by testing models, exploring use cases and evaluating potential ROI. But as adoption accelerates, those boundaries are disappearing. AI systems are now influencing real decisions with real consequences.
That shift fundamentally changes the stakes. What was once a technical exploration is now a business-critical capability that carries legal, regulatory and reputational implications. This is where many organizations are starting to feel exposed. The issue is not a lack of technology. It is a lack of clarity around ownership, oversight and accountability.
“The stakes have changed. These systems are no longer experimental — they’re embedded in real business processes and decisions,” Devine says.
AI governance, in this context, is not a compliance exercise layered on after deployment. It is a leadership discipline that must be built into how organizations design, deploy and manage AI from the outset.
Why governance is now a leadership issue
AI governance is a framework of policies, procedures and ethical standards designed to oversee the development, deployment and management of AI systems. It ensures AI operates responsibly, transparently and fairly, mitigating risks such as bias, safety issues and legal non-compliance and ensuring human accountability throughout the AI life cycle.
One of the most common missteps organizations make is treating AI governance as an IT responsibility. In reality, the risks associated with AI, such as bias, explainability, decision integrity and regulatory exposure, cut across the entire enterprise. That makes AI governance a leadership issue.
Devine emphasizes, “This isn’t just a technical issue. It’s a business risk and a strategic capability.”
Effective governance requires alignment across technology, legal, risk, compliance and business teams. It requires leaders to define who owns AI-driven decisions, how those decisions are validated, and what guardrails are in place to ensure consistency and accountability.
Without that alignment, organizations face a fragmented landscape in which models are deployed without a clear understanding of how they influence outcomes or who is ultimately responsible for them. The result is not just inefficiency; it is exposure.
A new category of risk
Traditional technology risk has typically centered on system uptime, cybersecurity and infrastructure reliability. AI introduces a different class of risk, one tied to decision-making itself.
“AI risk isn’t about whether the system is running. It’s about the quality of the decisions it’s making, and whether those decisions can be explained,” commented Devine.
When an AI model recommends a pricing adjustment, flags a claim, or influences a customer interaction, the risk is not whether the system is running. It is whether the decision is accurate, fair and defensible.
This introduces new questions for leaders:
- Can we explain how this decision was made?
- Can we trace it back to the underlying data?
- Can we defend it to regulators, to customers, or in a court of law?
These are not theoretical concerns. As regulatory scrutiny increases and litigation emerges, organizations are being asked to demonstrate not just that their systems work, but that they work responsibly.
Data architecture is governance
One of the more subtle — yet critical — shifts happening in AI is the expansion of what governance actually includes. It is no longer limited to the model itself.
AI outputs are only as reliable as the data that feeds them. That means governance must extend across the entire data pipeline: from how data is sourced and integrated to how it is structured, accessed and maintained.
Devine reminds us, “Governance isn’t just about the model … it’s about the entire data pipeline that feeds it.”
In organizations with fragmented systems, inconsistent data definitions or limited visibility into data lineage, governance becomes nearly impossible. Models built on unstable foundations will produce inconsistent or unreliable outcomes, regardless of how sophisticated the algorithms may be.
In this sense, data architecture is not just an infrastructure concern. It is a governance issue, and increasingly, a competitive differentiator.
The explainability mandate
As AI systems take on a larger role in decision-making, explainability is becoming non-negotiable.
In regulated industries such as insurance, financial services and healthcare, the ability to explain how a decision was made is not just a best practice. It is often a requirement. But even outside of regulatory pressure, explainability is essential for trust.
If leaders cannot clearly articulate how an AI model arrives at its conclusions, they cannot confidently stand behind those decisions. And if customers or stakeholders cannot understand those decisions, trust erodes quickly. This is why many organizations are beginning to adopt a simple but powerful standard: If a model cannot be reasonably explained, it should not be used in high-impact decision-making.
Generative AI raises the stakes
The rapid rise of generative AI introduces an additional layer of complexity. In many cases, organizations are deploying generative AI faster than they can fully understand or control its outputs.
Unlike traditional models, generative systems can produce outputs that are less predictable and more difficult to trace. While they offer significant potential in areas like content generation, customer interaction and knowledge work, they also increase the risk of inconsistency, inaccuracy and unintended outcomes. This makes governance even more critical.
Organizations must establish clear boundaries around where and how generative AI is used, along with mechanisms to monitor outputs and intervene when necessary.
The leadership playbook: Where to start
For leaders looking to move from awareness to action, Devine tells us that the starting point is not technology — it is structure. Three priorities stand out:
1. Define ownership
Establish clear accountability for AI systems and the decisions they influence. This includes both technical ownership and business accountability.
2. Set risk tolerance
Not all AI use cases carry the same level of risk. Leaders must define where stricter controls are required and where more flexibility is acceptable.
3. Build foundational capabilities
This includes data governance, model monitoring and cross-functional oversight mechanisms that ensure AI systems are operating as intended.
Importantly, this work cannot happen in isolation. Many organizations are finding value in forming cross-functional governance councils that bring together stakeholders from across the business to align on standards, policies and decision-making frameworks.
AI amplifies what already exists
Perhaps the most important, and often overlooked, insight is that AI does not fix broken processes. Rather, it amplifies them.
Devine stresses, “AI doesn’t replace workflows. It layers on top of them. If the fundamentals aren’t strong, it will amplify the gaps.”
Organizations with clear workflows, strong data foundations and well-defined decision-making structures will see AI enhance their capabilities. Those without those fundamentals will see existing gaps become more pronounced.
In this way, AI acts as a forcing function. It exposes inconsistencies, highlights inefficiencies and challenges organizations to confront the realities of how decisions are actually made.
Balancing governance and innovation
As organizations build governance frameworks, there is an inherent tension to manage. Too little governance increases risk. Too much governance can slow innovation and limit the value AI can deliver.
The goal is not to apply a uniform set of controls across all use cases. It is to adopt a risk-based approach that aligns the level of oversight with the potential impact of each application. Low-risk use cases may require lighter controls. High-impact decisions, particularly those affecting customers, pricing or compliance, demand more rigorous governance.
The path forward
The organizations that will succeed in the next phase of AI adoption will not be the ones with the most advanced models. They will be the ones that treat AI as a managed, governed capability — integrated into the fabric of the business. That requires a shift in mindset.
AI is not just a technology investment. It is a leadership challenge, one that forces organizations to rethink how decisions are made, how data is managed and how accountability is defined.
And in many cases, it leads to a more fundamental question: Do we truly understand how decisions are made in our organization today?
Bill Devine leaves us with a final reminder: “Technology alone isn’t enough. It’s how you manage it, govern it and align it to the business that ultimately determines success.”
For leaders willing to confront that question, AI governance becomes more than a risk mitigation exercise. It becomes a pathway to building more transparent, consistent and resilient organizations in the age of intelligent systems.
About the Author

Jess Mand
Contributor
Jess Mand is an award-winning communications strategist and founder of INDEMAND Communications, where she helps organizations translate complex ideas into clear, compelling narratives that drive connection and action. She partners with Fortune 500 companies, growth-stage firms, and mission-driven organizations to design communication strategies, content programs, and experiential campaigns that engage employees and elevate leadership messages. Known for her creative storytelling and pragmatic approach, Jess brings a rare blend of strategic insight and human-centered perspective to every project she leads.