AI Governance Library

Generative AI Governance Framework (v1.0) – Connor Group

A practical governance framework designed to help organizations harness generative AI while managing emerging risks across strategy, data, operations, ethics, and accountability.

⚡ Quick Summary

The Generative AI Governance Framework by Connor Group is a practitioner-oriented blueprint for governing GenAI adoption at scale. Built with input from more than 1,000 experts across academia, industry, internal audit, and regulatory environments, it translates abstract AI risk discussions into concrete governance domains, risks, and control considerations. The framework is intentionally cross-functional, positioning GenAI not as a pure IT issue but as a strategic, organizational capability that affects decision-making, compliance, workforce dynamics, and trust.

Rather than focusing on model-level technical controls, the document emphasizes enterprise governance maturity: aligning GenAI use with corporate strategy and risk appetite, embedding controls into existing frameworks like COSO and ERM, and continuously adapting governance as GenAI capabilities evolve. It is designed to be scalable, adaptable, and usable both as a board-level discussion tool and an operational risk management reference.

🧩 What’s Covered

The framework is structured around five governance domains that together form a holistic GenAI control environment.

First, Strategic Alignment and Control Environment focuses on ensuring that GenAI initiatives support organizational objectives rather than evolve opportunistically. It addresses risks related to misalignment with strategy, unclear accountability, missing policies, and overreliance on AI-generated outputs. Control considerations include governance committees, GenAI inventories, ethics frameworks, KPIs, and scenario planning.
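To make the inventory control concrete: the framework calls for a GenAI inventory with clear accountability, but does not prescribe a schema. The sketch below is one minimal way to model an inventory entry; every field name (owner, risk tier, KPI list, approval flag) is an assumption, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIUseCase:
    """Illustrative GenAI inventory entry; the schema is an assumption, not prescribed by the framework."""
    name: str
    business_owner: str                       # accountable person, supporting the accountability controls
    purpose: str                              # ties the use case back to corporate strategy
    risk_tier: str                            # e.g. "low" / "medium" / "high"
    approved: bool = False                    # governance committee sign-off
    kpis: list[str] = field(default_factory=list)

inventory = [
    GenAIUseCase("contract-summarizer", "Legal Ops",
                 "Accelerate contract review", "high",
                 kpis=["review-time-saved"]),
]

# A simple oversight query: high-risk use cases still awaiting committee approval.
high_risk_unapproved = [u.name for u in inventory
                        if u.risk_tier == "high" and not u.approved]
```

Even a flat list like this gives audit and risk teams something to query; in practice the inventory would live in a GRC tool rather than application code.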

Second, Data and Compliance Management tackles data-centric and legal risks, including data leakage, model hallucinations, GenAI dependency, intellectual property exposure, and cross-border regulatory complexity. The framework emphasizes data governance, access controls, encryption, lineage tracking, continuous compliance monitoring, and GenAI-specific legal risk assessments.

Third, Operational and Technology Management covers the integration of GenAI into business processes and IT environments. It highlights risks around validation of outputs, vendor selection, system security, and change management. Recommended controls include SOPs, performance monitoring, vendor risk assessments, post-implementation reviews, and enhanced cybersecurity measures tailored to GenAI-specific threats such as social engineering.

Fourth, Human, Ethical, and Social Considerations addresses workforce impact, bias, reputational harm, and ESG implications. This includes training employees on GenAI limitations, managing job displacement concerns, bias detection mechanisms, human-in-the-loop policies for sensitive outputs, and environmental impact assessments.
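The human-in-the-loop policy mentioned above can be sketched as a routing rule. This is a hedged illustration only: the sensitive categories and the confidence threshold are invented for the example, and the framework does not specify how such a gate should be implemented.

```python
# Categories treated as sensitive are an illustrative assumption, not from the framework.
SENSITIVE_CATEGORIES = {"hr_decision", "financial_reporting", "customer_legal"}

def requires_human_review(category: str, model_confidence: float) -> bool:
    """Route a GenAI output to a human reviewer when it is sensitive or low-confidence.

    The 0.8 threshold is a placeholder an organization would calibrate itself.
    """
    return category in SENSITIVE_CATEGORIES or model_confidence < 0.8

# High-confidence marketing copy flows straight through,
# but HR-related outputs always get a human reviewer.
requires_human_review("marketing_copy", 0.95)   # False
requires_human_review("hr_decision", 0.99)      # True
```

The design point is that the gate keys on output *category*, not just model confidence, so sensitive decisions are never fully automated regardless of how confident the model appears.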

Finally, Transparency, Accountability, and Continuous Improvement focuses on traceability, explainability, and adaptability. It introduces documentation requirements for GenAI decision-making, audit trails, stakeholder reporting, monitoring of technological evolution, and innovation sandboxes to test emerging capabilities safely.
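The audit-trail requirement above could be satisfied in many ways; the framework does not define a log format. As a minimal sketch, here is one tamper-evident record per GenAI interaction, hashing the prompt and output rather than storing them verbatim (which also limits data-leakage exposure). All field names are assumptions for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(prompt: str, output: str, model: str, user: str) -> dict:
    """Build a tamper-evident log entry for one GenAI interaction (illustrative format)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "user": user,
        # Store hashes, not raw text, so the trail itself does not leak data.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_hash": hashlib.sha256(output.encode()).hexdigest(),
    }
    # Hash the whole entry so later edits to the record are detectable.
    entry["record_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry
```

Chaining each `record_hash` into the next entry would turn this into an append-only ledger, which is one common way organizations make audit trails verifiable.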

💡 Why It Matters

This framework matters because it operationalizes GenAI governance at a time when many organizations are deploying generative tools faster than their control environments can adapt. It reframes GenAI from a productivity tool into a governance challenge that touches strategy, accountability, compliance, and trust. For organizations preparing for regulatory scrutiny, internal audits, or board-level oversight, it provides a shared language between legal, risk, audit, IT, and business leaders. Importantly, it acknowledges uncertainty and continuous change as core features of GenAI, embedding adaptability directly into governance design.

❓ What’s Missing

The framework deliberately avoids deep technical guidance on model development, evaluation metrics, or security architecture, which may limit its usefulness for highly technical AI teams. It also does not explicitly map its controls to emerging AI-specific regulations such as the EU AI Act, leaving that translation work to the user. Practical examples, sector-specific case studies, or maturity benchmarks embedded directly in the document would further strengthen its applied value.

👥 Best For

Board members, audit committees, internal audit teams, risk and compliance leaders, and executives responsible for enterprise-wide GenAI adoption. It is especially useful for organizations seeking a governance starting point that integrates with existing ERM, internal control, and compliance frameworks rather than replacing them.

📄 Source Details

Generative AI Governance Framework v1.0, developed by Connor Group with academic and industry contributors, including experts in accounting, governance, internal audit, and technology risk.

📝 Thanks to

Scott A. Emett, Marc Eulerich, Jason Pikoos, David A. Wood, and the broader community of contributors, reviewers, and practitioners who shaped this framework.

About the author
Jakub Szarmach
