AI Governance Library

Generative AI Governance Framework (Connor Group, v1.0)

This framework aims to help organizations harness the power of generative artificial intelligence (GenAI) while appropriately managing its risks.

⚡ Quick Summary

The Generative AI Governance Framework by Connor Group is a practitioner-driven, control-oriented model designed to help organizations adopt GenAI safely and strategically. Built with input from over 1,000 experts, it bridges internal audit, risk management, and operational governance. The framework is structured around five domains that cover strategy, data, operations, ethics, and accountability. It combines high-level governance principles with detailed risk-control mappings, making it both boardroom-friendly and implementation-ready. What stands out is its strong alignment with traditional governance frameworks (like COSO), positioning GenAI not as a separate challenge, but as an extension of enterprise risk management. It is especially useful for organizations at early or mid-maturity stages looking for a structured entry point into GenAI governance.

🧩 What’s Covered

The framework is built around five core governance domains, each tied to specific risks and control mechanisms:

Strategic alignment: ensuring GenAI initiatives match organizational goals and risk appetite
Data and compliance management: focusing on data integrity, privacy, and regulatory obligations
Operational and technology management: addressing deployment, validation, and IT security
Human, ethical, and social considerations: including bias mitigation, workforce impact, and ESG concerns
Transparency and continuous improvement: ensuring traceability and adaptability over time

A key strength is the detailed mapping of risks to control considerations. For example, the framework includes controls such as GenAI inventories, governance committees, vendor risk assessments, bias detection frameworks, and human-in-the-loop safeguards. It also introduces a four-step implementation model: defining objectives, scoping risks, conducting a structured risk assessment, and executing a governance plan. The risk assessment itself follows a five-stage lifecycle—from planning and data collection to prioritization and reporting—mirroring traditional audit methodologies.

The document also emphasizes integration with existing frameworks like COSO ERM and COBIT, rather than creating a standalone governance silo. It further includes practical elements such as training programs, incident response planning, and continuous monitoring mechanisms. Importantly, it acknowledges different GenAI deployment contexts—from direct tool usage (e.g., ChatGPT) to embedded AI systems and internal “Company GPT” solutions—making it adaptable across use cases.

💡 Why It Matters

This framework matters because it translates GenAI governance from abstract principles into operational controls. Many organizations understand AI risks conceptually, but struggle to implement governance in practice—this document fills that gap. It also aligns AI governance with internal audit and risk functions, which are often overlooked in AI discussions dominated by technical or legal perspectives.

❓ What’s Missing

The framework is heavily control-focused, which makes it strong for auditors but less intuitive for product teams or AI developers. It lacks concrete real-world case studies or implementation examples that would help organizations operationalize the controls. There is also limited alignment with emerging regulatory frameworks like the EU AI Act, and no clear mapping to risk classification systems (e.g., high-risk AI).

👥 Best For

Internal auditors, risk managers, and compliance teams
Organizations building initial GenAI governance structures
Enterprises aligning AI governance with COSO or ERM models
Boards and executives seeking structured oversight tools

📄 Source Details

Connor Group – Generative AI Governance Framework v1.0 (2025), developed with contributions from over 1,000 experts across academia, industry, and regulatory fields.

📝 Thanks to

Scott A. Emett, Marc Eulerich, Jason Pikoos, David A. Wood, and the broader contributor community behind the framework.

About the author
Jakub Szarmach
