What’s Covered
The paper begins by grounding the concept: self-regulation must be voluntary, legally non-binding, flexible, and built through multi-stakeholder participation. Importantly, it distinguishes self-regulation from co-regulation and from traditional command-and-control models.
Drawing on interviews with domain experts across government, industry, and civil society, the paper captures stakeholder sentiments:
- Government sees self-regulation as a pro-innovation tool, especially for a developing economy like India.
- Industry prefers a two-tiered approach—voluntary commitments first, binding rules for high-risk areas if necessary.
- Civil society remains skeptical, warning about the lack of true accountability.
Mohanty outlines several core challenges:
- Whether self-regulation should cover high-risk use cases.
- How to localize global AI principles to India’s diverse socio-cultural landscape.
- Who the initial targets of voluntary frameworks should be (developers, deployers, or users).
- How to address issues like training data and model inputs.
The paper closes with five key policy recommendations:
- Risk-based classification of AI systems before applying self-regulation.
- Active government involvement to convene, endorse, and facilitate self-regulatory initiatives.
- Market incentives—linking voluntary frameworks to procurement, grants, and sandboxes.
- Alternative accountability mechanisms like transparency reports, self-certifications, and peer monitoring.
- Institutional support—particularly through bodies like the proposed AI Safety Institute for India and a Technical Advisory Council.
Overall, the study calls for a phased, pragmatic approach—starting with voluntary commitments for low-risk applications, gradually expanding to more sectors and higher-risk cases based on real-world evidence.
💡 Why It Matters
Self-regulation is being championed globally as a way to maintain innovation while responding to growing public concerns about AI harms. India’s stance, shaped by its economic priorities and cultural diversity, could become a blueprint for other countries seeking alternatives to heavy-handed regulation.
The report highlights a crucial insight often missing from global conversations: voluntary commitments must be incentivized, technically feasible, and locally adapted—or they risk becoming empty checkboxes.
It also warns policymakers: without meaningful accountability, voluntary measures could erode public trust rather than build it.
What’s Missing
While the report systematically addresses the design of self-regulation frameworks, it only lightly touches on how enforcement gaps will be monitored in practice, especially in fragmented sectors like social media and healthcare. The discussion of civil society's role, beyond noting its skepticism, could also have been more detailed, offering concrete pathways for NGO participation in oversight or redress.
Another open question: How will India ensure that its approach remains aligned with international trade and AI governance norms as cross-border AI flows accelerate?
Best For
- Policymakers designing AI regulatory strategies.
- Industry leaders considering early voluntary commitments.
- Researchers tracking AI governance models across the Global Majority.
- Civil society groups preparing engagement strategies around AI accountability.
Source Details
- Title: Making AI Self-Regulation Work
- Author: Amlan Mohanty
- Published by: Centre for Responsible AI (CeRAI), IIT Madras (2025)
- Credentials: Associate Research Fellow at CeRAI, Non-Resident Fellow at Carnegie India, former Google India Public Policy Lead.