Context
Under the vision of “AI for All,” the Government of India aims to integrate scale with inclusion, sustainability, and resilience. To manage risks like deepfakes, algorithmic bias, and threats to national security, a drafting committee was constituted by the Ministry of Electronics and Information Technology (MeitY) in July 2025. This report establishes a strategic, coordinated, and consensus-driven framework to balance innovation with safety.
1. Foundational Philosophy: The Seven Sutras
India’s AI governance is grounded in seven core principles (Sutras), designed to be technology-agnostic and applicable across all sectors:
1. Trust is the Foundation: Innovation and adoption will stagnate without trust across the value chain.
2. People First: Prioritizes human-centric design, human oversight, and empowerment.
3. Innovation over Restraint: Responsible innovation is prioritized over cautionary restraint.
4. Fairness & Equity: Focuses on inclusive development and avoiding discrimination, especially against marginalized groups.
5. Accountability: Clear allocation of responsibility based on function and risk.
6. Understandable by Design: Systems must provide disclosures and explanations that users and regulators can comprehend.
7. Safety, Resilience & Sustainability: Ensuring robust systems that withstand shocks and remain environmentally responsible.
2. The Six Pillars of Governance
The report organizes its recommendations into six pillars spanning three key domains: Enablement, Regulation, and Oversight.
- Infrastructure (Enablement): Leveraging Digital Public Infrastructure (DPI) to make AI scalable and affordable. Key highlights include providing over 38,231 GPUs to startups and researchers and launching AIKosh with 1,500+ datasets.
  - GPU (Graphics Processing Unit): a co-processor originally designed to accelerate graphics and image rendering, now widely used for the heavy matrix operations at the core of Machine Learning and Deep Learning.
- Capacity Building (Enablement): Expanding programs like India AI FutureSkills to increase AI literacy in Tier-2 and Tier-3 cities and training government officials in AI procurement.
- Policy & Regulation: Adopting agile and flexible frameworks. The committee currently assesses that a separate AI law is not needed; existing laws (the IT Act, the Digital Personal Data Protection (DPDP) Act, and the Bharatiya Nyaya Sanhita (BNS)) should instead be updated with targeted amendments.
- Risk Mitigation: Developing an India-specific risk assessment framework focused on real-world evidence of harm. It categorizes risks into malicious use, bias, transparency failures, systemic risks, loss of control, and national security.
- Accountability: Implementing a graded liability system proportionate to the entity’s function (developer, deployer, user) and the risk of harm.
- Institutions (Oversight): Adopting a “whole-of-government” approach where all ministries and regulators collaborate.
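The GPU note above cites heavy matrix operations as the workload GPUs accelerate. As a minimal, illustrative sketch (plain Python, not from the report), this is the dense matrix multiplication that a GPU parallelizes across thousands of cores, shown here as the naive CPU version so the arithmetic is visible:

```python
# Illustrative only: the kind of dense matrix multiplication that
# dominates ML/DL workloads. A GPU runs these inner loops in parallel
# across thousands of cores; the naive version below does them serially.
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p)."""
    m, n, p = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "inner dimensions must match"
    out = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                out[i][j] += a[i][k] * b[k][j]
    return out

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(a, b))  # → [[19.0, 22.0], [43.0, 50.0]]
```

Even this tiny 2x2 case performs eight multiply-accumulate steps; production models repeat this at matrix sizes in the thousands, which is why dedicated accelerator capacity is treated as core infrastructure.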
3. Institutional Framework
- AI Governance Group (AIGG): A permanent inter-agency body responsible for overall policy development and coordination.
- Technology & Policy Expert Committee (TPEC): A specialized group (scientists, legal experts, officials) that briefs the AIGG on emerging risks and global developments.
- AI Safety Institute (AISI): The technical anchor for research, safety testing, standards development, and international collaboration.
- Sectoral Regulators: Bodies like the RBI, SEBI, and TRAI continue to exercise enforcement powers and issue domain-specific rules.
4. Action Plan & Timeline
| Timeframe | Key Priorities |
|---|---|
| Short-term | Establish AIGG and TPEC; develop India-specific risk frameworks; publish a master circular of applicable regulations. |
| Medium-term | Operationalize the National AI Incidents Database; pilot regulatory sandboxes; amend laws to plug gaps. |
| Long-term | Continuously review frameworks; adopt new laws for emerging risks; lead global diplomatic engagements (e.g., AI Impact Summit 2026). |
Conclusion
India’s approach is pragmatically rooted in a techno-legal framework supported by voluntary measures and Digital Public Infrastructure.