Sovereign Confusion: Why India is Regulating AI Upside Down

India is not deciding whether to regulate Artificial Intelligence. That ship has sailed. Instead, we are already regulating AI—through a series of backdoors and stop-gap rules that risk locking the country into a system that is simultaneously overbearing for innovators and ineffective for citizens.
The real danger to India’s tech future isn’t the lack of a law; it is the “sovereign confusion” being baked into policy. We are attempting to regulate 21st-century intelligence with a 19th-century mentality built for static databases. Nations do not lose technological races because they lack talent. They lose them because they confuse control with clarity. India is dangerously close to making exactly that mistake.
The Input-Output Fallacy: Why DPDP Fails AI
The Government’s primary defence is the Digital Personal Data Protection (DPDP) Act. But using the DPDP to govern AI is like using a parking permit to regulate a supersonic jet. The DPDP focuses on the input—the data. But AI harms occur at the output—the decision.
We have strict rules for how a name is stored—and none for how it destroys a life. When an AI system denies a loan to a woman in rural Bihar based on an opaque “risk profile,” or filters a candidate out of a job pool due to historical bias embedded in its training, the DPDP is silent. It can tell you who owns the data, but not who is responsible for the judgment.
By obsessing over data privacy while ignoring algorithmic agency, India is regulating the ingredients while the kitchen is on fire. We protect the “bit” but ignore the “bite.” Citizens are assured that their data is safe, even as the decisions derived from that data remain beyond legal scrutiny. This is not just a governance gap—it is a failure that leaves individuals exposed to unaccountable machine logic.
The 3-Hour Paradox: Outsourcing Judgment
The IT Rules Amendment 2026, with its 3-hour takedown mandate for Synthetically Generated Information (SGI), is the clearest example of state-induced algorithmic overreach. On paper, it targets deepfakes. In practice, it institutionalises automated censorship.
In trying to fight deepfakes, the state is outsourcing judgment to machines it does not control, and punishing platforms for hesitation. No human can reliably verify a complex, multi-modal deepfake within 180 minutes. The timeline is not a safeguard; it is a hurdle. To comply, platforms are forced to deploy “censor-bots” designed to err on the side of removal.
The consequences are predictable. Fear of the fake begins to eliminate the freedom of the real. Satire, dissent, and citizen journalism are filtered out before human review, not because they are false, but because they are risky. In effect, the State is mandating a system where liability determines truth. The outcome is an internet shaped less by facts than by the cost of being wrong.
The Core Doctrine: Agentic Accountability
If India wants to lead the global AI discourse, it must stop treating AI as a content problem and start treating it as an agency problem. We need a new doctrine: Agentic Accountability.
In simple terms: if an AI makes a decision, someone must stand behind it. Today, accountability dissolves into a loop of deflection. Developers blame training data, platforms blame user prompts, and users are left without recourse. Agentic Accountability reverses this logic. If an entity deploys an AI system to perform a human function—whether in finance, hiring, healthcare, or law—it must bear responsibility for the outcome of that system.
This is the “Third Way” India could offer globally. It avoids suffocating innovation with pre-emptive bans while ensuring that outcomes remain legally grounded. It moves the regulatory focus from process to consequence. It allows innovation to continue, but removes the ability to hide behind technical opacity when harm occurs.
The Enforcement Reality: An Institutional Void
A doctrine, however elegant, is only as strong as its enforcement. India must confront a difficult question: does it possess the institutional capacity to audit, test, and challenge AI systems at scale?
Current regulators were built for telecommunications and broadcasting. They are not equipped to interrogate neural networks, audit datasets, or detect latent bias in complex models. Passing an AI law without building this capacity risks creating a “paper tiger”—a framework that appears strong but is weak in practice.
Such a system disproportionately burdens startups, which lack compliance infrastructure, while allowing large technology firms to navigate or obscure accountability through complexity. Accountability without enforcement capacity is not governance—it is theatre.
The Sovereignty Mirage: Nationalising Dependence
India’s push toward “Sovereign AI” through the IndiaAI Mission is praiseworthy, but it also obscures a deep vulnerability. The country risks nationalising the cost of AI while privatising its dependence.
Public funds are being spent to subsidise compute and expand access to GPUs, yet the underlying hardware ecosystem remains globally dependent. Domestic supply chains for critical components—from rare earth elements to advanced semiconductors—are still emerging. Sovereignty, in this context, cannot be legislated; it must be built.
Without linking mineral policy, manufacturing capability, and AI deployment into a unified framework, India risks constructing a digital ecosystem reliant on external foundations. A software-driven vision of sovereignty, unsupported by hardware independence, is inherently unstable. We are, in effect, building a digital skyscraper on rented infrastructure.
The Power Position: Exporting Accountability
The global AI race is no longer defined solely by who builds the most advanced systems. It is increasingly defined by who determines accountability when those systems fail. This is India’s strategic opening.
The United States has adopted a market-led approach, placing trust in private sector leadership. The European Union has pursued a risk-based model, emphasising precaution and control. India has the opportunity to define a third path—one grounded in responsibility and outcome-based regulation.
By formalising and exporting Agentic Accountability, India can influence global standards, particularly across emerging economies facing similar challenges. This is not merely about domestic governance; it is about framing the international rules of the AI era. The objective should not be to replicate Western systems with localised adaptations, but to construct a framework rooted in Indian realities. Without such a move, decision-making in India will continue to reflect assumptions embedded in foreign-trained models. Regulatory leadership, rather than technological mimicry, is where long-term influence lies.
Notifications vs Rights: A Foundational Choice
The conflict between MeitY and the Parliamentary Standing Committee highlights a core dilemma: should AI be governed by executive decree or by legislative rights?
Flexible rule-making allows speed, but it also creates uncertainty. Policies that can be altered without parliamentary scrutiny undermine both investor confidence and citizen protections. Startups cannot build on shifting regulatory ground, and individuals cannot assert rights that lack permanence.
A notification-driven system centralises power while diffusing accountability. In contrast, a statutory AI framework establishes boundaries that cannot be easily overridden. The debate, therefore, is not procedural—it is foundational.
India must decide what it seeks to regulate: data, decisions, or power. Currently, it attempts to control all three using tools designed for one. The demand for an AI Act is not about expanding regulation, but about stabilising it—creating rules that endure beyond administrative discretion.
Conclusion: The Strategic Miscalculation
India needs a clear and responsible approach to AI, not scattered and reactive rules. This technology is too powerful to be managed without a strong and consistent framework that ensures accountability.
If things continue as they are, India won’t fail because of a lack of talent; it will fail because it misunderstands what needs to be regulated. Confusing control with clarity would be a serious strategic mistake.
Once these policies are built into laws and systems, they will be hard to fix. India is on track to become a digital power, but the real danger is building a system that limits innovation while failing to protect people. The question is no longer whether to regulate AI, but whether India can do it clearly and responsibly—before confusion defines its future.
The author is a theoretical physicist at the University of North Carolina at Chapel Hill, US, and author of the forthcoming book The Last Equation Before Silence; views are personal