Three tests for India’s AI future

Economic historians have long shown that technologies spread not simply because they exist, but because societies are prepared for them through skills, infrastructure, institutions, and systems that make technology inclusive. As global leaders convene at the India AI Impact Summit, this is the lens through which India's AI moment must be viewed: human capital readiness, access to enabling infrastructure, and institutional capacity. The recent Economic Survey strikes a note of caution, warning that rapid AI deployment could outpace the economy's structural ability to reabsorb labour. With over seven per cent of India's GDP tied to the IT/IT-enabled services sector, the stakes are significant. Unlike previous industrial transformations that automated manual labour, the job displacement risks associated with AI are more complex to analyse.
On the one hand, we already know that AI systems are increasingly capable of writing and improving their own code, compressing the ladder of skills that once trained human programmers. For a country that relies on human-led services for its export strength, this raises hard questions about the durability of India's talent advantage. But protectionism through delay is a dead end. Slowing AI diffusion to save jobs would inadvertently subsidise domestic inefficiency, leaving industries vulnerable to more agile global competitors who route around, or simply operate beyond, such barriers. An instructive parallel comes from Britain's attempt to preserve its competitive edge in textiles by banning the export of advanced machinery. Although these restrictions targeted exports, not imports, they were rooted in the same logic of limiting the spread of technology to protect domestic industry. British machines were nonetheless smuggled out and used to establish mills across continental Europe, and Britain could not stop the rise of competing manufacturing hubs. The way forward is to invest in the capacity to help workers through the transition, rather than trying to shield them from AI. This entails imagining a meaningful social safety net suited to the AI era and encouraging companies to invest in innovation and R&D.
Debates on AI also tend to pose a false choice between centralised computing exemplified by cloud services and decentralised on-device computing. In reality, developing economies will likely operate somewhere in between, balancing brute force with resilience and access.
There is a financial aspect to this discussion too. Cloud computing, once assumed to be infinitely scalable, now faces profitability pressures as AI workloads become vastly more expensive. Orchestrating AI workloads between the cloud and last-mile devices such as smartphones is a feasible alternative in many use cases. This is also where sector-specific deployment is crucial and AI skills can be viewed as 'foundational' infrastructure. In the past, organisations relied on IT departments to choose software, troubleshoot problems, and train staff on tools. AI is different. Hospitals, banks, courts, and factories need in-house expertise that understands both the technology and the domain in which it operates. Building such workforces may be as important as building data centres.
There is yet another facet of AI diffusion that must concern us all. Regulatory and policy institutions lacking the technical capacity to supervise new technologies tend to rely on blunt measures that are often antithetical to progress.
We have seen this cycle play out in the regulation of encrypted messaging and digital assets. In the face of encrypted communication, legitimate concerns over illicit activity have sometimes led to demands to bypass encryption altogether. This is characteristic of a policy environment where the absence of precise investigative tools leads to “all-or-nothing” approaches. Similarly, in the early days of cryptocurrency, regulatory responses often sought to isolate or restrict the technology due to limited visibility into its flows. In both cases, regulatory insecurity stemmed from gaps in supervisory capacity rather than a desire for censorship. When regulators cannot easily distinguish between legitimate and harmful activity, the default response tends towards precautionary restrictions, with unintended consequences such as the offshoring of digital asset entrepreneurs and innovation from India to the UAE and Singapore.
With AI, the stakes are even higher. If we do not build institutional capacity to understand and respond to AI-driven threats with technical nuance, we risk a future where online trust is managed through restrictions that constrain innovation, speech, creativity, and the future of work itself. Regulators, courts, and even the executive branch must bridge technical capacity gaps so they are not forced to choose between vulnerability and regressive rulemaking.
It is in this overarching context that the AI Knowledge Consortium, which consists of 16 research-led institutions, and The Pioneer are hosting a panel discussion on February 19 that brings together senior tech and policy leaders for a conversation on how AI is reshaping economies, institutions, and societies. The conversation will examine why some economies move from experimentation to widespread use of new technologies while others do not.
The authors are technology policy experts at Koan Advisory, New Delhi. The firm serves as the secretariat for the AI Knowledge Consortium; views are personal.