Artificial Intelligence (AI) is fast becoming the defining technology of our era, shaping everything from retail and finance to warfare and surveillance. Unlike earlier general-purpose technologies, AI is truly pathbreaking: it holds the power to transform the foundational structures of society, from how we work and learn to how we govern and secure ourselves. Much like nuclear technology and climate change, it transcends national borders and individual interests. Even as we celebrate its vast potential, we must heed Geoffrey Hinton, dubbed the “Godfather of Deep Learning” and a 2024 Nobel laureate, who warns of the existential risks of rapidly advancing AI.
A few technocrats and major corporations now steer AI’s development, potentially reshaping economies and societies. To ensure AI benefits humanity rather than endangering it, we must adopt a global governance framework akin to the Nuclear Non-Proliferation Treaty (NPT) or the Intergovernmental Panel on Climate Change (IPCC).
The peril of a few holding the future
Economists Daron Acemoglu and Simon Johnson warn that a small elite controlling technological development can lock society into pathways that primarily benefit the privileged. This echoes historical moments when narrow interests drove massive decisions: Ferdinand de Lesseps, fresh from his success at Suez, undertook the Panama Canal project only to misjudge its engineering challenges and trigger disaster. By analogy, big-tech leaders may harbour grand AI visions yet overlook ethical hazards, job displacement, and societal well-being. When vision and capital become so concentrated, even a minor misstep at the top can ripple across the globe.
Moreover, this dynamic reflects what Cornell economist and India’s former chief economic adviser Kaushik Basu calls the “Samaritan’s Curse”: well-intentioned, even philanthropic, AI solutions, such as free educational software from tech giants, could still undermine local autonomy and accountability.
In this scenario, communities become dependent on external providers who extract data and set the terms for entire regions. Over time, local institutions weaken, and the global majority is left beholden to decisions made in distant tech hubs. Likewise, an overreliance on purely “rational” frameworks, common in game theory, can exacerbate these inequalities. If tech elites optimise only for short-term interests or market dominance, they risk creating suboptimal outcomes that harm societal well-being, much like a Prisoner’s Dilemma, in which individually rational choices leave everyone worse off.
Lessons from nuclear non-proliferation
The parallels to the nuclear age are not accidental. In the early Cold War, world leaders recognised that nuclear weapons posed an existential threat, motivating treaties such as the NPT. Although imperfect, the NPT regime has provided an institutional channel for monitoring, compliance, and the promotion of peaceful nuclear uses.
AI, too, carries existential dimensions, whether from fully autonomous weapons, large-scale manipulation through algorithmic propaganda, or poorly controlled superintelligence. If major actors adopt an “arms race” mentality, each will rush to deploy ever more powerful AI systems without adequate safety checks. In this Hawk-versus-Dove dynamic, those who prefer cautious regulation (the Doves) risk being overtaken by more aggressive competitors (the Hawks). History shows that unrestrained competition without global oversight can lead to destabilising deployments, leaving the rest of humanity at risk.
A climate governance model
Meanwhile, the IPCC provides another model. By assembling scientific findings from around the world and presenting them to policymakers in plain language, the IPCC has anchored climate negotiations for decades.
Although climate action remains insufficient overall, the IPCC demonstrates the value of an inclusive, science-driven body that informs international treaties and inspires national policy.
AI would benefit from a similar body: an “IPCC for AI.” Such an institution could pool data, consolidate research, and highlight both the benefits and dangers of emerging AI applications. It would not, by itself, end competition or guarantee ethically deployed AI. However, it could foster a shared understanding of risks, develop transparent benchmarks for testing AI systems, and guide negotiations on international standards.
The need for equitable participation
If we want to avoid replicating the inequities of past global governance efforts, we must ensure that developing nations and historically marginalised communities have a real seat at the table. In nuclear and climate frameworks, wealthier nations often dominate the agenda, while poorer nations struggle to make their voices heard. When it comes to AI, the stakes are just as high, if not higher, because the technology permeates jobs, education, healthcare, and human rights.
Without equitable participation, we risk a scenario in which AI is developed primarily by and for affluent societies, leaving much of the world behind. Local languages, cultural contexts, and social needs might remain unaddressed. Worse, data could be harvested from vulnerable communities without meaningful consent, reinforcing a cycle of dependency and exploitation. Moreover, if large companies or governments leverage AI to automate industries, job losses may disproportionately affect regions that lack the resources to adapt or retrain.
Towards a global architecture
A robust AI governance framework would combine elements of the NPT (agreements limiting destructive uses and encouraging responsible technology sharing) with the IPCC’s approach (transparent, collaborative scientific assessment). Governments, industry, civil society, and academia should unite in a formal institution that:
1. Sets global standards for data governance, algorithmic transparency, and ethical AI usage.
2. Establishes monitoring and verification protocols to identify dangerous or manipulative AI deployments.
3. Facilitates capacity-building so that lower-income countries can develop or adapt AI ethically, rather than become passive recipients of technology shaped elsewhere.
4. Promotes open research on AI safety, interpretability, and fairness, ensuring that breakthroughs are not locked behind corporate or governmental secrecy.
Conclusion
AI’s immense promise should not blind us to its perils; the technology is already transforming the planet’s strategic and economic landscape. To avoid a future shaped solely by a powerful few, we must proactively craft a global governance system.
Such an institution can harness the best of AI innovation while safeguarding against misuse, exploitation, job losses, and catastrophic misjudgments.
The lesson from nuclear arms and climate policy is clear: when technology poses risks on a planetary scale, neither laissez-faire market forces nor unilateral national approaches will suffice. Instead, a cooperative international framework, informed by science and rooted in fairness, is our best hope.
(The writer is an officer of the Indian Railway Service of Mechanical Engineers and is presently pursuing an MPA at the Lee Kuan Yew School of Public Policy, National University of Singapore. Views expressed are personal.)