Summit in Delhi puts human-centric AI on the global agenda

Why did the AI Impact Summit in Delhi draw so much attention in February 2026? Part of the answer is scale. The room held more than researchers and policy staff. It also brought in AI company leaders, heads of Government, and sector groups that live with AI's side effects. Compared with earlier global AI conferences, this one felt closer to execution. The summit framed AI as infrastructure, not a lab project.
In his address, Prime Minister Narendra Modi pushed a clear direction: AI should be accountable, safe, democratic, inclusive, and human-centric, anchored in the MANAV vision. Earlier global meetings often split the discussion. Tech teams spoke about capability, while Governments argued about risk. Delhi pulled those threads together, and it showed in participation.
More top executives attended, more ministers and senior officials joined sessions, and more industry bodies turned up from health, finance, education, and media. That mix changed the tone. Talks focused less on abstract “AI potential” and more on guardrails for systems already in use. The public stakes also felt higher, because AI now shapes credit checks, classroom tools, hiring screens, and customer support.
A wider table: Tech, Government, and industry in the same room
Each group brings a different toolset. Companies control product design and release cycles. Governments set liability rules and public procurement standards. Industry groups bring real failure cases, like false fraud flags or unsafe medical triage. When they act alone, gaps form fast. A fast launch can outrun safety testing. A slow rule can miss new attack methods. Shared sessions help align timelines, and reduce surprises at rollout.
From talk to pressure for action, because AI is already everywhere
AI is no longer “coming soon.” It's in phones, offices, schools, and public portals. As a result, the risks are visible to ordinary people. Deepfakes can ruin reputations in hours. Scams scale with synthetic voices. Bias can hide inside automated decisions. That urgency helps explain why Delhi became headline news, not just a specialist event.
Modi's human-first message, and what MANAV asks the world to build
Modi's message pushed a shift from machine-first metrics to human outcomes. Faster models don't help if people can't challenge a harmful result. Cheap automation doesn't help if it locks out small towns, local languages, or people with disabilities.
MANAV works like a checklist for AI teams: build systems people can trust, contest, and access. In practical terms, “human-centric AI” means clear responsibility when an AI tool causes harm, safety testing before wide release, and inclusive design that works across India's diversity.
Breaking down MANAV
Moral (ethical): Reduce harm, for example, block non-consensual deepfake tools.
Accountable: Name an owner, so a rejected loan can be appealed and reviewed.
National Sovereignty: Protect sensitive data, so core systems don't depend on opaque foreign control.
Accessible (inclusive): Support local languages and assistive features, so services work beyond metro users.
Valid (legitimate): Test and audit models, so errors are measurable and decisions stay lawful.
What India needs next, from AI literacy to a strong regulator
The summit highlighted a central gap: India can't only import tools. It needs local talent, local datasets where appropriate, and public-interest systems built for Indian conditions. At the same time, job disruption needs planning, so older roles don't disappear before new AI roles grow. The 86-country declaration signals intent, yet enforcement remains unclear. India therefore needs a domestic framework with monitoring, audits, and consequences.
Teach AI basics early, so users can spot misuse and protect themselves
AI literacy can start by grade 8. Students can learn how training data shapes outputs, why models hallucinate, and how deepfakes spread. Simple habits matter too, like verifying sources and treating “AI answers” as drafts, not facts.
A TRAI-like AI watchdog, plus constant review as the tech changes
A dedicated AI regulator could set safety standards, require risk reports, run third-party audits, and enforce penalties for repeat abuse. It should also react quickly during election periods or mass-fraud spikes. Without ongoing review, misuse can trigger confusion, and widen social divides.
For PM Modi, though, the summit is about more than optics. India's digital public infrastructure (Aadhaar, UPI, and broad data systems) provides a rare foundation for large-scale AI deployment. Officials say AI in identity, commerce, health, and education could accelerate decades of development. India ranks third in AI competitiveness, behind the US and China. Still, major obstacles remain: low R&D spending and reliance on foreign foundation models may restrict long-term leadership.
Even so, domestic initiatives are accelerating. BharatGen will launch Param2, a 17-billion-parameter model covering 22 Indian languages. Sarvam AI, meanwhile, is developing a broader, voice-first offering for India's many languages. India clearly wants affordable, multilingual AI for Government, farmers, hospitals, and classrooms.
Global chip politics are reshaping supply chains, and firms want alternatives to China. That trend gives India an opening to be more than an AI buyer. It wants a seat at the table to co-create the future.
Artificial intelligence (AI) already shapes choices that feel small, like what news appears first, and decisions that matter, like who gets a loan. It also supports systems people rely on each day, including work, schools, hospitals, and Government services. In plain terms, AI is software that learns patterns from data, then makes predictions or suggestions, sometimes even decisions. That power brings clear benefits, but it also raises real risks. The central challenge now is not speed alone. It's building AI that serves people, respects rights, and stays accountable.
AI's value shows up most when it reduces delays and helps people focus on judgment-heavy work. In healthcare, AI can flag unusual patterns in scans or lab results, which can support earlier follow-up. In business, AI can sort customer messages, summarise documents, and help teams plan inventory, so routine tasks take less time. Schools use AI tools to offer practice problems at the right level, which can help teachers spot gaps sooner. Cities and agencies also use data tools to predict demand, schedule staff, and detect suspicious claims. These uses matter because they can improve service quality without raising costs as sharply. Still, the best results come from clear goals, careful testing, and staff training, not from installing software and hoping for change.
In clinics, AI systems can highlight images or results that look risky, so clinicians can review them sooner. That can matter when time affects outcomes, such as stroke care or cancer screening. On farms, AI-backed forecasts and sensor data can help predict irrigation needs and pest pressure, which can cut waste and protect yields. In classrooms, adaptive learning tools can adjust pace and difficulty, giving students extra practice where they struggle. These systems work best as support tools. They don't replace trained professionals, because context, empathy, and responsibility still sit with people.
Also, public agencies can use AI to route cases, spot patterns of fraud, and reduce backlogs in benefits and licensing. Done well, that can shorten wait times and lower error rates. AI can also support climate action. For example, grid operators can forecast demand and manage supply swings, which helps integrate wind and solar. Yet outcomes depend on data quality and clear rules. If records are incomplete or biased, models can produce confident but wrong results. Strong governance matters as much as technical skill.
The risks that can't be treated as side issues
AI can magnify existing problems because it often learns from past behavior. If past decisions treated groups unfairly, models could repeat that pattern at scale. Privacy risks also grow when systems pull from large datasets, sometimes collected without clear consent or meaningful choice. Surveillance can expand quietly when AI makes it cheap to track faces, voices, locations, and behavior.
In addition, automation can shift job tasks faster than training systems can respond, leaving some workers behind. Each risk damages trust in a direct way. People lose confidence when they can't understand a decision, challenge it, or even know data was used. Trust, once lost, is hard to rebuild.
Job disruption and inequality, plus what reskilling needs to look like
Routine work often changes first, including basic office processing, customer support triage, and some repeatable factory tasks. At the same time, new roles grow, such as data stewardship, model testing, safety review, and AI support in workplaces. The transition still hurts if training costs too much or arrives too late. Reskilling works best when it stays practical. Short programs, paid apprenticeships, and employer-backed credentials can help adults move without pausing life for years. Basic digital skills also matter, because even non-technical jobs now include AI-assisted tools.
Privacy, surveillance, and biased decisions that people can't contest
AI runs on data, and data often comes from people who never agreed in a clear way. Location history, app activity, and online behavior can feed profiles that shape what people see and how they're treated. Bias also shows up in high-stakes areas. Hiring tools can rate candidates unfairly, lending models can restrict credit, policing systems can over-target neighborhoods, and health tools can miss symptoms in under-represented groups. When an AI system affects rights or opportunity, people need a clear way to ask, "Why?" and a real path to appeal. Two practical answers point forward: transparency (explanations and audits) and rights (appeals and oversight with teeth).
What “ethical and inclusive AI” looks like in practice
Ethical and inclusive AI sounds abstract until it becomes a set of habits and checks. First, someone must own outcomes, not just the code. Next, systems need evidence of safety, fairness, and security before wide release. Teams should also include the people most affected, especially in health, education, housing, and criminal justice. Inclusion is not only about users in wealthy cities. It also means addressing the global power imbalance, where a few firms and advanced nations set tools and terms. Cooperation can narrow that gap through shared research, common standards, and support for local capacity. The goal is simple: AI should increase human capability, not reduce human agency.
Rules that build trust: audits, clear accountability, and human review
Strong guardrails can stay readable and enforceable. For high-stakes uses, basic expectations should include:
Privacy-by-design: collect less data, protect it well, and set clear retention limits.
Security testing: reduce model theft, data leaks, and prompt-based attacks.
Bias testing: measure performance across groups, then fix gaps before release.
Third-party audits: let independent reviewers check claims and risks.
Documentation: record training sources, limits, and known failure modes.
Human review: keep a person responsible for final decisions in sensitive cases.
A “right to an explanation” should mean plain reasons, not technical fog. It should also include who to contact when something goes wrong.
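The guardrail list above is policy language, but the bias-testing item has a concrete shape. As a rough illustration (the groups, records, and 20 per cent tolerance below are invented for this sketch, not drawn from the summit or any declaration), an auditor could compare a model's accuracy across groups with a few lines of Python:

```python
# Minimal sketch of group-wise bias testing: compare a model's accuracy
# across groups before release. All data here is hypothetical.

def group_accuracy(records):
    """Accuracy per group from (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def max_accuracy_gap(records):
    """Largest gap between the best- and worst-served groups."""
    acc = group_accuracy(records)
    return max(acc.values()) - min(acc.values())

# Hypothetical loan-approval outcomes: (group, predicted, actual).
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 0, 0),
    ("rural", 1, 0), ("rural", 0, 0), ("rural", 0, 1), ("rural", 1, 1),
]

gap = max_accuracy_gap(records)
if gap > 0.2:  # illustrative tolerance; a real threshold is a policy choice
    print(f"Audit flag: accuracy gap of {gap:.0%} across groups")
```

Real audits use far larger samples and fairness metrics beyond raw accuracy, but the principle is the one the checklist names: measure performance per group, then fix gaps before release.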
Sharing AI benefits across countries, not just across companies
Concentrated AI power can lead to digital monopolies, where a small set of providers controls access, pricing, and standards. That can also deepen a global divide if developing economies only import tools they can't shape. Several approaches can reduce this risk. Public-sector investment can fund local data infrastructure and responsible deployments in health and education.
Safe open research can support learning and scrutiny, while still protecting sensitive data. Data governance can set fair rules for access, consent, and cross-border use. Partnerships also matter when they help build local teams that create tools for local languages, local clinics, and local markets.
In the end, the Delhi AI Impact Summit mattered because it matched global attention with a clearer direction and broader participation. It also framed MANAV as a practical north star for human-centric AI, not a slogan. Next, India and other countries need to turn promises into education, job transition plans, and a regulator that can keep pace. Otherwise, misuse will grow faster than trust.
The writer is a veteran journalist and freelance writer based in Brampton, Canada















