AI agents can cause ‘10x the damage in 1x the time’ of humans, says Rubrik CTO

Walk into any large Indian bank or IT services firm today and chances are an AI agent is already at work. These are not chatbots but autonomous systems that send emails, manage databases, and make decisions without human intervention. The competitive pressure to deploy them is intense. But security experts warn that enterprises are handing significant power to systems they do not yet fully know how to control.
“These are probabilistic entities. You cannot exactly predict what they will do,” said Arvind Nithrakashyap, co-founder and CTO of US-listed cybersecurity company Rubrik, and an IIT Madras alumnus. “Agents are fast. They can cause 10x the damage in 1x the time of humans.”
A race without guardrails
Gartner estimates that 40% of enterprise applications worldwide will be integrated with AI agents by end of 2026, up from less than 5% last year. Yet the firm predicts that more than 40% of those projects will be abandoned by 2027 because organisations lack the risk controls to run them safely.
Neil Shah, VP for research and partner at Counterpoint Research, says the root cause is a tension inside boardrooms everywhere. “The gap is between CEOs and CIOs with the urgency to deploy an AI strategy, versus CSOs making sure systems are secure before handing over the keys of the kingdom to AI agents,” he said. “It can quickly become a Wild West.”
Nithrakashyap frames the challenge across three dimensions: visibility into what agents are accessing, the ability to enforce policy on non-deterministic systems, and a way to reverse damage when something goes wrong. “You may say an agent should never send emails to external customers,” he said. “But how do you prevent a hallucination from doing that?”

Much of that vulnerability, experts say, begins with something as basic as a password.
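The email scenario Nithrakashyap describes can be enforced outside the model itself: rather than trusting the agent not to hallucinate, a deterministic gate checks every outbound message. A minimal sketch, assuming a hypothetical internal domain and send function (none of these names come from Rubrik's products):

```python
# Hypothetical policy gate: refuse an agent's outbound email to any external
# recipient, no matter what the model decided to do.

INTERNAL_DOMAINS = {"example-bank.in"}  # assumed internal domain for illustration

def is_internal(address: str) -> bool:
    """Return True if the recipient belongs to an approved internal domain."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in INTERNAL_DOMAINS

def guarded_send(recipients: list[str], send_fn) -> bool:
    """Call send_fn only if every recipient is internal; otherwise block."""
    if recipients and all(is_internal(r) for r in recipients):
        send_fn(recipients)
        return True
    return False  # blocked: at least one external (or no) recipient
```

The point of the sketch is that the check is deterministic even though the agent is not: a hallucinated instruction to email an outsider fails the gate every time.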
The password problem
Most organisations give AI agents the same login credentials as the employee they work for rather than creating restricted accounts. A compromised agent therefore carries the full access rights of its human counterpart, reaching emails, financial records, and internal systems without restriction.
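The alternative to handing an agent its owner's full login is a restricted account that names exactly what the agent may do. A simplified sketch of such least-privilege scoping, with hypothetical scope names invented for illustration:

```python
# Hypothetical least-privilege token: the agent gets an explicit, narrow set
# of scopes instead of inheriting the employee's full access rights.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentToken:
    owner: str
    scopes: frozenset  # e.g. {"crm:read", "tickets:write"}

def authorize(token: AgentToken, action: str) -> bool:
    """Allow an action only if it was explicitly granted to the agent."""
    return action in token.scopes

# The agent can read the CRM but cannot touch payroll, even though its
# human owner can.
token = AgentToken(owner="analyst@corp", scopes=frozenset({"crm:read"}))
```

Under this model, a compromised agent exposes only the scopes it was granted, not everything its human counterpart can reach.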
Research by Rubrik’s threat intelligence unit found that approximately 80% of cyberattacks originate from credential compromise and that attackers typically spend up to six months inside a network before causing visible damage. An AI agent on stolen credentials could replicate that damage in minutes.
Yet even organisations that manage credentials carefully may be exposed through a vulnerability they are less likely to have considered: the technical plumbing that connects agents to their systems.
A vulnerable connector
Most AI agents connect to enterprise tools through the Model Context Protocol, or MCP, a universal standard allowing any agent to plug into any enterprise system. It was designed for ease of adoption, but its security foundations have not kept pace.
A survey of over 2,600 MCP deployments by Endor Labs found that 82% contained vulnerabilities allowing attackers to access restricted files and 67% carried risks enabling malicious code execution. More than 30 security flaws in MCP have been documented since January 2026 alone.

“MCP has not necessarily been developed with a full understanding of security and identity,” Nithrakashyap said. “Standards are not fully established yet.”
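One mitigation while the standard matures is to allow-list the tools an agent may invoke through its connector. The sketch below is generic Python, not the real MCP SDK; the tool names and registry are assumptions for illustration:

```python
# Generic sketch (not the real MCP SDK): wrap an agent's tool calls in an
# allow-list so a compromised connector cannot reach arbitrary files or
# execute arbitrary commands.

ALLOWED_TOOLS = {"search_docs", "read_ticket"}  # assumed tool names

class ToolCallBlocked(Exception):
    """Raised when an agent requests a tool outside its allow-list."""

def call_tool(name: str, args: dict, registry: dict):
    """Dispatch a tool call only if the tool is explicitly allow-listed."""
    if name not in ALLOWED_TOOLS:
        raise ToolCallBlocked(f"tool {name!r} is not permitted for this agent")
    return registry[name](**args)
```

Anything not on the list fails loudly, which also gives security teams a log of what the agent tried to do.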
Fixing the protocol, however, only addresses part of the problem. Some of the most serious risks come not from technical flaws but from deliberate manipulation.
When agents are turned against you
An agent can be hijacked through hidden instructions embedded in a document or email it reads. Unlike a conventional breach, a manipulated agent works from within, corrupting or deleting data in ways that are difficult to detect until the damage is done.

Shah believes the answer is not to slow AI adoption but to build governance fit for purpose. “The risk will only grow as agents begin evading oversight without the enterprise even realising it,” he said. “Governance needs to be dynamic and adaptive.”
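The hidden-instruction attack can be made concrete with a crude filter that scans documents for instruction-like text before an agent reads them. This is only an illustration of the attack surface, not a robust defence; the patterns are invented for the example and real prompt injection routinely evades such heuristics:

```python
# Crude heuristic sketch: flag documents containing instruction-like text
# before an agent ingests them. Illustrative only; not a reliable defence.

import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"delete .* (records|data)",
]

def looks_injected(text: str) -> bool:
    """Return True if the text matches a known injection-style pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

A benign quarterly report passes; a document telling the agent to ignore its instructions and delete records is flagged, which is exactly the kind of signal Shah's adaptive governance would need to act on.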
Nithrakashyap put it plainly. “You have to embrace AI. But how do you do that in a safe manner? There is no real option.”

For Indian enterprises under intense board pressure to modernise, finding that answer may be the most consequential technology decision of the decade.
The author is a freelance journalist; views are personal.