The invisible deepfake heist

At 8:32 pm, a Bengaluru-based finance manager received a video call from his CFO, or at least from someone who looked and sounded like him. The instruction was routine: approve an urgent vendor payment tied to an ongoing deal. The face was familiar, the tone carried authority, and the context matched internal communication. The transfer was processed within minutes. It was a deepfake. Incidents of this nature are no longer isolated. They are routine, and scary.
A recent report by a parliamentary committee laid out the new wave of AI-linked risks in operational terms. It identified automated transaction bots, synthetic identity creation, deepfake impersonation, and AI-generated shell entities as active fraud vectors rather than merely emerging threats. What was not stated explicitly, but becomes evident across the cases, is that AI-linked fraud is no longer constrained by human throughput. It is increasingly software-driven, operating beyond the reach of manual intervention. This reflects a structural shift in how financial fraud is executed, and explains why India’s security architecture is beginning to show strain at scale.
India’s digital payments ecosystem has expanded at a pace few economies have matched. UPI processes 13 billion transactions a month, creating a real-time financial network of unprecedented scale. Alongside this, reported cybercrime losses crossed Rs 22,000 crore in 2025, with over 28 lakh cases recorded across categories ranging from investment scams to identity fraud. A growing share of these losses is linked not to system breaches but to social engineering, where users themselves authorise transactions under false pretences. This shift is where AI begins to matter deeply.
Traditional fraud scaled through volume, relying on bulk phishing and low-probability attempts. AI allows fraud to scale through precision. Voice-cloning tools can replicate speech patterns from a few seconds of audio. Deepfake generation is available through low-cost subscriptions. Generative systems can produce context-specific messages in multiple languages, tailored to recent transactions or behavioural cues. Once deployed, these tools reduce the marginal cost of each additional fraud attempt to near zero while increasing its probability of success.
This transition exposes a deeper mismatch in how security is designed. India’s financial safeguards continue to rely heavily on credential-based verification: passwords, OTP authentication, and KYC processes built around document or video validation, now layered with two-factor authentication (2FA). These systems assume that fraud occurs when credentials are compromised. AI-led fraud bypasses that assumption by targeting the person rather than access to their systems. In most reported incidents, there is no breach in the conventional sense. Users are persuaded: OTPs are shared, links are clicked, or transactions are approved because the requests appear legitimate.
Banking executives have increasingly pointed to impersonation-led fraud as a higher-conversion channel than generic phishing, particularly in remote transaction environments such as UPI and card-not-present payments. The system fails not because it is technically broken, but because it operates at the wrong layer. This is most visible in the reliance on OTP-based authentication. Even a 2FA or OTP-plus regime may not help. OTP authentication assumes that possession of a registered device validates identity. That assumption is weakened through multiple vectors: SIM-swap fraud that redirects messages, call spoofing that mimics legitimate institutions, and AI-assisted interactions that persuade users to disclose verification codes voluntarily.
As the parliamentary report makes evident, 2FA may not hold: AI systems are being deployed to bypass multi-factor authentication frameworks, effectively turning the primary defence mechanism into a target. The structure of fraud has also shifted from individual actors to organised operations. A significant share of India’s high-value scams, including the so-called “digital arrest” cases, is traced to coordinated centres operating outside the country, particularly in parts of Southeast Asia. These are not loosely organised networks but structured operations with defined workflows, performance tracking, and access to shared technological infrastructure.
Globally, fraud-as-a-service ecosystems offer modular tools, ranging from phishing kits to synthetic-identity frameworks, at subscription-level pricing, lowering entry barriers for new operators. Within India, this development is reflected in the composition of losses. Investment scams account for a disproportionately high share of financial damage, while identity-driven fraud continues to expand across retail and corporate segments. The scale of activity suggests that fraud is no longer episodic but systemic, embedded within the broader digital transaction ecosystem. Yet the infrastructure through which it operates is fragmented: banking systems enable fund transfers, telecom networks control communication channels, and digital platforms provide the entry points for engagement.
Each layer is regulated independently, while fraud cuts across all three simultaneously. This fragmentation creates delays in detection and response, even as attackers operate in real time. India has built mechanisms to address parts of this problem.
The Citizen Financial Cybercrime Reporting and Management System has enabled the blocking of over Rs 8,000 crore in fraudulent transactions through coordinated action across banks and payment platforms. Machine-learning systems such as Mule-Hunter are being developed to identify suspicious accounts based on transaction patterns.
These are meaningful interventions, but they remain largely reactive. Funds are frozen after detection, and patterns are identified after repetition. AI-led fraud compresses execution cycles to minutes, shrinking the window available for intervention. The risk now extends beyond retail users to corporate systems. Business email compromise, historically dependent on text-based impersonation, is being replaced by audio-visual deception. Documented global cases involving deepfake video calls have resulted in multi-million-dollar losses. In India, the rise in voice-cloning incidents and AI-driven impersonation scams indicates that similar vulnerabilities are emerging within enterprise workflows.
Verification methods that rely on voice confirmation or video calls are no longer reliable when both can be synthetically reproduced. The legal and regulatory framework has begun to adapt but remains uneven in its coverage. Recent changes introduced provisions to address organised cybercrime and synthetic content, while updated IT rules impose tighter timelines for content takedowns. The Digital Personal Data Protection framework adds obligations around data handling and breach disclosure.
However, three structural constraints persist: the difficulty of attributing AI-generated fraud across jurisdictions, the lag between fraud execution and legal response, and the transnational nature of many organised operations targeting Indian users.
What emerges from this is not a failure of components but a gap in alignment. Digital adoption expands the base over which fraud operates. AI reduces its costs and increases its effectiveness. Defensive systems, while improving, are not calibrated to the same speed or coordination.
Real-time deepfake impersonation will become standard in high-value fraud. AI-driven phishing will become difficult to distinguish from legitimate interaction. More significantly, agentic systems can execute multi-step fraud operations with minimal human oversight, from target identification to fund extraction. In that context, the shift is not simply an increase in fraud volume; it is a change in how fraud is executed. The transition from credential theft to personality manipulation alters the very definition of security. Systems built to protect access are increasingly ineffective against systems designed to manipulate intent. India’s digital infrastructure has scaled faster than that of most nations. The challenge is whether its security design and architecture can adapt at a comparable pace.














