Since India does not have a robust fact-checking infrastructure, deepfakes could be a real menace, affecting everything from politics to individual privacy
A new word, 'deepfake', entered the lexicon in 2018, when a fake video attributed to former President Barack Obama surfaced. It was frighteningly real, unlike the usual morphing that one was accustomed to and yet had many still falling prey to, despite barbs like 'WhatsApp University'. What the deepfake suddenly promised was indiscernible, seamless clarity and 'reality' (only it wasn't), a dangerous addition to the make-believe world of tech-enabled tools that can sway opinions.
This synthetic medium has metastasised, using a combination of Artificial Intelligence (AI) and Machine Learning (ML) to create compellingly realistic outputs far removed from the truth. In a world already reeling under misinformation, falsehoods, and 'manufactured truths', deepfakes carry unprecedented risks. That deepfakes can be created with readily available technological wherewithal adds to the threat dimension. Already, the brazen misuse of unverified and false content for political/partisan manipulation in India, with its diverse faultlines, imagined wounds, and subliminal emotions, makes the country ripe and vulnerable for more such (dis)information and sophistry.
Last week, the deepfake debuted on the Indian scene with disturbing portents, when a fake video of an actor, believed to have been created using the technology, went viral. She rightfully lamented, "Something like this is honestly extremely scary, not only for me but also for each one of us who is vulnerable to much harm because of how technology is being misused". Such 'identity theft' could happen to anyone, potentially destroying reputations and even exposing victims to grave physical harm at the hands of someone who, believing a deepfake output to be true, reacts to it. When conflated with the phenomenon of 'vigilante justice' or even lynching, the possibilities for misuse are scary.
Celebrities from the Indian film world have been the first to react to this deepfake, knowing that they are the automatic targets for salacious fakes and that it is only a matter of time before another of them falls into the malicious dragnet. Already, the (un)social media is full of lewd, crude, and licentious aspersions cast on these stars that go unchecked; for them to imagine that field flooded with optically and audibly 'real' deepfakes is frightening in its temerity.
India does not have a robust infrastructure, or even a culture, of basic fact-checking, and some of the most responsible people in positions of power have been guilty of peddling falsehoods and getting away with it. Therefore, the risks spiral manifold as the technology evolves from 'cheap fakes' to 'deepfakes'. In these deeply distrustful times, incidents of deepfakes going beyond reputational smearing into the security-societal realm have already happened, and their consequences can only be imagined. Deepfakes showing Ukrainian President Volodymyr Zelensky asking his countrypeople to surrender to Russia, or of Russian President Vladimir Putin claiming he was initiating martial law as Ukrainians invaded Russia, have done the rounds with pernicious effect.
With our own National Elections due in 2024 and the reliance on social media to build competing partisan 'narratives', the chances of deepfake outputs proliferating cannot be ruled out. Those with greater control over media, governance, or enforcement mechanisms would stand to benefit, as they could selectively call out (or ignore) deepfakes whenever doing so suited, or militated against, their narratives and interests. The inherent diversity of our populace also makes us immensely susceptible to alternate belief systems, aspirations, and intents that could be invoked and inflamed through deepfakes.
Regrettably, while the technology to create deepfakes is not exactly new, the technology to detect and prohibit them has not been very effective anywhere in the world. Given the uniquely Indian phenomenon of reimagining history, and thus politics, deepfakes could have historical figures saying and doing things they never said or did; history and historical figures could suffer from deepfake output that credibly suggests they did say or do so. Deepfakes of Adolf Hitler saying unsaid things already exist, positing frightening possibilities.
With negligible defence against misuse, save for questioning the social media platforms, the impact of the Information Technology Act is yet to be seen. It states: "Whoever, using any communication device or computer resource cheats by personating, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to one lakh rupees". There is some general caution as of now, e.g., Meta's manipulated-media policy warns against "videos that have been edited or synthesized…in ways that are not apparent to an average person and would likely mislead an average person to believe they are authentic", but it is not deterrent enough.
It is only when everyone realises that a deepfake video with the exact visuals and voice of themselves or their loved ones could easily be made that the scale of the risk will hit home. Deepfakes could make the phishing attacks, impersonations, and online siphoning that one is barely protected against seem pedestrian. The situation with this rogue technology harks back to the dialogue in Jurassic Park, where Jeff Goldblum's character ponders how scientists were so engrossed in the question of whether they could that they forgot to question whether they should. The deepfake faces the same conundrum.
(The writer, a military veteran, is a former Lt Governor of Andaman & Nicobar Islands and Puducherry. The views expressed are personal)