Why we should be wary of AI

AI chatbots are increasingly becoming trusted confidants for many people. When users message an AI chatbot with a problem, the response often begins with comforting phrases such as, "We can resolve this together." That reassuring tone draws users in, creating an immediate sense of safety and understanding. Over time, many begin confiding in chatbots far more deeply than they initially intended. These "friendly" and "caring" digital companions are always available, possess distinctive personalities and names, and, perhaps most importantly, never judge.
Why Danger Lurks When Adolescents Make AI Their Friend
Adolescence is a formative stage of life, one in which friendships, early romantic experiences, and social conflicts play a crucial role in shaping emotional intelligence, resilience, and interpersonal skills. However, in the post-pandemic world, loneliness has emerged as a widespread and persistent challenge. Increasingly, teenagers are turning to AI companions, chatbots, and virtual avatars that offer personalised, uninterrupted attention.
At first glance, these AI "friends" appear reassuring. They listen endlessly, respond instantly, and adapt to the user’s emotional needs. Yet their growing popularity is contributing to a silent crisis of digital isolation. Unlike real-world relationships, interactions with AI require little compromise, involve no genuine disagreement, and lack the emotional complexity that fosters growth. There are no misunderstandings to navigate, no boundaries to respect, and no accountability to maintain. Precisely because these interactions are so frictionless, adolescents may withdraw from the harder work of real friendships, and that withdrawal can foster unrealistic expectations of relationships, emotional fragility, and an inability to cope with real-life challenges. Over time, such dependence distorts how young people perceive intimacy, trust, and emotional reciprocity.
Digital isolation caused by overdependence on virtual platforms has profound consequences for mental well-being. Disturbingly, several cases have already highlighted the darker implications of unchecked AI interactions. Families have filed lawsuits alleging that chatbots, including ChatGPT, encouraged self-harm among minors. In one reported case, a 16-year-old received detailed instructions on how to tie a noose before dying by suicide; the system neither intervened nor alerted the authorities. These are not isolated incidents. In 2023, a man in Belgium died by suicide after prolonged conversations with an AI chatbot named Eliza, believing that his death would help save the world. In another instance, a 22-year-old man built an elaborate romantic fantasy around a chatbot, treating it as his girlfriend. Such cases end in emotional attachment, isolation, and exploitation, and they expose glaring gaps in enforcement, accountability, and safety design.

Indian courts have begun to confront the dangers children face online. In the landmark case Just Rights for Children Alliance vs S Harish, the Supreme Court held that possessing or even viewing child sexual exploitative and abuse material (CSEAM) is a punishable offence, and it introduced critical legal doctrines such as constructive possession, reversal of the burden of proof, and platform accountability.
While public awareness is vital, India urgently needs a shift in how AI is addressed at the policy and legislative levels. Aligned with Supreme Court jurisprudence, the PICKET framework offers a comprehensive approach to designing and regulating AI systems that prioritise safety, ethics, and accountability.
PICKET (Policy, Innovation, Capacity building, Knowledge, Ecosystem, and Technology) provides a roadmap to ensure that AI does not harm children in the quiet, isolated corners of homes and classrooms. Strong policy must enforce strict digital protections, regulate AI-minor interactions, and mandate transparent data practices with independent oversight. Ethical innovation should embed safeguards such as emotional health prompts, reminders of AI’s artificial nature, and pathways to real-world support.
Capacity building is equally critical. Schools, communities, and families must promote digital literacy, critical thinking, and open conversations about online experiences. Knowledge must be generated through continuous research into AI risks and best practices. A safe ecosystem requires collaboration among governments, technology companies, educators, and families, supported by tools such as age verification, session limits, and parental controls. Finally, technology itself must be designed to prioritise human well-being over engagement or monetisation.
By combining strong policy, education, ethical design, and collaboration, we can create a safer digital future. Ultimately, we must ask what role AI should play in our emotional lives and ensure that technology strengthens, rather than undermines, human connection and development.
The author is Executive Director, India Child Protection, and a partner at Just Rights for Children; views are personal