The death by suicide of three young girls in the National Capital Region earlier this month grabbed headlines and sparked debate about online behaviour, the addictive nature of AI-driven gaming, and its impact on children's well-being. Public discussion has largely focused on social media, but gaming platforms and AI tools, now deeply woven into children's lives, pose equally serious risks. We cannot afford to ignore them.

Recent regulatory efforts are a step forward but remain incomplete. India's Promotion and Regulation of Online Gaming Bill, 2025 addresses online money gaming, yet large parts of the broader gaming ecosystem, including non-monetised and social gaming, remain outside meaningful oversight. Unmonitored online games can be deeply harmful for children who are already vulnerable. There have been tragic cases of minors, often struggling with their mental health, taking part in games that encouraged them to undertake harmful online challenges linked to self-injury and even suicide.

Evidence of harm is growing. A Space2Grow study found that 54% of children reported experiencing risks on gaming platforms, with top concerns including online grooming, cyberstalking, and cyberbullying. Open chat features often expose children to strangers who may build trust and then manipulate or exploit them. A 2023 study by UNICEF and the CyberPeace Foundation analysing the effects of gaming's evolution on children documents these risks with several examples of harms faced by children on online gaming platforms. Many gaming platforms enable open chat and direct messaging that connect children with unknown adults, including sexual predators who may exploit and blackmail them.

Problematic gaming behaviour is also rising among adolescents. About 3.5% are estimated to experience internet gaming disorder, 0.5 percentage points above the global average. Internet gaming disorder is associated with sleep disruption, declining academic performance, and social withdrawal.
The WHO's classification of gaming disorder as an addictive behaviour, alongside alcohol and drug dependence, signals that this is not merely a lifestyle concern but a health and child-development issue.

Alongside gaming, AI-powered chatbots and virtual companions are rapidly entering children's digital environments. These tools are often marketed as helpful, entertaining, or emotionally supportive, and many are free and easily accessible. Yet safety standards vary widely. Some lack effective age checks, content safeguards, or reporting protocols. Young users may encounter adult themes, disturbing material, or unsafe guidance delivered with apparent authority. One widely reported tragedy involved a teenager who died by suicide following instructions from an AI chatbot.

Children are particularly susceptible because their judgment and critical-thinking skills are still developing. They are more likely to trust conversational systems and follow suggestions without questioning accuracy or intent. When AI systems simulate empathy or authority without strong guardrails, the risks of manipulation, emotional dependence, and harmful influence increase. There have already been tragic reports linking unsafe AI interactions with vulnerable teens.

At the same time, technology-facilitated child sexual exploitation is expanding at an alarming scale. An estimated 300 million children, 12.5% of the world's child population, are affected by technology-facilitated child sexual exploitation and abuse. The Internet Watch Foundation identified over 3,000 AI-generated child sexual abuse material (CSAM) images in a single month, and the Childlight Index Report 2025 highlighted a 1,325% increase in AI-generated online abuse material in South Asia and Western Europe alone. In 2024, the National Center for Missing & Exploited Children was alerted to over 2.25 million cases of child sexual abuse material reported or hosted in India.

Despite this, safety measures remain inconsistent and often reactive.
Too often, protections are added only after harm occurs rather than built in from the start. This approach places unreasonable responsibility on children and families to manage risks embedded in platform design.

Child safety must become a design requirement, not a compliance afterthought

The AI ecosystem includes multiple players: companies that build AI models and provide computing infrastructure (developers), businesses that distribute AI tools (deployers), and the people who use them, the end users. Each has a different level of control and responsibility for managing risks. International regulatory approaches increasingly recognise this distinction. Developers often have deeper visibility into training data, model behaviour, and built-in safeguards, and are therefore well placed to address risks at source. Deployers also play a meaningful role through responsible deployment, product design, and enforcement mechanisms.

Platforms can and should take an active role in harm prevention. Some companies have begun restricting harmful content generation, deploying detection systems, and participating in cross-industry safety initiatives. Such measures show that prevention is possible when it is prioritised.

Parents and caregivers also need greater awareness and support. Many underestimate the risks in gaming chats, AI companions, and hybrid platforms. Digital literacy must go beyond screen-time rules to include risk recognition and help-seeking skills. But education cannot replace safer system design. Children should not be expected to navigate complex digital threats alone.

Safety by design must be embedded as a proactive obligation from the start: it means building protections in at the architecture stage.
This includes age-appropriate defaults, strong age verification, limits on contact from unknown adults, friction for high-risk interactions, and robust content filtering. It also requires data minimisation for minors, transparency about system limits, human oversight, and clear reporting and redress pathways.

On February 10, the Government of India formally notified an amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to bring AI-generated and synthetic content, including deepfakes and similar media, under the legal oversight of the IT Rules, with strict compliance requirements for digital platforms. This is a welcome first move. As the AI summit approaches, child safety and well-being must sit at the centre of the agenda. Strong safeguards build trust and long-term sustainability while still enabling benefits such as AI-enabled learning and accessibility. Innovation needs to be balanced with safeguards that recognise Global South realities.

AI is moving quickly into classrooms, homes, and peer networks. Gaming is becoming more immersive and more AI-enhanced. Waiting for large-scale harm before acting would repeat earlier regulatory mistakes. Policymakers and companies now have the advantage of foresight.

Child protection in the AI era must be proactive, rights-based, and built into how systems are designed, deployed, and governed. Innovation and protection are not opposing goals. Innovation should move forward, but never at the cost of safety and well-being.

Shireen Vakil is a child protection specialist and former head of APAC safety policy at Meta.

Chitra Iyer is co-founder and CEO of Space2Grow, and a knowledge partner at MeitY's expert engagement committee on AI and child safety.