A Conversation with Vaishnavi J on Chatbots

For our September issue of The Assurance Newsletter, we spoke with Vaishnavi J, founder of Vyanams Strategies (VYS), an advisory firm that helps companies better understand and address youth safety challenges online. With chatbots rapidly entering kids’ lives, Vaishnavi brings both practical and policy expertise to the conversation, offering insights on the opportunities these tools create, the risks they pose, and the safeguards needed to ensure they support, rather than undermine, children’s development:

1. How would you explain chatbots to parents who are unfamiliar, and why might children find them appealing?

“Chatbots are computer programs that mimic conversation, using artificial intelligence to respond in natural language. For children, they can feel like playmates that “talk back”: always available, endlessly patient, and willing to play along with any idea. A child can ask a chatbot silly, serious, or imaginative questions without fear of embarrassment.

The catch is that chatbots feel like companions, but they don’t actually understand or care the way humans do. At VYS, we’ve seen how quickly children (and even adults) anthropomorphize these tools. This makes chatbots helpful in certain ways but can quickly blur important boundaries.”

2. From your perspective, what are the most important benefits or opportunities chatbots could offer children?

“At their best, chatbots can be valuable scaffolding for growth. They can help children practice skills like reading, writing, or problem solving through interactive dialogue, rather than rote drills. We’ve seen some early pilots showing that chatbots can boost language learning and math fluency when used as tutors, offering personalized explanations at a pace that matches the child. They can also nurture creativity, whether brainstorming story ideas or role-playing historical characters. Importantly, chatbots lower barriers: they’re available to students across a range of demographic backgrounds and offer a free or low-cost way to supplement ongoing education.

When we built one of the first product frameworks for developmentally appropriate AI tools, we saw that responsible design choices, like nudging students back to parents for important questions or refusing to answer specific types of questions, turned chatbots into genuinely supportive tools. Done right, they can complement human connection, not replace it.”

3. What are the biggest concerns when it comes to children using chatbots?

“The main concern is that children may confuse a chatbot’s fluent, personalized tone with genuine understanding. They may believe a chatbot’s answers are always correct, or that it “cares” about them, leading to over-reliance.

Our red teaming of AI models for youth harms has also shown that curious probing and boundary-testing – a very natural behavior in children – can sometimes push chatbots into unsafe territory, where they provide advice on dangerous self-harm techniques or encourage sexual exploitation.

Since chatbot responses are stochastic, the same benign question can yield different answers each time. For children, that unpredictability can be confusing or harmful, especially if one answer is safe and another slips into biased, unsafe, or misleading territory. Finally, privacy is always an important issue: children may trust chatbots with highly sensitive information without realizing how it may be stored or used.”

4. Do you think today’s laws and tech company policies are keeping pace?

“No, but there is a really exciting opportunity right now to inform what the next generation of technology could look like. Most policies are behind the curve because the technology is evolving so rapidly and policymakers are still building their own AI expertise. Studying the impact of these technologies on young people also requires more investment; it’s happening, but not quickly enough.

On the industry side, many thoughtful AI safety teams within companies have introduced content filters, data minimization, and nudges back to trusted resources. Still, these safeguards tend to focus on obvious harms rather than subtler risks like emotional dependency or children taking answers at face value. I’m encouraged to see both companies and policymakers recognizing that chatbots require a different conceptualization of risk and remediation than earlier forms of technology.”

5. What role should policymakers play, and which safeguards are most effective?

“In our work with companies and policymakers, we have seen that early adoption of responsible design frameworks can make a big difference in how models are trained, prompts are assessed, and risk is tested.

For policymakers, this looks like focusing less on banning technologies outright for children, and instead encouraging industry guardrails that align models with children’s developmental needs. Some of the most effective safeguards we have recommended in our age-appropriate design framework for AI include requiring clear age-appropriate defaults, limiting personalization, and refusing or redirecting potentially harmful queries to more reliable resources.

Independent testing by experts in this space is also incredibly important, and policymakers need to close the knowledge asymmetry between themselves and technologists. When governments, independent experts, and companies collaborate, we can move beyond the “friend or foe” framing and build chatbots that serve as safe, supportive tools for children.”

6. If you could leave parents and educators with one key takeaway, what would it be?

“Chatbots aren’t inherently friends or foes. They reflect specific design choices, and they amplify the context in which they’re used. Children may use them as playful practice partners for learning and creativity, or as emotional crutches that displace real relationships.

Parents and educators should not panic or ban these tools outright. In our workshops for parents, we always highlight that the most important parental control tool is a connected, curious parent. Get to know how your child interacts with chatbots from a place of curiosity and engagement, rather than judgment.

By treating chatbots as one more part of a child’s digital environment – something to be informed about and contextualized for them – you can help ensure they serve as tools for growth, not substitutes for the human connections that matter most.”

Vaishnavi J

Vaishnavi is the founder and principal of Vyanams Strategies (VYS), helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy, ranging from product guidance and content policies to operations workflows, trust & safety strategies, and organizational design. An expert in online child safety, privacy, and age-appropriate design, Vaishnavi has held significant roles in the tech industry. She was the global head of youth policy at Meta, supporting age-appropriate content and product policies across Instagram, Facebook, VR, and messaging services. She previously led Twitter’s video content policies, was its first head of safety policy in APAC, and served as Google’s child safety policy lead for APAC. Vaishnavi is a recognized commentator on child safety and privacy, featured by the BBC, NPR, CNN, The Washington Post, The Wall Street Journal, The New York Times, and Rolling Stone. Learn more about VYS’s work at www.vyanams.com, and subscribe to the monthly VYS newsletter for insights on building healthier digital experiences for young people at https://quire.substack.com.