What Three Weeks with 14 Teens Taught Me About Online Safety

Over three weeks this winter, I had the opportunity to work with a group of 14 teenagers, ages 15 to 18, in the DC community of Shaw. Through this unique partnership with the Kennedy Recreation Center, we covered three core topics: AI and chatbots, social media and mental health, and healthy online relationships. On paper, the concept was simple. In practice, the experience challenged many of my assumptions.

When I first designed the workshops, I assumed the biggest need would be simply opening the conversation. I expected that many teens had not spent much time thinking about how the tech they use every day shapes their experiences online and off. But as the workshops unfolded, something more interesting emerged. There was a clear tension between awareness and uncertainty. These young people were sharp. They were observant. They could articulate, in their own words, how scrolling made them feel worse on some days and connected on others. They just hadn’t had the opportunity, or the space, to talk about it out loud. That shifted things for me. My job wasn’t only to inform them about best practices or how the internet could be harmful. My job was to meet them where they already were, and go deeper.

When we began talking about generative AI in our first week, nearly every hand in the room went up when I asked if they had used a chatbot before. Some had used AI tools to help with homework and other school needs. Others said they turned to chatbots to help them process thoughts or ask questions they didn’t feel comfortable asking another person. But when the conversation turned to what happens to those interactions, many assumed they were completely private. Several students said they had shared their names, ages, locations, interests, and personal worries because it felt like talking to a diary of sorts. No one had explained that those conversations could be stored, reviewed, or used to train future models. It wasn’t that the students were careless with their information; they simply lacked an understanding of how these systems actually function.

During the social media and mental health session in workshop two, there was another moment that stuck with me. When I asked everyone to check their screen time from the previous day, two teens in the room had logged over 15 hours of social media usage. Not total screen time, just social media … in a single day. The students talked about how easy it is to lose track of time, how the apps keep you scrolling, and how the fear of missing something pulls them back to their phones again and again. At the same time, none of them framed social media as entirely negative. For many, it was still the easiest way to stay connected with friends who went to different schools or simply a source of entertainment after a long day. They weren’t naïve about the system they were participating in. What they were really grappling with was how to live within it, how to use these platforms without letting them take over more of their time and attention than they intended.

Some of the most powerful moments came in our third week during our conversations about online relationships and boundaries. Several students talked about witnessing or experiencing racially charged bullying in online gaming communities, spaces where harsh comments and anonymity made cruelty feel normal. Others spoke about moments when group chats crossed the line from joking into humiliation. One student shared that they had experienced emotional abuse from a partner online, describing how controlling behavior and constant monitoring had slowly become normalized in the relationship. It was a reminder that the challenges teens face online are often deeply personal, and that the line between digital interaction and real emotional harm is much thinner than adults sometimes assume.

Here’s what I keep coming back to: these teens were not lacking intelligence or awareness. They were lacking access. Access to information, to frameworks, to conversations that treated them as capable of handling complexity. That’s not their failure. That’s a gap in how we’ve built digital literacy education: who we’ve built it for, where we’ve put it, and whose lives we’ve centered when we design it.

Programs like this, direct, in-person, and community-rooted, don’t just close a knowledge gap. They make the conversation feel human. Online safety can feel like a distant, policy-level concern. Something for parents to worry about, or legislators to debate. When you’re sitting in a rec center in Shaw with 14 teenagers who are telling you about the messages they got in a gaming forum, or asking you with genuine vulnerability whether their AI conversations are safe, it stops being abstract. It becomes urgent and real and personal.

That’s the version of this work that matters most. And those three weeks reminded me why.

Kaylin Peete

Kaylin Peete is the Public Affairs Coordinator for the Family Online Safety Institute. Her role involves monitoring and analyzing trends in online safety legislation, researching and reporting on safety topics, and supporting FOSI events and programs. Prior to joining FOSI, she spent several years teaching in Washington, D.C. Public Schools through the Teach for America nonprofit organization, working to improve outcomes for marginalized students. Kaylin holds a B.A. in Foreign Affairs with a Minor in French from the University of Virginia as well as a Master’s in Education Policy from Johns Hopkins University.