Stephen Balkam Welcome Remarks: FOSI 2025 Annual Conference

Good morning, and welcome.

People often say, “May you live in interesting times.” But “interesting” doesn’t begin to describe what we are living through today. That’s why this year’s conference theme—Online Safety in Tumultuous Times—feels far closer to our present reality. We are living through a pivotal moment in history, when political, cultural, and technological forces are shifting faster than our institutions can adapt, and with no clear resolution in sight.

I don’t need to review everything happening here in the United States. At times, it feels as if we are witnessing a slow-motion dismantling of long-cherished institutions, norms, and processes. It is heartbreaking to see how far we have fallen in the eyes of the world as we retreat behind fortified borders and turn away from international agreements and multilateral engagement.

And I want to acknowledge that some of you traveling from overseas faced uncertainty about whether you would even be allowed into the country. We are profoundly grateful that you chose to make the journey. Thank you, and welcome.

As if political upheaval weren’t enough, the technological developments of the past year have been nothing short of breathtaking. AI has been infused into nearly every device, app, website, and platform. ChatGPT and other generative AI tools have become embedded in everyday life in ways even their creators did not fully anticipate—most visibly in our schools and universities.

AI offers extraordinary benefits to young people, but also raises profound concerns. When ChatGPT becomes the place students turn to for answers—and sometimes for the answers—it threatens the development of the very thing education is meant to cultivate: critical thinking.

And then there are the advances in AI video. Tools like Sora 2 deliver astonishing creativity, entertainment, and accessibility. 

But they also usher in a new age of effortless deepfakes and disinformation. Yet again, we find ourselves scrambling to retrofit safety measures into products that should have been built with safety by design.

We’ve seen this movie before.

In 1995, as the World Wide Web exploded into global consciousness, many of us scrambled to build rating systems, filters, and basic educational resources to help hapless parents block adult content online from their children’s view.

In 2005, the arrival of Web 2.0—social media and smartphones—brought new challenges: cyberbullying, sexting, overuse, and oversharing. Once again, these services were rushed to market, and we’re still managing the consequences of having moved too fast.

And now, in 2025, the year of AI, we face two new sets of challenges: one mental and one emotional, each intertwined with the extraordinary benefits that generative AI brings to our children.

On one hand, AI promises what educators have dreamed about for decades: truly personalized learning. Students can move at their own pace, guided by always-available AI tutors that coach, encourage, and support them.

On the other hand, when AI becomes a shortcut—not a scaffold—young people risk missing out on the difficult, formative work of learning to learn. Overcoming challenges, grappling with complexity, and developing independent thought are irreplaceable experiences. AI threatens, unintentionally, to erode the philosophical foundation of education itself.

The second challenge is deeply human. AI companions and chatbots can provide comfort, positivity, and a sense of connection—especially for lonely or vulnerable kids. But the danger arises when these relationships replace real human connection, with all its messiness, unpredictability, and, yes, friction. Dependency on bots can lead to emotional entanglement, and in some tragic cases, to self-harm and real-world loss.

Once more, regulators scramble to keep up. App developers are being pressed to build effective guardrails. And parents—always parents—are left to figure out how to allow their children to access these powerful new tools while navigating risks the world has never seen before.

In a moment, we will release FOSI’s latest research on young people’s attitudes, behaviors, and use of AI. We hope this report will help inform the growing debate about how best to use AI, and about the guardrails and policies needed, particularly for younger users.

Later today, we’ll debut a new animated “Age Assurance 101” video, designed to demystify a topic that is too often confusing, abstract, or mischaracterized. We hope it sparks more grounded, informed conversation. And we will feature another exciting announcement just before the age assurance panel.

And of course, as with every FOSI event, we’ve built the day around opportunities to reconnect with old friends, meet new partners, ask questions, offer suggestions, and explore the many exhibits and demos in the hallway.

These times may be unsettling. But today, we have created a welcoming space—a space to explore new ideas, to challenge and be challenged, and, most importantly, to collaborate and innovate toward solutions that make the online world safer for children and families everywhere.

Thank you for being here. And please—enjoy the day.

Stephen Balkam

For the past 30 years, Stephen Balkam has held a wide range of leadership roles in the nonprofit sector in both the US and the UK. He is currently the Founder and CEO of the Family Online Safety Institute (FOSI), an international, nonprofit organization headquartered in Washington, DC. FOSI's mission is to make the online world safer for kids and their families.