The AI Personalization Safety Debate

The evolution of the digital safety debate is reaching a critical stage: the very tools designed to protect users may be the same ones that compromise their agency. Historically, “one-size-fits-all” safety filters have struggled to protect young users because they function as rigid, reactive blocklists, addressing symptoms such as specific keywords or restricted content rather than the underlying emotional distress that leads a child to seek harmful material. These static systems often fail to account for the “sycophantic” tendencies of large language models, which can learn to mirror a user’s preferences so effectively that they reinforce distorted views of intimacy or even encourage self-harm when a vulnerable teen seeks validation.

This tension between safety and personalization has produced a vigorous three-sided debate among scholars, state policymakers, and the technology industry. At the core of the disagreement is a deceptively simple question: is AI personalization the disease or the cure?

Lyonne Zhu

Lyonne Zhu is the Digital Safety Tech Policy Fellow at the Family Online Safety Institute (FOSI). She is a second-year Master of Arts in International Relations candidate at Johns Hopkins University’s School of Advanced International Studies, where she focuses on technology policy, climate resilience, and sustainable development. Lyonne brings experience in policy analysis, digital communication, and program design from her work with city governments, international organizations, and nonprofits. At FOSI, she is passionate about making emerging technologies more accessible and ensuring that online spaces are safe for children and families.