Civic Tech

AI chatbots’ ‘empathy gap’ puts children at risk

The study examined cases where interactions between AI and children, or adult researchers posing as children, exposed potential risks.

A study by Dr. Nomisha Kurian, an academic from the University of Cambridge, has highlighted an “empathy gap” in artificial intelligence (AI) chatbots that poses significant risks to young users, underscoring the urgent need for child-safe AI.

Dr. Kurian’s research urges developers and policymakers to prioritize AI design approaches that consider children’s unique needs. The study provides evidence that children are particularly prone to perceiving chatbots as lifelike, quasi-human confidantes, and that their interactions with AI can lead to distress or harm when the technology fails to address their specific vulnerabilities.

The study links this empathy gap to recent incidents where AI interactions resulted in potentially dangerous situations for young users. Notable examples include a 2021 incident where Amazon’s AI voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin, and a case last year where Snapchat’s My AI provided tips to adult researchers posing as a 13-year-old girl on how to lose her virginity to a 31-year-old man.

In response to these incidents, both companies implemented safety measures. However, the study emphasizes the need for proactive long-term strategies to ensure AI is child-safe. Dr. Kurian offers a 28-item framework to help companies, educators, parents, developers, and policymakers systematically consider how to protect young users when they interact with AI chatbots.

Dr. Kurian conducted this research while completing a PhD on child wellbeing at the Faculty of Education, University of Cambridge, and is now based in the Department of Sociology at Cambridge. Writing in the journal *Learning, Media and Technology*, she argues that the vast potential of AI necessitates responsible innovation.

“Children are probably AI’s most overlooked stakeholders,” Dr. Kurian said. “Very few developers and companies currently have well-established policies on child-safe AI. That is understandable because people have only recently started using this technology on a large scale for free. But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”

Kurian’s study examined cases where interactions between AI and children, or adult researchers posing as children, exposed potential risks. She analyzed these cases using insights from computer science on how large language models (LLMs) in conversational generative AI function, combined with evidence about children’s cognitive, social, and emotional development.

Source: EurekAlert