Innovation

Researchers work on AI for online child safety

Much of the work in online child abuse prevention focuses on detecting perpetrators.

A teenager sends explicit pictures requested by a new online love interest, only to be blackmailed by someone in another country. A tween arranges to meet the teen girl who bought her jewelry online, only to be greeted by a stranger. These incidents highlight the growing range of online crimes children and teens face, from sextortion to cybergrooming.

Three Virginia Tech researchers are working to make the digital world safer for children and teens. With support from a National Science Foundation (NSF) grant, Jin-Hee Cho, Lifu Huang, and Sang Won Lee from the Virginia Tech Department of Computer Science aim to develop a technology-assisted education program to prevent online sexual abuse.

The World Health Organization reports that one in three internet users worldwide is a child, and grooming children for sexual abuse is a growing online threat. Virginia Tech researchers are building and training chatbots powered by artificial intelligence (AI) to help children and teens identify and avoid cyber predators.

Traditionally, cybersecurity has focused on protecting networks from disruption and devices from data theft, explains Jin-Hee Cho, associate professor of computer science and lead researcher on the project. “But today, people are becoming more interested in what is called social cybersecurity,” Cho says, emphasizing the need to understand how technology and people interact.

While technology can disseminate information, enable education, and fuel economic development, it also carries risks, especially for vulnerable groups like children and teens. A 2021 national survey published in JAMA Network Open found that 15.6 percent of young adults reported being victims of online child sexual abuse, with image-based abuse and non-consensual sexting also prevalent.

Efforts to make online interactions safer have grown; in 2005, for example, the U.S. Senate designated June as National Internet Safety Month. However, much of the work in online child abuse prevention focuses on detecting perpetrators, leaving children and teens themselves still vulnerable to abuse.

To address this, Cho and her team are developing chatbots that simulate interactions between predators and children. These simulations will be used to create an educational program for 11- to 15-year-olds, helping them recognize and avoid cyber predators.

Assistant professor of computer science Lifu Huang will focus on the AI aspects of the chatbots, leveraging his research in conversational AI to make them believable. Sang Won Lee will compile data to train the bots, tackling the challenge of responsibly sourcing data on predator interactions. Pamela Wisniewski from Vanderbilt University will develop training materials and ensure the chatbots are safe for minors to use, aiming to teach teens about cybergrooming risks and coping mechanisms without restricting their internet use.

  • Eurekalert