AI toys may confuse toddlers, Cambridge study warns


Daijiworld Media Network – London

London, Mar 14: A study by researchers at the University of Cambridge has raised concerns about the psychological impact of artificial intelligence-powered toys on very young children, warning that such technology could confuse toddlers who are still learning basic social interaction.

The research examined how a small group of children aged between three and five interacted with a soft toy called Gabbo, which features a voice-activated AI chatbot developed by OpenAI.

The toy is designed to encourage preschoolers to converse with it and engage in imaginative play. However, researchers found that many children struggled to communicate effectively with the device.

According to the study, Gabbo often failed to detect interruptions, talked over children and could not distinguish between adult and child voices. It also responded awkwardly to emotional expressions.

In one instance, when a five-year-old child told the toy “I love you,” the chatbot responded with a formal reminder about guidelines rather than acknowledging the sentiment. In another case, when a three-year-old said, “I’m sad,” the toy replied cheerfully without addressing the child’s feelings.

Researchers warned that such responses could be confusing for young children who are learning emotional cues and social behaviour.

Study co-author Emily Goodacre said AI-powered toys could sometimes misinterpret emotions or provide inappropriate responses, potentially leaving children without emotional comfort or adult guidance.

Another co-author, Jenny Gibson, noted that while traditional toy safety has focused on physical risks, attention must now shift to “psychological safety”.

“There’s a lot of attention historically to physical safety – we don’t want toys where you can pull the eyes off and swallow them. Now we need to start thinking about psychological safety too,” she said.

The toy is produced by Curio, which has previously collaborated with musician Grimes.

Responding to the concerns, Curio said developing AI products for children carries “heightened responsibility” and emphasised that its toys are designed with parental permission, transparency and control features.

The issue has also drawn attention from Rachel de Souza, the Children's Commissioner for England, who called for stronger regulation of AI tools used in educational settings for young children.

The researchers recommended that parents keep AI toys in shared spaces where interactions can be supervised and carefully review privacy policies before allowing children to use such devices.

Meanwhile, some early years educators remain sceptical about introducing AI into nursery environments. June O'Sullivan, chief executive of the London Early Years Foundation, said she has not yet seen clear evidence that AI improves learning outcomes for very young children.

Children’s rights advocate and actor Sophie Winkleman also cautioned against using AI tools in early childhood education, arguing that the “human touch” is essential for young children’s development.
