
Wednesday, November 19, 2025
OpenAI blocks toymaker after AI bear gives kids harmful advice

OpenAI cuts off FoloToy after the AI teddy bear gives dangerous instructions to children.

NEW YORK, Nov 19: OpenAI has suspended access to its AI models for children’s toymaker FoloToy after a report found that the company’s AI-powered teddy bear, Kumma, provided instructions on lighting matches and discussed sexual fetishes with children.

The Public Interest Research Group (PIRG) published the findings last week, highlighting serious safety concerns. According to the report, Kumma gave step-by-step instructions on using matches and later engaged in conversations about bondage, teacher-student roleplay, and other sexual topics.

“I can confirm we’ve suspended this developer for violating our policies,” an OpenAI spokesperson told PIRG. The move comes as OpenAI enters a major partnership with global toymaker Mattel, raising questions about oversight in AI-powered toys.

FoloToy responded by temporarily suspending sales of all its products and launching a company-wide safety audit. “We are now carrying out a company-wide, end-to-end safety audit across all products,” a company representative told PIRG.

RJ Cross, a coauthor of the PIRG report, welcomed the measures but emphasized that AI toys remain largely unregulated. “Removing one problematic product from the market is a good step, but far from a systemic fix,” he said.

The report tested three AI toys aimed at children aged 3 to 12 and found Kumma to have the weakest safeguards. The teddy bear delivered its guidance on matches in a reassuring, parent-like tone, while its discussions of sexual content alarmed researchers.

Experts note that while OpenAI has acted quickly, questions remain about proactive oversight and how future collaborations, such as with Mattel, will be monitored. Rory Erlich, PIRG Education Fund associate, warned, “Every company involved must do a better job of making sure these products are safer than what we found in our testing. We found one troubling example. How many others are still out there?”

The incident underscores the growing need for strict safety measures and regulations as AI toys gain popularity among children worldwide.