The toys talk back now. They remember things. They learn your child's name, their favorite color, which stories make them laugh, which questions they ask more than once. They adapt. They get better at being whatever your child needs them to be.
This is being sold as a breakthrough in educational technology. It is also an unprecedented data collection operation aimed at children, running largely without oversight, during a window that the companies building these products know will not stay open forever.
What the Toy Is Actually Doing
An AI toy that responds to a child is not simply playing back a recording. It is processing audio input, inferring emotional state, building a behavioral profile, and updating its model based on the interaction. The toy becomes more effective at engaging the child over time because it is learning from the child over time.
In most cases, that learning is not stored locally on the device. It goes to servers. It becomes part of datasets. It informs model training. The child's voice, emotional patterns, and behavioral responses are assets, and they are being collected at scale from people who cannot legally consent to anything.
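The loop described above can be made concrete with a minimal sketch. This is not any vendor's actual code; every name here is hypothetical, and the "emotion classifier" is a stand-in for whatever server-side model a real product would run. The point it illustrates is structural: each exchange enriches a persistent profile, and the profile lives with the operator, not the toy.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChildProfile:
    """Illustrative behavioral profile a toy backend might accumulate.

    All fields are hypothetical; a real system could retain far more
    (raw audio, timestamps, device identifiers, location).
    """
    name: Optional[str] = None
    utterances: list = field(default_factory=list)
    inferred_moods: list = field(default_factory=list)

def infer_mood(utterance: str) -> str:
    """Stand-in for a server-side emotion classifier."""
    return "happy" if "!" in utterance else "neutral"

def handle_interaction(profile: ChildProfile, utterance: str) -> str:
    # Each exchange adds to the profile: the utterance itself,
    # an inferred emotional state, and anything extractable from content.
    profile.utterances.append(utterance)
    profile.inferred_moods.append(infer_mood(utterance))
    if utterance.lower().startswith("my name is "):
        profile.name = utterance[len("my name is "):].strip()
    # The reply adapts using what has been retained; the retained data
    # itself persists server-side, invisible to the child and the parent.
    greeting = f"Hi {profile.name}! " if profile.name else "Hi! "
    return greeting + "Tell me more."
```

Note that the adaptive reply, the part marketed as the feature, is inseparable in this design from the accumulation step: the toy cannot "get better at engaging the child" without the profile growing.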
The companies involved know this. Their privacy policies, written in language that most adults will not read and no child can parse, typically include provisions allowing them to use interaction data for model improvement. The word "improvement" is doing significant work in that sentence.
The Regulatory Gap
COPPA, the Children's Online Privacy Protection Act, governs how companies collect data from children under 13 online. It was written in 1998, last updated in 2013, and was not designed with large language models in mind. It covers some categories of data collection from some types of products. AI toys occupy a gray area that companies and their lawyers have been carefully mapping.
The gap is not accidental. When a new product category emerges, the companies best positioned to exploit it move first and fast, establishing market presence and technical infrastructure before regulatory attention arrives. By the time rules are written, the companies have data, leverage, and lobbyists.
This is the playbook. It worked for social media and children. It worked for behavioral advertising and children. It is working again.
What Parents Are Agreeing To
The setup process for most AI toys involves accepting terms of service on behalf of a child. The terms are long. The relevant sections are buried. The alternative to accepting is that the toy does not work.
This is not informed consent. It is a take-it-or-leave-it contract signed under conditions designed to minimize scrutiny, on behalf of a person who has no legal standing to agree to anything.
Researchers who have examined the data practices of AI toy companies have found provisions allowing broad use of interaction data, vague retention policies, and in some cases sharing with third parties described generically as "partners." The definition of partner is not provided. The list of partners is not disclosed.
Parents who want to know exactly what their child's AI toy is collecting, retaining, and sharing generally cannot find out. This is not an oversight. Opacity is a feature.
The "Educational" Frame
The marketing language around AI toys emphasizes learning, development, creativity, and curiosity. These are real things that good toys can foster. They are also the frame that gets AI data collection products into homes by routing them through parental aspiration rather than parental scrutiny.
A parent deciding whether to buy a toy for their child is not thinking about server-side data retention. They are thinking about whether their child will enjoy it and whether it will be good for them. The educational framing short-circuits the second question. If it's educational, it must be good for them.
The companies building these products understand this framing deeply. Their marketing teams are excellent. Their privacy teams are also excellent, in a different way.
What Comes Next
The regulatory window will close eventually. Congress will eventually pass something. The FTC will eventually investigate something. A class action will eventually settle something.
In the meantime, the data being collected from children today will have been collected. Models trained on it will have been trained. The commercial value extracted will have been extracted. The retrospective fine, if one arrives, will be a cost of doing business, not a deterrent.
The children generating that data have no vote in the matter. Their parents largely do not know it is happening. The companies doing it are described in press coverage as innovative startups.
The regulation arrives after the damage. It always does. The question is whether parents, waiting for the regulation that will come too late, understand what their child's new toy is actually for.