
AI toys need regulation, new safety kitemarks, systemic impact study urges

(March 13, 2026, 11:06 GMT | Official Statement) -- MLex Summary: Toys enabled with generative AI that talk to children should be more tightly regulated and carry new safety kitemarks, according to the first systemic study of the technology's impact on young children, conducted by the University of Cambridge. The authors set out recommendations and call for access to generative AI models for use in toys to be restricted to developers that adhere to them. They recommend regulation to limit toys' ability to affirm friendship, labelling of developmental appropriateness, and that regulators themselves be aware of the social and emotional aspects of early-years development. Statement follows. The report is attached....
