AI chatbot firms face stricter regulation to protect children in UK
The UK government is closing a “loophole” in its online safety legislation, making AI chatbots subject to the law’s requirement to combat illegal material, with fines or blocking as penalties for non-compliance.
After the government sharply criticized Elon Musk’s X over sexually explicit content created by its chatbot Grok, Prime Minister Keir Starmer announced new measures bringing chatbots such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft Copilot within the scope of the Online Safety Act.
The platforms will be expected to comply with “illegal content duties” or “face the consequences of breaking the law,” the announcement said.
This comes after the European Commission investigated Musk’s X in January for spreading sexually explicit images of children and other individuals. Starmer led calls for Musk to put a stop to it.
Earlier, Ofcom, the UK’s media watchdog, began an investigation into X over reports it was spreading sexually explicit images of children and other individuals.
“The action we took on Grok sent a clear message that no platform gets a free pass,” Starmer said, announcing the latest measures. “We are closing loopholes that put children at risk, and laying the groundwork for further action.”
Starmer gave a speech on Monday on the new powers, which extend to setting minimum age limits for social media platforms, restricting harmful features such as infinite scrolling, and limiting children’s use of AI chatbots and access to VPNs.
One measure announced would force social media companies to retain data after a child’s death, unless the online activity is clearly unrelated to the death.
“We are acting to protect children’s wellbeing and help parents to navigate the minefield of social media,” Starmer said.
Alex Brown, head of TMT at law firm Simmons & Simmons, said the announcement shows how the government is taking a different approach to regulating rapidly developing technology.
“Historically, our lawmakers have been reluctant to regulate the technology and have instead sought to regulate its use cases, and for good reason,” Brown said in a statement to CNBC.
He said that regulations focused on specific technology can age quickly and risk missing aspects of its use. Generative AI is exposing the limits of the Online Safety Act, which focuses on “regulating services rather than technology,” Brown said.
He added that Starmer’s latest announcement showed the UK government wanted to address the dangers “that arise from the design and behaviour of technologies themselves, not just from user‑generated content or platform features.”
There’s been heightened scrutiny around children and teenagers’ access to social media in recent months, with lawmakers citing mental health and wellbeing harms. In December, Australia…