
Counsel for Anthropic, OpenAI flag privacy tradeoffs in AI safety

By Emma Whitford

March 30, 2026, 20:04 GMT | Insight
The tension between product safety and user privacy is one of the “hardest questions that we have to grapple with on a daily basis,” Anthropic product counsel Mengyi Xu said Monday.

OpenAI Senior Counsel Danielle Kehl agreed, noting that the issues are currently “converging” in technology policy conversations focused on youth.

“I think that the direction that we are heading in is one in which there’s a recognition that there are going to have to be tradeoffs,” Kehl said, adding that companies should always act in the “most privacy-preserving way.”

In order to make artificial intelligence models safe, developers need to have visibility into how they are being used in “real life,” Xu said during a conference* panel alongside Kehl in Washington, DC.

“We’re doing a lot of research and [research and development] on how to do so in a privacy-preserving way, but that tension does exist at a fundamental level,” Xu said.

Even limited visibility into user activity can help improve the tools that identify potentially dangerous content, Xu said. “But in order to start that virtuous cycle you have to start somewhere.”

OpenAI is looking to minimize the number of ChatGPT user conversations it needs to inspect by using automated monitoring and classifiers, Kehl said.

The company has models that “run in the background that do a lot of our safety work,” Kehl said. These models are “looking for patterns in traffic that are indications of violative content in a ton of different areas. So there could be a safety issue here, it could be a violent-activities question.”

The challenge, the OpenAI attorney said, is settling on the appropriate amount of precision.

“Are you catching all of the bad content, which in order to do that often means catching a lot of perfectly benign content, or are you missing some content because you want to sort of reduce the amount of noise?” Kehl said.
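The tradeoff Kehl describes is the classic precision-recall balance in content classification. The following is a toy sketch, not OpenAI's actual system; the scores and labels are invented for illustration. Lowering the flagging threshold catches more violative content but also more benign content, while raising it reduces noise at the cost of missed detections.

```python
# Hypothetical (classifier_score, is_violative) pairs -- invented data,
# purely to illustrate the threshold tradeoff described in the article.
samples = [
    (0.95, True), (0.90, True), (0.85, True), (0.60, False),
    (0.55, True), (0.40, False), (0.30, False), (0.10, False),
]

def precision_recall(threshold):
    """Flag everything scoring at or above the threshold, then measure
    precision (share of flags that were truly violative) and recall
    (share of violative content that was flagged)."""
    flagged = [label for score, label in samples if score >= threshold]
    true_pos = sum(flagged)
    total_pos = sum(label for _, label in samples)
    precision = true_pos / len(flagged) if flagged else 1.0
    recall = true_pos / total_pos
    return precision, recall

# A low threshold catches all the bad content but sweeps in benign
# content; a high threshold reduces noise but misses some violations.
for t in (0.5, 0.8):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

With this invented data, the 0.5 threshold yields perfect recall but lower precision, and the 0.8 threshold flips that balance: exactly the "catching a lot of perfectly benign content" versus "missing some content" choice Kehl outlines.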

While she did not reference specific alleged safety issues on Monday, OpenAI is currently facing 12 wrongful death lawsuits in California state court, brought on behalf of youth and adults and stemming from users’ conversations with ChatGPT.

— Agentic AI brings new challenges —

Agentic AI, which tackles more complex tasks for users, raises unique challenges, Xu said, noting that in this arena, there are tradeoffs between security and innovation.

She said AI agents challenge a pre-existing dichotomy between malicious use of a technology by a third party and misuse by the primary user.

“You can have an agent that’s operating within the scope of consent ... fully aligned with the user’s goals, but then doing things in a resourceful way that actually exceeds what the user has intended that agent to do,” she said, and it’s not clear if that scenario falls on the misuse side, or the malicious use side.

The challenges posed by AI agents can’t be solved by “one layer” of the technology stack, Kehl said, but will require model and application developers working together to help users trust the product enough to give it permission to work for them.

“It’s really just a shared issue that I think we’re all going to have to deal with,” she said.

*IAPP Global Summit 2026: Privacy-AI Governance, Washington, DC, March 30-April 2, 2026.

Please email editors@mlex.com to contact the editorial staff regarding this story, or to submit the names of lawyers and advisers.
