
IAPP Global Summit puts focus on technology’s human cost

By Madeline Hughes, Maria Dinzeo, Amy Miller and Emma Whitford

April 2, 2026, 16:29 GMT
When privacy professionals descended on Washington, DC, this week, they knew they were walking into a conference focused equally on privacy and artificial intelligence. But they may not have anticipated a looming anxiety, summarized by Prince Harry in his keynote: public trust in technology is “broken.” To remedy this, speakers across two days urged attendees to keep their eyes on the simple and fundamental goal of protecting the public.
As artificial intelligence increasingly complicates corporate governance across industries, privacy professionals were reminded this week in Washington, DC, that they need to get back to basics: protecting people.

The International Association of Privacy Professionals rebranded its annual summit this year to reflect a growing focus on AI. With that shift came a new sense of anxiety, palpable from panel discussions to keynote addresses, over murky governance frameworks, unclear lines of responsibility and even personal issues such as social isolation.

But the basics of compliance still apply. From practicing cybersecurity hygiene and spotting phishing attempts to making sure companies aren’t engaging in unfair and deceptive practices, speakers emphasized the roles humans still play in preserving privacy, and the very real human consequences of privacy violations.

AI companies are meeting the moment, Google Chief Legal Officer Kent Walker said in his keynote address Monday. They have a commercial incentive to do so, he said (see here). 

“This isn’t just privacy by default or even privacy by design,” he said. “It’s privacy by innovation, with AI labs and developers competing as much on demonstrable privacy techniques as they do on quality.”

The conference floor was full of vendors pitching their latest AI compliance tools aimed at filling potential gaps in corporate governance and transitioning from checklists to continuous, self-updating programs.

But the mistakes of the past loomed over the event. In two historic verdicts last week, jurors in New Mexico and California found Meta Platforms and YouTube liable for design features on their platforms that caused harm to children (see here and here). 

The children’s stories resonated with Prince Harry, who said he met with families throughout the California trial. Technology companies have long benefited from public trust, Prince Harry said during a closing keynote address on Tuesday.
 
“But that trust has been broken,” he said. “Not suddenly, but steadily” (see here). 

Federal regulators aren’t immune to technology-induced anxiety. During a fireside chat on Monday, US Federal Trade Commissioner Mark Meador fielded a question about Australia's recent social media ban for children. “Importantly, there was also a social media ban instituted in my house,” he said. “That's mostly for my sanity.”

US state regulators, who are gaining prominence and building bridges to their counterparts in Europe and elsewhere around the world, told the conference they plan to be increasingly active enforcers and are expanding their internal capacity to bring cases (see here).

“The army is amassing troops, and it's happening across states at the same time that states are enacting comprehensive privacy laws,” said Michael Macko, the enforcement chief of the California Privacy Protection Agency, commonly known as CalPrivacy. 

Throughout the conference, MLex reporters conducted exclusive interviews with regulators from Brasilia to Ottawa (see here and here), as well as other important privacy industry figures (see here).

— Agentic AI enters the scene —

The arrival of agentic AI is making it harder to protect individuals from potential privacy violations and is raising new questions about legal and ethical responsibilities, speakers said at multiple sessions.
 
Unlike generative AI, agentic AI operates autonomously and proactively, and the speed and complexity of agents’ decisions can make meaningful oversight challenging. The ability of agents to work together to accomplish a single task highlights the need for dynamic governance frameworks.

Without controls, AI agents may pursue tasks beyond what a human intended, take actions without authorization or access protected data, attorneys said. For now, that leaves companies and their vendors grappling with which individuals or departments should be held accountable when things go wrong.

“We’re using an AI agent for marketing purposes, and the vendor put all the responsibility on us,” said Noga Rosenthal, chief privacy officer for Ampersand, a TV advertising company. “So that’s where we are, and I actually pushed back, and I said, ‘there’s ways of you mitigating the risk.’”

But basic compliance principles “continue to matter,” said Mary Ann Le Fort, who leads the privacy governance, AI governance and cyber advisory functions at Priceline. These include transparency, trust and clarity. 

Consensus did not emerge this week about how advanced technologies should be regulated. But several speakers argued that innovation and guardrails are not antithetical to one another, and that without rules of the road, trust will remain a pipe dream.

“We can have innovation and privacy, and in fact, now we’re seeing we cannot have innovation without privacy,” Canada’s Privacy Commissioner Philippe Dufresne told MLex on the sidelines. “Because that’s the safeguard that will bring that necessary trust for innovation.” 

—Additional reporting by Matthew Newman and Mike Swift in Washington. 

*IAPP Global Summit 2026: Privacy-AI Governance, Washington, DC, March 30-April 2, 2026.

Please email editors@mlex.com to contact the editorial staff regarding this story, or to submit the names of lawyers and advisers.
