Agentic artificial intelligence, which is capable of making decisions and taking actions with minimal human intervention, poses antitrust risks as it is another example of AI that pulls decision-making away from market participants themselves and stabilizes prices, a Washington, DC antitrust enforcer said.
“These are self-executing schemes and algorithms, and I think that deferral — again, that deferral of decision-making to a system that a company [knows] is acting quickly and responsively to its competitor, I think raises some antitrust risk, because now you don’t have companies, you don’t have firms, you don’t have entities doing the work of independently assessing competition in the market and making those independent decisions,” District of Columbia Assistant Attorney General Ashley Walters said Monday during an event* in Washington, DC.
Walters, a participant in a panel discussion on AI and its various effects on competition, had been asked about areas in which a dominant firm's introduction of AI into its operations might entrench its power, or allow the firm to extend that power.
“When firms are coalescing around the use of certain algorithms, even to the extent that they’re independent, if they’re self-executing, and there’s [knowledge] of how they work and how they’re coordinating, I think that’s, again, where there would be antitrust concern,” Walters said of agentic AI.
Walters had previously argued that the crux of identifying anticompetitive uses of AI boiled down to whether market participants are displacing their own “independent centers of decision making,” such as when competitors mutually rely on a pricing algorithm that eventually stabilizes prices, either by being fed competitors’ data or by referencing those prices itself.
An AI product or algorithm need not be intentionally designed, "on the coding level," to facilitate collusion in order to trigger antitrust risk, and even what Walters called "independent algorithms" — those serving only one market participant — can raise such risk where they ultimately have the effect of stabilizing prices, she said.
*"Washington Antitrust and Digital Markets Forum," MLex, George Washington University Competition Law Center, Forum Global, Washington, DC; March 23, 2026.