The CFPB is just the latest federal agency to tell companies across industries that humans must be meaningfully involved when they deploy new AI tools to automate tasks such as hiring, streamlining processes, or tracking worker productivity.
Meanwhile, state and even local lawmakers are passing laws aimed at governing the use of AI in the workplace. And more rules from other states and cities are coming soon, experts agree.
“We're only at the beginning of a massive amount of regulation,” employment attorney Danielle Ochs from Ogletree Deakins said.
Not everyone is a fan of the new AI rules for the workplace. Business groups argue a confusing patchwork of obligations is emerging for employers, and the cost of compliance could put the use of some AI tools out of reach for small and medium-sized businesses.
But a common theme is already emerging from the combined guidance and regulations so far. When any AI tool is used in the workplace, humans should be involved to flag potential bias and discrimination, prevent the loss of confidential information, and make sure companies aren’t invading employees’ privacy.
In pushing back on the boom in "black box" scores of workers generated by AI and other algorithms, the CFPB said it's concerned about automated warnings or AI-powered recommendations for disciplinary measures, including the firing of workers, without direct human oversight.
Despite the potential risks, human resource departments are embracing AI quickly, recent studies show. One in four US organizations is currently using AI to support human-resource-related activities, with nearly two-thirds of those having implemented AI in HR within the past year, according to statistics from the Society for Human Resource Management (SHRM).
Among HR leaders whose organizations are currently using generative AI, 75 percent said the technology enhanced efficiency; 69 percent said it increased creativity; and 65 percent said it improved work quality, according to a survey from SHRM earlier this year.
— Preventing discrimination —
But using those AI tools without human oversight can lead to a host of issues, and a top concern for federal agencies is the perpetuation of bias and discrimination in the workplace.
Such fears are well founded, legal experts agree. Implicit bias can easily creep into AI tools that select potential job candidates if hiring decisions are being made based on problematic data. If an AI tool is only selecting male or young job candidates, for example, a human being could spot the issue and fix it. An AI tool wouldn't.
“One of the things I'm always talking about is the importance of doing vendor due diligence,” said Laura Malugade, an employment attorney with Husch Blackwell.
When a company is considering relying on or incorporating data from a third party into its employment decisions, people need to conduct due diligence on potential vendors, she said. "What data did they use to train their model?" she asked. "What is the model looking for?"
In May, the White House issued new guidance aimed at protecting workers from risks related to an employer’s use of AI, outlining key principles for the workplace.
Employees should have input into how employers use AI, and the technology should be used to assist, improve or complement their jobs, the White House said.
Employers should ensure that AI systems don’t violate or undermine workers’ rights, and they should be transparent with employees and job seekers about when AI is used and how it impacts their jobs. They should also establish clear governance systems and procedures, including human oversight, the White House said.
The Equal Employment Opportunity Commission has been addressing algorithmic bias and fairness issues since 2021. Now, it’s stepping up enforcement efforts around AI and machine-learning hiring tools and has designated the use of AI in employment as a top “subject matter priority.”
Last month, the US Department of Labor issued guidelines reminding employers of the principles and values they should consider when deploying and using AI in the workforce, including “meaningful human oversight for significant employment decisions” (see here).
“You cannot simply use these tools without human guidance in a way that might have an adverse impact ... under the statutes that that agency regulates,” said Ochs, who has trademarked her legal practice as the “Techplace,” the intersection of tech and employment.
States and even local governments have been stepping in, too. Illinois, Maryland and New York City all require employers to ask for consent before using AI during certain parts of the hiring process.
New York City began enforcing “Local Law 144” last year, which prohibits employers and employment agencies from using automated employment decision tools unless they’ve conducted an audit to find potential bias and provided required notices.
This year, Illinois amended the Illinois Human Rights Act to prohibit employers from using AI in a manner that creates discrimination or bias and to require employers to provide notice when they’re using AI.
The Colorado Artificial Intelligence Act, which takes effect in February 2026, includes provisions requiring employers that deploy AI to protect against discrimination. Under the law, employers classified as “deployers” must exercise reasonable care to protect against known or foreseeable risks of algorithmic discrimination.
Experts agree state and local laws like these are just the start, with more regulation of AI in the workplace on the way. It’s only a matter of time before all the large US states that have their own employee protections address the issue, Ochs said.
Not everyone agrees that’s a good thing. Business groups such as SHRM say they support “thoughtful legislation and regulation that promotes rather than stifles workplace and workforce innovation.”
But the regulatory efforts around AI in the workplace are creating a perplexing patchwork of obligations for employers, these groups argue, and the cost and uncertainty of compliance are putting some AI applications out of reach for small and medium-sized businesses.
“SHRM believes that overlapping laws and regulations regarding AI may lead to unintended consequences that create uncertainty and discourage workplace innovation,” Ken Meyer, president of the New York City chapter, told a Senate employment and workplace safety panel in September. “SHRM supports a uniform federal standard that provides a clear framework for how employers should strive to prevent unlawful bias when using AI.”
— Protecting trade secrets —
Discrimination isn’t the only potential legal issue worrying HR departments. Humans also need to be "in the loop" to make sure employees aren’t potentially leaking confidential information when they use AI, or taking trade secrets with them when they leave, legal experts say.
Employees are also using AI in the workplace in myriad ways, raising concerns about potential leaks of sensitive or confidential information to third parties. If employees are using an AI product like ChatGPT for their jobs, for example, proprietary data could become part of OpenAI’s training data.
Data that companies often treat as sensitive or confidential, such as financial information, would no longer be confidential once shared with an AI tool, and that could create legal problems for companies, said Michael Ryan, an employment attorney with Foley & Lardner.
Employees who leave could take that confidential information with them, and companies might have to sue to protect it, he said.
At the same time, the fact that confidential information was shared with an AI company like OpenAI, and potentially included in its training data, could be used to argue that a company didn’t take “reasonable steps” to protect that information, as required under federal or state trade secret laws, he added.
“A lot of the policies that we are drafting for employers these days focus on ways to protect that confidential information,” Ryan said.
— Protecting privacy —
Unchecked employee surveillance is another top issue for federal agencies. Gathering too much information about employees could violate their privacy, particularly when the data isn’t relevant to their jobs, federal regulators are warning.
This week, the CFPB released new guidance emphasizing that the use of AI and other tracking technologies in the workplace is subject to a privacy law that is more than 50 years old, the Fair Credit Reporting Act (see here). The guidance is part of a broader effort by the CFPB to respond to AI and other emerging technologies.
In a statement, the CFPB said when high-risk AI applications are involved in hiring or firing an employee, it's important to have human involvement in that decision.
AI surveillance is also a growing concern for agency officials. The same day the guidance was issued, CFPB Director Rohit Chopra said in a speech that he is hearing from workers who are required to wear a device or install an app that tracks their activities and movements.
The agency said in its guidance circular that worker scores could be used to make automated recommendations or determinations on a wide range of employee attributes, including pay, and to predict worker behavior such as “potential union organizing activity and likelihood that a worker will leave their job.”
Scoring or profile data could also be analyzed by AI and machine learning systems to schedule work shifts or assign job responsibilities, or to issue warnings or other disciplinary actions to employees, the CFPB said.
“With the rise of artificial intelligence, the data harvested about us can be used to power models that score us and put us into different categories. This data, the dossiers assembled about us, and the algorithmic scores about us may be sold for profit,” Chopra said in a speech in East Lansing, Michigan. “I have serious concerns about how background dossiers and reputation scores can be used in hiring, promotion, and reassignment” of workers.