AI tools are being asked to draft laws. Can we really trust the process?

By Frank Hersey

(October 30, 2024, 20:46 GMT | Comment) -- This article is from MLex's new AI regulation news service. To follow all the latest developments in AI regulation around the world visit MLex AI.


Laws partially and fully drafted by generative AI are already passed or are moving through the process in jurisdictions around the world — including legislation on AI itself.

US lawmakers who say they’re using the technology to speed up the drafting process and free up time for constituents tell MLex they have received no complaints — quite the opposite. A similar move in Brazil, though, did not go so smoothly.

Politicians across the board have taken to using generative AI to draft the likes of press releases and speeches, and more ambitiously to shift the tone of communications right or left to appeal to particular constituencies.

Using it to draft new legislation, then, is arguably a logical next step, although generative AI still struggles with amending legal texts, one lawmaker told MLex, and remains prone to hallucinations. There also appears to be a reliance on a single drafting tool: OpenAI’s pioneering ChatGPT.

How far a jurisdiction wants to go with AI tools for penning laws is not only a question of the technology’s effectiveness; it also speaks to an appetite for risk and the level of trust in the tools.

— First drafts —

The US state of Arizona passed a bill in May to regulate deepfakes in elections. Part of it was written by ChatGPT. The bill’s author, state representative Alexander Kolodin, used the chatbot to provide the definition of what a deepfake is. This definition was used in the bill, which passed unanimously in both chambers of the state legislature.

Kolodin is gung-ho about the technology’s potential across various areas of his role and attributes its acceptance to the kind of state Arizona is.

“Legislative bodies change slowly, but I never got any blowback for [using gen AI],” Kolodin told MLex. Arizona is a “very tech forward state, and I think that [for] the general public, it made a lot of sense to why you would use a tool like this, and they seem to think it was a sensible and clever thing to do.”

Kolodin said he finds AI “incredibly useful,” saying it “helps you to do your job in the absence of adequate staffing.” He believes it makes him more efficient and is “the only way” he can match voter expectations of his role. 
While he finds the tools in some ways more useful than human assistants, Kolodin would be concerned about less experienced lawmakers using AI research and generative AI applications, as they would be less likely to spot errors.

He thinks that all lawmakers should be upfront about their use of the technology and wonders whether a lot more have used it for drafting bills without saying so.

“Voters elected you because they trust you and they trust your judgment, right?” Kolodin said. “And so as long as you're just always forthright with the voters about what you're doing, they tend to respect that.”

— Federal pressure —

Ahead of Kolodin, back in February 2023, Massachusetts senator Barry Finegold caused waves when he introduced “an act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT.” 

He wanted to make a point about the fact that AI technologies are going to enter every aspect of life, and federal lawmakers need to get ahead of it — in contrast to how the social media horse was allowed to bolt from the stable a few years ago.

“People have really responded” to his use of generative AI, Finegold told MLex. “I think it's going to be used, especially in legislatures where they don't have staff.”

People are getting used to it, just as they got used to the shift from typewriters to word processors, he said. Generative AI drafting entire pieces of legislation is “definitely something that’s on its way, coming in the future.”

He’s considering using the technology in his own future drafting of legislation, alongside traditional methods.

The state bill now accompanies another on cybersecurity and AI that is running out of time in the US electoral cycle, but Finegold said he remains “hopeful” that it will be passed.

Kolodin sees the impact of AI as likely to be greater at the state level than in Congress, simply because of the sheer volume of legislation that passes through the states rather than the center.

— Beyond drafting —

One thing that Kolodin stressed was that AI tools seem to be better for drafting new text for legislation, although much of his work is about reviewing and amending it. The software has a flaw: “ChatGPT is not really great at handling strikeouts,” he said.

But it’s useful in many more aspects of his job, he said. For example, he will submit a first draft of legislation and ask the software for counterarguments and proposals to tighten the language; Kolodin finds it accurate in predicting what humans might say. He then revises the draft before showing it to colleagues, which helps make a better first impression and much improves the bill’s chances of making progress.

“My natural tone is to speak as a very conservative Republican,” Kolodin said, something that might be unhelpful when trying to appeal to a broader audience. “I will run it by people who are not very, very conservative Republicans, and they'll look at it, and they'll go: 'You sound extremely far right,' ” he said. So he uses tools such as ChatGPT to shift the tone of his press release or speech to take it towards the center ground.

— Error and influence —

Kolodin referred to using specific legal large language models, or LLMs, such as Lexis+ AI by LexisNexis.* He uses it to upload new legislative fact sheets or bills to start summarizing and analyzing. He also uses the function to establish whether an idea for new legislation is in fact already covered by existing law.

But even specially trained legal LLMs make mistakes. Researchers at Stanford found that while they hallucinate less than general-purpose models such as OpenAI’s GPT-4, legal models can still hallucinate up to a third of the time, despite vendors’ claims to have removed such risks. Relying on these specialized models could still distort lawmakers' research.

Their drafting is also at risk. All the lawmakers spoken to for this article used OpenAI’s ChatGPT. The developer’s terms don’t allow certain political uses, such as the creation of tools for outbound political campaigning. Two individuals who told MLex of their plans to stand for election while promising to do all their work via ChatGPT both fell foul of the terms following media coverage of their plans (see here).

OpenAI told MLex that it is improving its monitoring activity but expects to also take inputs from external sources such as civil society and journalists to identify uses that breach its terms.

Various generative AI developers updated their terms and technology for this year's raft of elections (see here). But politicians are nevertheless using the tools. This poses a risk of one developer having significant influence over the drafting of political materials. Its systems can already detect prompts for political queries, suggesting the responses could be tailored to a certain political leaning if so desired.

— Cultural divide —

One defense against this could be a cultural reluctance to use generative AI in lawmaking, at least for now. More and more US lawmakers of all stripes are using it, but there are fewer reports elsewhere so far. If cultural acceptance is delayed, other suitable AI tools may emerge over time offering alternatives to OpenAI. 

One Brazilian lawmaker in the city of Porto Alegre used ChatGPT to draft a bill preventing the city from charging residents to replace stolen water meters. He didn’t change a word, nor did he tell the fellow lawmakers who passed it unanimously last October, the Associated Press reported.

When he let the cat out of the bag by bragging on social media, controversy ensued. His colleagues were not happy to have been part of the experiment, which is thought to have produced the world’s first legislation fully drafted by AI.

Had he not spoken, perhaps only he and OpenAI would ever have known.

UK lawmakers told MLex that they are unaware of anyone in their circle using AI to draft legislation, with one suggesting as a particular challenge the peculiar style that drafters of legislation must employ. But they are using technology such as Perplexity AI, a research and conversational search engine, to summarize legislation and identify any gaps, with the same risks of hallucination.

AI is already firmly embedded in the legislative process. In cultures where its use may be frowned on as a shortcut rather than seen as a proud example of technological prowess, outcries are likely to follow, but the use of AI in lawmaking will surely continue to develop.

And as it becomes more accepted, despite the risks of hallucination and market concentration, a shift in the concept of the role of lawmakers will likely come with it — possibly even in the expectation that they remain human beings (see here).

*LexisNexis is the parent company of MLex.

Please e-mail editors@mlex.com to contact the editorial staff regarding this story, or to submit the names of lawyers and advisers.