

State and local governments are using AI for work. But should they?

Caption: The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, Tuesday, March 21, 2023, in Boston.

How public employees should use programs like ChatGPT in their day-to-day work, and whether generative artificial intelligence should be allowed at all, is currently governed by a patchwork of policies nationwide.

By synthesizing troves of information, generative AI programs like ChatGPT can summarize documents, write software code, and streamline the writing process. But the use of this technology in government is drawing controversy.

Maine has placed a moratorium on the use of artificial intelligence by all state agencies. In contrast, Boston's city government has encouraged employees to experiment and find ways to improve government functions.

Locally, the city of Seattle adopted a provisional generative AI policy in April. Washington state published its first generative AI guidelines in August. Both documents emphasize the need to use the technology in ways that keep transparency, privacy, and equity at the forefront.

A recent Iowa law barred certain books from public libraries based on their depictions of sex acts. In one district, an employee used ChatGPT to streamline compliance with the law, a process that resulted in books like Toni Morrison's "Beloved" and Margaret Atwood's "The Handmaid's Tale" being removed from shelves.

"I think it's a really interesting story because you have here a situation where there's a government employee who is being told that they have to do a job that they don't particularly want to do or agree with, and they have to do it very quickly," said Todd Feathers, a contributing reporter for WIRED magazine who recently wrote about how state and local governments are handling employee use of AI. "And it's going to be a pretty tempting offer to have these generative AI tools that can do a lot of work very quickly, without too much human interaction."

With no federal framework, different cities, counties, and states are taking different positions.

"I think the vast majority of city [and] state governments at this point don't really have any guidelines that have to do specifically with generative AI," Feathers said. "They may have guidelines that speak to AI tools in general, or to things like communicating with the public — rules that employees have to follow that will apply in some ways to various applications of generative AI. But for the most part, these things are still being created."

For its part, the city of Seattle is using a series of pilot programs to ascertain generative AI's utility for the city.

"There is interest among communications professionals in the city to use the technology to more quickly produce first drafts of written materials, produce executive summaries, or produce more readable versions of technical, legal, or legislative content for a more general audience," said Jim Loter, the Interim Chief Technology Officer for the City of Seattle.

Loter said the city is not using free, public tools like ChatGPT. Instead, it is testing use cases for AI products from Microsoft-backed OpenAI and Google, drawing on city data stored in Amazon Web Services environments. Using native databases built on internal data ensures a closed loop that protects data privacy and integrity, Loter said.

"With pretty carefully controlled parameters and pretty carefully controlled environments, it does provide a better user experience for people who are trying to find information, say, within hundreds of PDF files that exist out on the website, for example," Loter said.

Seattle's provisional policy directs employees to avoid using sensitive or confidential data; to review AI output for bias, prejudice, or harm; to make sure content doesn't infringe on, and isn't sourced from, someone else's intellectual property; and to retain prompts and responses in accordance with the city's public records standards.

"We've seen that the technology is optimized to sound human-like as much as possible, to the detriment of accuracy or adherence to the facts," Loter said.

He added that the city is currently looking at ways to more transparently note if something was produced by or with the help of generative AI.

"An example might be, 'This paragraph or this web page was produced in part by ChatGPT-4 and was reviewed for accuracy by a member of department X staff.'"

The city is currently reviewing recommendations from a policy advisory team made up of industry and research experts. The final policy is on track to be adopted by the time the provisional policy expires at the end of October. Once adopted, Seattle will continue to review the use of generative AI through quality assurance and testing practices.

"There's a lot of work left to do, but the policies and the guidelines will be out of the way, and then those will serve as the guardrails for us to do that work going forward," Loter said.
