AI Chatbots Are Invading Your Local Government—and Making Everyone Nervous

That’s a particular challenge for health care and criminal justice agencies.

Loter says Seattle employees have considered using generative AI to summarize lengthy investigative reports from the city’s Office of Police Accountability. Those reports can contain information that’s public but still sensitive.

Staff at the Maricopa County Superior Court in Arizona use generative AI tools to write internal code and generate document templates. They haven't yet used the technology for public-facing communications but believe it has potential to make legal documents more readable for non-lawyers, says Aaron Judy, the court's chief of innovation and AI. Staff could theoretically input public information about a court case into a generative AI tool to create a press release without violating any court policies, but, he says, "they would probably be nervous."

“You are using citizen input to train a private entity’s money engine so that they can make more money,” Judy says. “I’m not saying that’s a bad thing, but we all have to be comfortable at the end of the day saying, ‘Yeah, that’s what we’re doing.’”

Under San Jose's guidelines, using generative AI to create a document for public consumption isn't outright prohibited, but it is considered "high risk" due to the technology's potential for introducing misinformation and because the city is precise about the way it communicates. For example, a large language model asked to write a press release might use the word "citizens" to describe people living in San Jose, but the city uses only the word "residents" in its communications, because not everyone in the city is a US citizen.

Civic technology companies like Zencity have added generative AI tools for writing government press releases to their product lines, while the tech giants and major consultancies—including Microsoft, Google, Deloitte, and Accenture—are pitching a variety of generative AI products at the federal level.

The earliest government policies on generative AI have come from cities and states, and the authors of several of those policies told WIRED they’re eager to learn from other agencies and improve their standards. Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, says the situation is ripe for “clear leadership” and “specific, detailed guidance from the federal government.”

The federal Office of Management and Budget is due to release its draft guidance for the federal government's use of AI sometime this summer.

The first wave of generative AI policies released by city and state agencies consists of interim measures that officials say will be evaluated over the coming months and expanded upon. They all prohibit employees from using sensitive and non-public information in prompts and require some level of human fact-checking and review of AI-generated work, but there are also notable differences.

For example, guidelines in San Jose, Seattle, Boston, and the state of Washington require that employees disclose their use of generative AI in their work product, while Kansas' guidelines do not.

Albert Gehami, San Jose’s privacy officer, says the rules in his city and others will evolve significantly in coming months as the use cases become clearer and public servants discover the ways generative AI is different from already ubiquitous technologies.

"When you work with Google, you type something in and you get a wall of different viewpoints, and we've had 20 years of just trial by fire basically to learn how to use that responsibly," Gehami says. "Twenty years down the line, we'll probably have figured it out with generative AI, but I don't want us to fumble the city for 20 years to figure that out."