AI Chatbots Are Invading Your Local Government—and Making Everyone Nervous
THE UNITED STATES Environmental Protection Agency blocked its employees from accessing ChatGPT, while US State Department staff in Guinea used it to draft speeches and social media posts.

Maine banned its executive branch employees from using generative artificial intelligence for the rest of the year out of concern for the state’s cybersecurity. In nearby Vermont, government workers are using it to learn new programming languages and write internal-facing code, according to Josiah Raiche, the state’s director of artificial intelligence.
The city of San Jose, California, wrote 23 pages of guidelines on generative AI and requires municipal employees to fill out a form every time they use a tool like ChatGPT, Bard, or Midjourney. Less than an hour’s drive north, Alameda County’s government has held sessions to educate employees about generative AI’s risks—such as its propensity for spitting out convincing but inaccurate information—but doesn’t see the need yet for a formal policy.
“We’re more about what you can do, not what you can’t do,” says Sybil Gurney, Alameda County’s assistant chief information officer. County staff are “doing a lot of their written work using ChatGPT,” Gurney adds, and have used Salesforce’s Einstein GPT to simulate users for IT system tests.
At every level, governments are searching for ways to harness generative AI. State and city officials told WIRED they believe the technology can improve some of bureaucracy’s most annoying qualities by streamlining routine paperwork and improving the public’s ability to access and understand dense government material. But governments—subject to strict transparency laws, elections, and a sense of civic responsibility—also face a set of challenges distinct from the private sector.
That’s a particular challenge for health care and criminal justice agencies.
Jim Loter, Seattle’s interim chief technology officer, says city employees have considered using generative AI to summarize lengthy investigative reports from the city’s Office of Police Accountability. Those reports can contain information that’s public but still sensitive.
Staff at the Maricopa County Superior Court in Arizona use generative AI tools to write internal code and generate document templates. They haven’t yet used it for public-facing communications but believe it has potential to make legal documents more readable for non-lawyers, says Aaron Judy, the court’s chief of innovation and AI. Staff could theoretically input public information about a court case into a generative AI tool to create a press release without violating any court policies, but, he says, “they would probably be nervous.”
“You are using citizen input to train a private entity’s money engine so that they can make more money,” Judy says. “I’m not saying that’s a bad thing, but we all have to be comfortable at the end of the day saying, ‘Yeah, that’s what we’re doing.’”
Under San Jose’s guidelines, using generative AI to create a document for public consumption isn’t outright prohibited, but it is considered “high risk” due to the technology’s potential for introducing misinformation and because the city is precise about the way it communicates. For example, a large language model asked to write a press release might use the word “citizens” to describe people living in San Jose, but the city uses only the word “residents” in its communications, because not everyone in the city is a US citizen.
The earliest government policies on generative AI have come from cities and states, and the authors of several of those policies told WIRED they’re eager to learn from other agencies and improve their standards. Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, says the situation is ripe for “clear leadership” and “specific, detailed guidance from the federal government.”
The federal Office of Management and Budget is due to release its draft guidance for the federal government’s use of AI sometime this summer.
The first wave of generative AI policies released by city and state agencies are interim measures that officials say will be evaluated over the coming months and expanded upon. They all prohibit employees from using sensitive and nonpublic information in prompts and require some level of human fact-checking and review of AI-generated work, but there are also notable differences.
For example, guidelines in San Jose, Seattle, Boston, and the state of Washington require that employees disclose their use of generative AI in their work product, while Kansas’ guidelines do not.
Albert Gehami, San Jose’s privacy officer, says the rules in his city and others will evolve significantly in coming months as the use cases become clearer and public servants discover the ways generative AI is different from already ubiquitous technologies.
“When you work with Google, you type something in and you get a wall of different viewpoints, and we’ve had 20 years of just trial by fire basically to learn how to use that responsibly,” Gehami says. “Twenty years down the line, we’ll probably have figured it out with generative AI, but I don’t want us to fumble the city for 20 years to figure that out.”