Photo by Markus Winkler on Pexels.com

Last July, San Francisco rolled out Copilot to 30,000 city employees as part of the largest AI deployment by any local government in the country. The rollout marks a major shift as city departments adopt AI tools for everything from stroke detection to drone surveillance. 

Unfortunately, the city’s mandatory AI transparency report, which was just released, provides us with little information about how this transformation is actually working. This lack of accountability is a flaw in San Francisco’s grand experiment in AI-powered government. We are automating the delivery of public services, but not automating transparency about it.

The city has made a significant bet on artificial intelligence, and with good reason: the public demands efficiency, and AI might supply it. However, any wager this ambitious comes with downsides. The only way to manage them is through transparency, and that’s where the trouble begins.

Last month, the city owed the public an inventory covering the shift to AI. Section 22J of the city code mandated the disclosures “to promote the ethical, responsible, and transparent use of AI tools” and to share “information critical to understanding those technologies.” What we got is not fit for that purpose.

Of more than 50 city departments, only eight submitted information. That means 85 percent of departments reported nothing. Zero from Public Works, Planning, or Building Inspection. Silence from the Department of Technology itself and from the educational sector.

The reported data is bare-bones. We don’t hear about any cases of AI errors in city work. We don’t read about privacy breaches, AI hallucinations, contract overruns, or training problems. We don’t learn much about how the new data is being stored or what other government bodies might access it. 

We also don’t know what outcomes might reflect racial, gender, or other biases. The Electronic Frontier Foundation has made the point that detecting bias in AI systems requires open collaboration. Without comprehensive reporting, we’re flying blind.

And then there’s the Copilot question. The city’s signature AI effort gets zero coverage, probably due to a convenient loophole: AI tools used “solely for internal administration” are exempt from disclosure. So the largest AI commitment in city history is effectively free from public scrutiny. Copilot can apparently review our data, but we can’t review its data.

If we aren’t tracking this technology’s downsides, we can’t be confident of its upsides. To be fair, this is only the first version of one report. More information might come in the future.

But what we did learn shows that these questions are important right now. Public Health disclosed 18 AI systems, covering stroke detection and concealed weapons detection at Zuckerberg San Francisco General Hospital, as well as programs to draft discharge summaries and predict the likelihood of payment. Police listed six systems, including ones that listen for gunshots, operate body cameras, and manage evidence. Public Utilities reported a dozen, including AI-enabled drones deployed to monitor city streets.

These are powerful tools that impact real people’s lives. An AI system that detects strokes could save lives or miss critical cases. An AI billing predictor could unfairly flag low-income patients. A drone surveillance system in our skies raises clear privacy and civil liberties questions. These impacts demand public scrutiny.

There’s also the issue of money. If we aren’t closely tracking this technology, how can we know if we’re getting a good deal? Even with inexpensive software licenses, other expenses can add up quickly, such as training, equipment, electricity, cloud storage, and legal work. And what kinds of long-term contracts are we locking ourselves into?

Part of the problem is the city’s complex structure. The chief information officer says there is no authority to compel departments to comply, lamenting that there is “no gavel to bang on the table.” Reporting by departments is voluntary, at least for now.

AI might succeed in helping deliver public services as efficiently as we all hope, but only if we transparently discuss its flaws and benefits. Here’s a suggestion: make the first rule for AI in San Francisco city government immediate, automatic reporting on its own use. Real-time dashboards showing which departments use which tools. Monthly reports on AI-influenced decisions. Investigations and consequences when privacy is violated. Turn off any tools that don’t meet that standard.

If AI is powerful enough to help run city government, then it is powerful enough to withstand public scrutiny. This information can’t be kept secret.

Shum Preston is a former director of communications at the California Department of Justice and at Common Sense Media. He is a Certified Information Privacy Professional (CIPP/US) and is a member of Service Employees International Union Local 1000 (SEIU 1000).

The Voice welcomes submissions of unsolicited op-eds and letters to the editor. Acceptance and publication is solely the prerogative of The Voice; no payment is offered for op-eds and letters to the editor. Any opinions expressed in op-eds and letters to the editor are those solely of the writer(s) and do not necessarily reflect the opinions of The Voice of San Francisco, its staff, contributors, sponsors, or donors. Send op-eds and letters to: Editor@thevoicesf.org.