NYC Officials Are Mad Because Journalists Pointed Out The City’s New ‘AI’ Chatbot Tells People To Break The Law
Countless sectors are rushing to implement "AI" (undercooked large language models) without understanding how they work, or making sure they work. The result has been an ugly comedy of errors stretching from journalism to mental health care thanks to greed, laziness, computer-generated errors, plagiarism, and fabulism.
NYC's government is apparently no exception. The city recently unveiled a new "AI"-powered chatbot to help answer questions about city governance. But an investigation by The Markup found that the automated assistant not only doled out incorrect information, it routinely advised city residents to break the law across a wide variety of subjects, from landlord agreements to labor issues:
"The bot said it was fine to take workers' tips (wrong, although they sometimes can count tips toward minimum wage requirements) and that there were no regulations on informing staff about scheduling changes (also wrong). It didn't do better with more specific industries, suggesting it was OK to conceal funeral service prices, for example, which the Federal Trade Commission has outlawed. Similar errors appeared when the questions were asked in other languages, The Markup found."
Folks over on Bluesky had a lot of fun testing the bot out, finding that it routinely provided bizarre, false, and sometimes illegal advice:
There's really no reality where this sloppily implemented bullshit machine should remain operational, either ethically or legally. But when pressed about it, NYC Mayor Eric Adams stated the system will remain online, albeit with a warning that the system "may occasionally produce incorrect, harmful or biased content."
But one administration official complained that journalists pointed out the whole error-prone mess in the first place, insisting they should have worked privately with the administration to fix the problems caused by the city:
If you can't see that embed, it's reporter Joshua Friedman reporting:
At NYC mayor Eric Adams's press conference, top mayoral advisor Ingrid Lewis-Martin criticizes the media for publishing stories about the city's new AI-powered chatbot that recommends illegal behavior. She says reporters could have approached the mayor's office quietly and worked with them to fix it.
That's not how journalism works. That's not how anything works. Everybody's so bedazzled by new tech (or keen on making money from the initial hype cycle) that they're just rushing toward the trough without thinking. As a result, undercooked and dangerous automation is being layered on top of systems that weren't working very well in the first place (see: journalism, health care, government).
The city is rushing to implement "AI" elsewhere as well, such as with a new weapon scanning system that tests have found has an 85 percent false positive rate. All of this is before you even touch on the fact that most early adopters of these systems see them as a wonderful way to cut corners and undermine already mistreated and underpaid labor (again see: journalism, health care).
There are lessons here you'd think would have been learned in the wake of previous tech hype and innovation cycles (cryptocurrency, NFTs, "full self-driving," etc.). Namely, innovation is great and all, but a rush to embrace innovation for innovation's sake due to greed or incurious bedazzlement generally doesn't work out well for anybody (except maybe early VC hype wave speculators).