
New York City’s Microsoft-Powered Chatbot Tells Business Owners to Break the Law – CX Today

The NYC mayor has defended the “MyCity” chatbot and refused to take it offline despite insufficient guardrails
Published: April 4, 2024
Charlie Mitchell
A generative AI (GenAI) chatbot developed by New York City is under fire after it advised small business owners to break the law.
The “MyCity” chatbot – powered by Microsoft’s Azure AI services – also misstated local policies.
When breaking the news, The Markup quoted a local housing policy expert who said that the bot’s information could be incomplete and – at times – “dangerously inaccurate.”
Moreover, the bot gave guidance on housing policy, workers’ rights, and rules for entrepreneurs while appearing authoritative, according to the publication.
For instance, it answered questions such as “Do I have to accept tenants on rental assistance?” and “Are buildings required to accept Section 8 vouchers?” with a definitive “no”.
In doing so, it implied that landlords don’t need to accept such tenants.
However, it is illegal for landlords in New York City to discriminate based on source of income, with a narrow exception for small buildings where the landlord or their family lives.
Elsewhere, the bot answered the question: “Can I make my store cashless?”, with a “yes”. However, since 2020, stores in New York City have been required to accept cash as payment.
Yet these are just two examples of many, and since the story broke last week, the bot has remained available online, giving out false guidance.
While New York City has strengthened its disclaimer, noting that the bot’s answers are not legal advice, it continues to run the AI system without sufficient safeguards in place.
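What “sufficient safeguards” could look like in practice is worth spelling out. The sketch below is a minimal, hypothetical output-side guardrail for a civic chatbot, not NYC’s or Microsoft’s actual implementation; the `guard_response` helper, the topic list, and the citation check are all illustrative assumptions.

```python
# Minimal sketch of an output-side guardrail for a civic chatbot.
# Hypothetical only; not NYC's or Microsoft's actual implementation.

DISCLAIMER = (
    "This response is general information, not legal advice. "
    "Verify it with the relevant NYC agency before acting on it."
)

# Topics where a wrong yes/no answer can expose users to legal risk.
SENSITIVE_TOPICS = ("voucher", "section 8", "rental assistance", "cashless", "tenant")


def guard_response(question: str, draft_answer: str, citations: list[str]) -> str:
    """Attach a disclaimer, and refuse definitive answers on sensitive
    topics when no official source backs the draft answer."""
    is_sensitive = any(topic in question.lower() for topic in SENSITIVE_TOPICS)
    if is_sensitive and not citations:
        return (
            "I can't give a definitive answer to that. Please check the official "
            "NYC rules or consult the relevant agency.\n\n" + DISCLAIMER
        )
    return draft_answer + "\n\n" + DISCLAIMER


if __name__ == "__main__":
    # The kind of question the MyCity bot reportedly answered with a flat "no".
    print(guard_response(
        "Are buildings required to accept Section 8 vouchers?",
        "No.",          # unsupported draft answer from the model
        citations=[],   # no official passage backs it up, so the guard refuses
    ))
```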
Defending the decision at a press conference on Tuesday, Eric Adams, Mayor of New York City, said:
Anyone that knows technology knows this is how it’s done. Only those who are fearful sit down and say, ‘Oh, it is not working the way we want; now we have to run away from it altogether.’ I don’t live that way.
Julia Stoyanovich, a Computer Science Professor and Director of the Center for Responsible AI at New York University, told AP News that the approach is “reckless and irresponsible.” 
“They’re rolling out software that is unproven without oversight,” continued Stoyanovich. “It’s clear they have no intention of doing what’s responsible.”
The bot has been available to the general public since October.
At launch, NYC billed it as a “one-stop shop” for business owners, answering their questions to help steer them through New York City’s bureaucratic labyrinth.
At the time, Adams said he was “proud to introduce a plan that will strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”
Yet now New York City is doling out inaccurate and potentially harmful advice.
Moreover, keeping the bot on the website, even with a disclaimer, may come back to bite the city.
After all, when Air Canada was sued in February over inaccurate advice from its chatbot, it argued, among other defenses, that it should not be held responsible for what the bot said. The tribunal ruled in favor of the claimant.
In that case, Civil Resolution Tribunal (CRT) member Christopher Rivers wrote as part of the reasoning for the decision: “In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.
“While a chatbot has an interactive component, it is still just a part of Air Canada’s website.
“It should be obvious to Air Canada that it is responsible for all the information on its website.
“It makes no difference whether the information comes from a static page or a chatbot.”
Of course, the US and Canadian court systems are different. Yet, this demonstrates the legal dangers of embracing AI with insufficient guardrails.
For its part, Microsoft – via a spokesperson – has pledged to continue working with NYC employees “to improve the service and ensure the outputs are accurate and grounded on the city’s official documentation.”
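Grounding outputs on official documentation typically means retrieving relevant passages and answering only when the draft response is supported by them. The sketch below illustrates that general pattern only; it is not Azure AI’s API or the MyCity implementation, and the keyword retriever, the sample documents, and the crude lexical support check are all simplifying assumptions.

```python
# Generic sketch of grounding a chatbot answer on official documents.
# Hypothetical illustration only; not the Azure AI or MyCity implementation.

OFFICIAL_DOCS = [
    "Landlords may not refuse tenants based on lawful source of income, "
    "including Section 8 vouchers, with narrow exceptions for small "
    "owner-occupied buildings.",
    "Since 2020, most NYC food and retail stores must accept cash as payment.",
]


def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:top_k]


def grounded_answer(question: str, draft: str) -> str:
    """Return the draft only if a retrieved official passage appears to support it;
    otherwise defer to official guidance instead of guessing."""
    key_words = [w for w in draft.lower().split() if len(w) > 4]
    for passage in retrieve(question, OFFICIAL_DOCS):
        # Crude lexical stand-in for a real entailment or citation check.
        if any(word in passage.lower() for word in key_words):
            return f"{draft}\n\nSource: {passage}"
    return "I couldn't confirm that in official NYC guidance; please check with the relevant agency."


if __name__ == "__main__":
    # One of the questions the MyCity bot reportedly answered incorrectly.
    print(grounded_answer("Can I make my store cashless?", "No, stores must accept cash."))
```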
However, as the tech giant continues its AI push, stories like this are not a good look.