Boston University law professors Jessica Silbey, left, and Woodrow Hartzog are the authors of an upcoming paper entitled “How AI Destroys Institutions.”
Paul Carpenter, a New Orleans magician, describes using his computer and AI software during an interview in New Orleans, Friday, Feb. 23, 2024. Carpenter said he was hired in January by Steve Kramer, who has worked on ballot access for Democratic presidential candidate Dean Phillips, to use AI software to imitate President Joe Biden’s voice to convince New Hampshire Democratic voters not to vote in the state’s presidential primary.
Whatever benefits artificial intelligence might offer, it has become increasingly clear that the technology also engenders significant harmful side effects, such as increasing carbon emissions and allegedly encouraging suicides.
In a new paper, a pair of legal researchers warn of an additional, profound and potentially far-reaching AI hazard. They argue that the technology is set to undermine democracy in America by damaging and potentially destroying the institutions that undergird it, including the rule of law and journalism.
“AI is anathema to the well-being of our critical institutions,” wrote Boston University law professors Woodrow Hartzog and Jessica Silbey in the draft of their paper, entitled “How AI Destroys Institutions.” As a result, they wrote, “absent rules mitigating AI’s cancerous spread, the only roads left lead to social dissolution.”
In their paper, which they said will be published in the UC Law Journal later this year, Hartzog and Silbey focus on institutions, which they define not as individual organizations but as the particular “field(s) of human action” in which such organizations operate, together with the values and norms of those fields. Under that definition, a hospital such as Zuckerberg General Hospital isn’t an institution, but the field of medicine or health care is.
The pair also take a broad view of AI. Their paper looks at the combined effect of not just generative AI systems such as OpenAI’s ChatGPT chatbot, but facial-recognition systems and similar predictive technologies, as well as automated decision systems, such as those sometimes used to set bail or to review job candidates.
Hartzog and Silbey argue that AI weakens institutions in three ways. First, it hinders people within institutions from developing the knowledge, skills and expertise needed to maintain or reinvigorate them. Second, AI systems used within institutions reduce or eliminate the role of humans in decision-making and deliberation, eroding the institutions’ legitimacy in the eyes of those affected by the decisions and weakening their ability to respond to changing circumstances.
Third, the technology isolates people, reducing their ability to learn from, debate with, understand and reach consensus with others who have different views or knowledge. Institutions can’t function without that consensus or without the mutual respect that comes from interpersonal connections, the pair wrote.
Much of the social and human progress seen in the 19th and 20th centuries was built on the development of crucial institutions such as higher education and the legal system, Silbey told The Examiner.
Thanks to AI, she said, “I think we’re seeing [those institutions] erode right in front of our eyes.”
Jessica Silbey and Woodrow Hartzog’s paper examines the use of artificial intelligence beyond the use of generative AI systems such as Anthropic’s Claude chatbot.
How the public and governments respond to the potential threats of AI, including to institutions, will be of great importance to San Francisco. The City has become ground zero for the AI industry, home to the two best-funded private companies in the sector — OpenAI and Anthropic, the latter of which reportedly signed a 13-year lease Friday to occupy a 27-story downtown office tower — and numerous smaller ones.
Thanks to those companies and the researchers and developers they deploy, San Francisco has garnered an outsized share of venture-capital investment in recent years. And the industry’s surge has helped spark a revival in The City’s downtown and a rebound in its depressed office real-estate market.
But The City’s citizens also stand to be harmed by the technology, particularly if it undermines democratic governance.
Hartzog and Silbey argue that AI is doing just that — by hindering expertise, replacing human deliberation and isolating people, the technology is harming institutions as disparate as medicine, the family, and religious and financial institutions.
But in their paper, they focus specifically on how AI is damaging the rule of law, higher education and journalism.
There have been numerous cases in which lawyers have filed court documents containing fake citations made up by AI systems. But in looking at how AI is harming the rule of law, Silbey and Hartzog chose not to focus on that issue. Such practices can harm the careers and reputations of the lawyers who used AI for their research, and of their firms, but the pair instead examined what they see as the higher-level threat AI poses to the institution as a whole, Silbey said.
As she and Hartzog lay out in their paper, that threat comes largely from the offloading of decision-making to automated systems, on questions such as how much bail to set, how long criminal sentences should run, how benefits should be calculated or whom the IRS should target for audits. That practice is becoming increasingly widespread, they wrote, driven by the sense that such systems are free from human bias and can make determinations dispassionately and accurately.
But such systems are essentially black boxes, they argue. It’s unclear exactly how they make their decisions or how they weigh particular factors. That undercuts the legitimacy of those decisions, Silbey and Hartzog argue. It also makes them unpredictable, they say — and that in turn violates the notion of equal justice under the law, because there’s no way to know whether the systems will apply the law in the same way in similar situations.
“Algorithmic invasions of our legal institutions subvert the reason we believe in and follow the rule of law,” they wrote in their paper.
Students during a class taught by Benjamin Breyer, a professor in Barnard’s first-year writing program who developed an AI chatbot that helps students develop thesis statements, in Manhattan on Oct. 29, 2025.
By contrast, AI is harming higher education in multiple ways, Silbey and Hartzog say. It undermines the development of knowledge and expertise, the foundations of that institution, by encouraging people to offload tasks that require deep thought.
Because of the way they are designed, AI systems produce homogenized content and suppress or discourage exceptional thoughts or insights, Silbey and Hartzog argue. Large language models such as the one underlying ChatGPT generate sentences and paragraphs by essentially predicting the most likely next words, based on the vast quantities of documents they’ve been trained on.
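That next-word mechanism can be seen in a toy bigram model, sketched below in Python. This is a hypothetical, minimal illustration, not any production system: the tiny corpus and the greedy always-pick-the-most-likely-word decoding are assumptions for demonstration, but they show how choosing the statistically most probable continuation yields the same bland output every time.

from collections import Counter, defaultdict

# Minimal, hypothetical sketch: a bigram "language model" built from
# a tiny corpus. Real systems are vastly larger and usually sample
# with some randomness, but greedy decoding makes the point starkly.
corpus = "the cat sat on the mat and the cat sat on the rug".split()

follows = defaultdict(Counter)  # word -> counts of the words that follow it
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        # Always emit the statistically most likely next word.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # prints "the cat sat on the cat sat" every time

Run it twice, or a thousand times, and the output never varies; nothing rarer than the most common pattern ever surfaces.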
Such technologies also prioritize fields of inquiry that can be easily quantified, thus neglecting or even discouraging areas such as the humanities that are less amenable to that type of study.
And the use of AI in higher education — particularly by professors and instructors — threatens to undermine trust in the system and its legitimacy, they wrote. They note a recent incident in which Northeastern University students were reportedly upset to learn their professor had used ChatGPT and other AI tools to help create his lesson plan; one even demanded a tuition refund.
“AI is anathema to the institutional structure of higher education,” Hartzog and Silbey write.
With journalism, the pair expressed particular worry about the effects of AI slop, the videos and other cheap and “thoughtless” content generated by such automated systems. The glut of such content, they say, threatens to drown out legitimate news reporting and to overwhelm accuracy and truth with algorithmically generated falsehoods and inaccuracies.
Such slop is likely to build on itself, they argue, pointing to studies indicating that as AI models are increasingly trained on their own outputs, their reliability degrades. The more journalists rely on AI systems for their research, the more susceptible their work will be to its flaws, Hartzog and Silbey argue.
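That degradation can be illustrated with a crude, hypothetical simulation, a sketch of the feedback loop rather than the cited studies’ actual method: fit each new “generation” only to the previous generation’s outputs while, as generative models tend to do, under-sampling rare cases. The range of what the model can produce collapses within a few rounds.

import random
import statistics

# Hypothetical toy, not the cited studies' setup: each generation is
# fitted only to the previous generation's outputs, and rare "tail"
# cases are dropped, mimicking how generative models under-sample
# unusual material. Watch the spread of possible outputs shrink.
random.seed(0)
mu, sigma = 0.0, 1.0  # the original, "real" distribution

for generation in range(1, 7):
    samples = [random.gauss(mu, sigma) for _ in range(10_000)]
    # Only "typical" outputs, within one standard deviation, survive.
    kept = [x for x in samples if abs(x - mu) <= sigma]
    mu, sigma = statistics.mean(kept), statistics.stdev(kept)
    print(f"generation {generation}: spread = {sigma:.3f}")

By the sixth generation the spread has fallen to a small fraction of the original: the toy-model analogue of a system that has forgotten everything unusual.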
Journalists use their judgment to determine what’s news. By necessity, they adapt their work to changing political, economic and social environments. And they sometimes have to tell their audience things it doesn’t want to read or hear.
Cameras and lights staged outside the West Wing of the White House in Washington, Feb. 6, 2025. A pair of Boston University researchers write in a soon-to-be-published paper that they are particularly worried about the impact of AI slop, the videos and other cheap and “thoughtless” content generated by such automated systems, that threatens to drown out legitimate news reporting.
But AI can’t adapt in the same way, Hartzog and Silbey argue. Because the technology is basically a pattern-matching system, it can’t really determine what’s news — something it possibly hasn’t seen before. At the same time, the tendency of generative AI chatbots in particular to be sycophantic, to tell users what they want to hear, may well steer people away from journalistic outlets, the pair say.
Silbey said the dangers AI poses to journalism and higher education are what she worries about most. Making rational policy decisions, determining the best candidates in elections and more all depend on shared sets of facts and on the generation and dissemination of new knowledge, she said.
“Journalism and higher ed — these knowledge-producing institutions in our everyday life — I think are foundational to not returning to a Dark Ages and a kind of feudalism,” Silbey said.
She and Hartzog acknowledge that many American institutions were ailing long before OpenAI released ChatGPT three years ago. What they fear is that the sheer scale of the technology’s deployment and availability and the degree to which its output mimics that of actual people will exacerbate those institutions’ problems.
“The goal of the paper was really to give a way of thinking about and diagnosing the specific ways in which AI threatens to further enfeeble to the point of destruction a lot of these already ailing institutions,” Hartzog said.
Damien P. Williams, a professor of philosophy and data science at the University of North Carolina at Charlotte, said the arguments Hartzog and Silbey make are compelling and echo some of his own writing and thinking. Framing the technology’s impact in terms of its effects on institutions, he said, was an interesting and worthy tack.
Williams concurs with the researchers’ assessment that whatever the current state of American institutions, AI isn’t going to help them.
“Things might be on fire — but if you pour gasoline on that fire, it’s definitely going to get worse,” he said.
The paper is a kind of call to action, said Gary Marcus, the author of “Taming Silicon Valley: How We Can Ensure That AI Works for Us.” It highlights the importance of maintaining strong institutions and effectively lays out the threat current forms of AI pose to them, he said.
A frequent critic of generative AI, Marcus has called for the development of new versions of the technology that would combat its tendency to make things up, in part by incorporating actual knowledge of the physical world.
“We can certainly imagine AI that would be more trustworthy, and that would be a good start,” said Marcus, an emeritus professor of psychology and neural science at New York University. “But we also need to rethink how we are going to rebuild our world to address these new technologies.”
Cal State Fullerton computer-science student Dianella Sy shows off a generative AI image made for a team project during a summer “AI Boot Camp” held at Cal Poly SLO in July.
Despite their paper’s title, Hartzog and Silbey, like Williams and Marcus, said they don’t see AI’s destructiveness as inevitable. They said they are encouraged by the debate about, and pushback against, the technology that is starting to take place within their own university and in broader society.
At Boston University, they and their colleagues have been forming committees to study the technology and have been discussing how, where and when AI should be used, the researchers said. Hartzog said the important thing is not simply to accept the tech industry’s narrative that AI’s spread into every sector of society is inevitable.
Also crucial to preventing AI from destroying institutions, they said, is for society and the institutions themselves to put rules in place to protect them and the values they represent.
Such rules would “serve as a signal to all of us about the sort of world that we want to shape and the sort of skills that we want to encourage and the sort of relationships we want to foster,” Hartzog said.
“While it may seem like trying to push back the tide, I think it’s important,” he said.
If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at (415) 515-5594.