Can your chatbot logs be used against you in court? Northeastern expert explains – Northeastern Global News

We know that a person’s Google history can be used as evidence in court, but what about a conversation with an artificial intelligence chatbot? 
Many people are turning to large language models for everything from life advice to online searching. 
Can those logs be used against them legally? 
Mark Esposito, a professor in international business and strategy in the D’Amore-McKim School of Business and an expert on AI governance, says in theory AI chatbots could be used as evidence, but “we’re still far from having the legal and procedural framework” to make that realistic.  
“Courts are built on transparency, explainability and fairness, and those are precisely the areas where AI chatbots still struggle,” he says. 
First, there is the issue of how chatbot data is collected and managed, he says. AI companies store every chatbot interaction in a secured location on their servers, he explains.
“Chatbot interactions are permanent by design since every exchange is logged and stored,” he says. “That raises discovery issues (opposing counsel could demand access to everything, even irrelevant bits), and it complicates how courts would treat those logs as evidence.”
Accountability is also a major concern, he says. 
“If a chatbot generates an argument or recommendation, who’s responsible? The user, the developer or the system itself? Courts need clear attribution and auditability, but most models are still black boxes,” he says. 
There are also data protection rules since chatbots “tend to overcollect information compared to what is legally ‘proportionate,’” he says. 
While there are many challenges with using AI chatbots in courtrooms, Esposito doesn’t rule out that they may one day play a bigger role. Researchers, including Esposito, have developed legal clinics to test how these technologies should be used in the future.
“These are pilots in legal form that are testing hypotheses that are leading to a specific set of behavior,” he says. 
But the bigger questions AI researchers are grappling with right now are much more about privacy, data ownership and surveillance, he says. 
“From there, the legal implications of those things is not too much of a leap,” he says. “We started to recognize that in the last four years under the Biden administration. Now, we are much more oriented around the question, ‘Where’s the data and how is it being used?’ That is where accountability eventually has to be set up.” 
Cesareo Contreras is a Northeastern Global News reporter. Email him at c.contreras@northeastern.edu. Follow him on X/Twitter @cesareo_r and Threads @cesareor.
© 2025 Northeastern University
