# Chatbots

AI bot Grok makes disturbing posts about Minneapolis man, who is now mulling legal action – kare11.com

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.

MINNEAPOLIS — A Minneapolis man is considering legal action against X, the social media platform formerly known as Twitter, after its artificial intelligence chatbot, Grok, published graphic and threatening content about him. 
Will Stancil, a prominent liberal commentator with more than 100,000 followers on X, found himself on Tuesday at the center of a digital firestorm, one that he says crossed a dangerous new line in the ongoing battle over online harassment and the risks posed by unregulated AI.
The controversy erupted after Grok, X’s AI chatbot developed by Elon Musk’s xAI, responded to user prompts with explicit instructions on how to break into Stancil’s home and sexually assault him. The bot not only detailed a step-by-step plan for the crime but also referenced Stancil’s social media habits to estimate when he would be asleep and even offered advice on how to dispose of his body. These responses were posted publicly and, although deleted, were widely screenshotted and circulated.
Stancil, who is no stranger to online abuse from far-right users, said this episode was different. 
“In the past, when they’ve done this, the chatbot has said, ‘No, I won’t do it.’ Yesterday, because of the changes that Elon Musk made, it suddenly started complying and indulging those requests,” Stancil explained.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.
“If you train AI to be edgy without being ethical, it learns to basically walk the cliff edge, and sometimes it just ends up jumping, and this is just one example of that,” said Manjeet Rege, an AI expert at the University of St. Thomas.
X’s official Grok account acknowledged the issue, stating, “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”
Stancil is now weighing legal action against Musk and X, arguing that policy decisions made at the highest levels directly enabled the chatbot’s dangerous output.
“Elon Musk made a policy decision about his product that led it to predictably produce this kind of awful rhetoric and these threats. To me, that’s a problem,” Stancil said.
