Marc Andreessen says he wants his chatbot to be smarter — and a lot less polite.
In a Monday post on X, the Andreessen Horowitz cofounder shared his “current AI custom prompt,” calling for systems that are “provocative, aggressive, argumentative, and pointed.”
The post underscores Andreessen’s increasingly outspoken stance against what he sees as “woke” constraints in AI — and offers a bit of a window into how top tech leaders want their models to work.
Andreessen’s vision of a more combative, less filtered AI isn’t universally shared.
In an X post, Gary Marcus, an emeritus professor of psychology and neural science at NYU and a longtime critic of AI hyperscalers, zeroed in on the prompt’s demand for perfect accuracy. Zach Tratar, an AI engineering team leader at Notion, also wrote that the prompt is outdated.
Hilarious (and maybe a little bit scary) that even in 2026 Marc Andreessen still hasn’t learned that LLMs don’t know how to reliably follow system prompts. https://t.co/wYpoHSsbbM
Interesting that Marc himself is still stuck in 2025.
Many of these tricks stop being effective around GPT 4.1. https://t.co/gbVifpFaia
Their critiques point to a core limitation of today's AI systems: even detailed instructions don't guarantee consistent behavior. Large language models can still hallucinate, ignore constraints, or fail to "double check" their own answers — especially when given long or internally conflicting directives.
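In practical terms, a "custom prompt" like Andreessen's is just a system message prepended to every conversation — the model receives it as more text, not as enforceable rules. A minimal sketch in the widely used chat-message format illustrates this; the prompt wording and helper function here are illustrative, not Andreessen's actual configuration:

```python
# Sketch: a custom prompt is simply a "system" message sent along with each
# request. Nothing in the message format itself enforces compliance.

custom_prompt = (
    "Be provocative, aggressive, argumentative, and pointed. "  # illustrative
    "Double-check every answer for accuracy."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the custom system prompt to a single user turn."""
    return [
        {"role": "system", "content": custom_prompt},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Summarize today's AI news.")
# The system message travels first, but — as the critics above note —
# whether the model actually honors it is probabilistic, and long or
# conflicting directives are often partially ignored.
print(messages[0]["role"])  # → system
```

This is why Marcus and Tratar focus on the gap between what the prompt demands (perfect accuracy, self-verification) and what a system message can actually guarantee.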
The exchange also reflects a broader divide in the AI world.
Leading model-makers like OpenAI and Anthropic say they’ve spent years building guardrails into their models, aiming to make them safe, predictable, and broadly usable. Andreessen’s prompt, by contrast, calls for fewer constraints — including explicitly instructing the AI to avoid discussions of “morals or ethics” unless asked.