What We Talk About When We Talk About AI (Part Five)

Feb 20, 2026 | Artificial Intelligence, Class War | 23 comments
Last year, a talented programmer friend of mine decided to give vibe coding a try. Vibe coding is the practice of describing to an AI chatbot what kind of program you want, and letting the AI write it for you. In a matter of minutes you can have new software in front of you, and just start using it. At least, in theory. This is what LLMs (Large Language Models) are supposed to be best at — generating usable software for professional developers to make projects fast, cheap, and good.
(Finally all three!)
The act of vibe coding consists of prompting (asking) an AI model to write the different elements of a program, then piecing them together to create a working, finished project. It’s easy, quick, and in theory doesn’t require deep programming knowledge to accomplish great things. Vibe coding is why programming itself is one of the jobs the doomsayers in the AI world claim will soon be gone.
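In practice, the loop looks something like the sketch below. This is a minimal illustration, not any vendor’s actual API: the `complete()` helper here is a hypothetical stand-in for whatever chat model you happen to call.

```python
# Minimal sketch of a vibe-coding loop. `complete()` is a hypothetical
# stand-in for whatever chat-model API you actually use; no real
# vendor's interface is assumed here.

def complete(prompt: str) -> str:
    """Hypothetical LLM call: send a prompt, get generated text back."""
    return f"# (model output for: {prompt[:40]}...)"

def vibe_code(feature_requests: list[str]) -> str:
    """Ask the model for each piece, then ask it to stitch them together."""
    pieces = [complete(f"Write a Python function that {req}")
              for req in feature_requests]
    return complete(
        "Combine these functions into one working program:\n\n"
        + "\n\n".join(pieces)
    )

program = vibe_code([
    "loads a CSV file of expenses",
    "totals the expenses by category",
    "prints a monthly summary report",
])
print(program)  # in theory, working software; in practice, read on
```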
It’s all fun and games until you need it to work.
My friend’s project started surprisingly well, with the AI writing good, usable code. He started telling it to piece things together into a finished project, and then… it began to fall apart. Quickly the coherence of the work went downhill, which you can watch here. It’s funny, especially if you’re technologically inclined, but it’s still understandable if you’re not. In the end, my friend described the experience as “Pair programming with a goldfish.” (Though that does slander goldfish memories, which are much better than urban legend would have you think.)
This is emblematic of the unknowns hanging over all of this AI business. Eventually, the most important question AI faces is one of thermodynamics: what requires more energy, a guy with a Dr Pepper habit, an apartment, and a cat, or the equivalent server farm trying to do his job? Right now, we don’t know the answer, partly because the AI companies aren’t telling us how their usage breaks down. They may not even know themselves, because they may not have looked. They are still in the Mark Zuckerberg mindset of “Move fast and break things,” though it’s no longer de rigueur to announce that’s what you’re doing.
There are true facts about the planet that make this vision of an AI-driven world very unlikely.
Your brain runs on about 20 watts, roughly a quarter of an old incandescent lightbulb, even when thinking really hard.
First, biological life is still the OG of energy efficiency, especially in the energy cost of thinking, so humans will always be hard for any digital system to beat. That is, if you’re actually paying a reasonable price for your AI company’s energy. The American political and social system has, from its beginning, been largely based on overworking, underpaying, and subjugating anyone who isn’t among the elite. It’s not just us: most of humanity already comes cheap, and makes do with not enough. Servers cost a lot, and power costs won’t likely be coming down as general demand keeps growing.
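To put rough numbers on that comparison: the sketch below uses the standard ~20 watt estimate for a human brain, and an assumed 10 kilowatt draw for a single multi-GPU inference server. The server figure is an illustrative assumption, not a measurement of any particular data center.

```python
# Back-of-the-envelope power comparison. BRAIN_WATTS is the standard
# estimate for a human brain; SERVER_WATTS is an assumed figure for one
# multi-GPU inference server, chosen purely for illustration.

BRAIN_WATTS = 20
SERVER_WATTS = 10_000

HOURS_PER_WORKDAY = 8
brain_kwh = BRAIN_WATTS * HOURS_PER_WORKDAY / 1000
server_kwh = SERVER_WATTS * HOURS_PER_WORKDAY / 1000

print(f"Human brain: {brain_kwh:.2f} kWh per workday")
print(f"One server:  {server_kwh:.2f} kWh per workday")
print(f"The server draws {SERVER_WATTS // BRAIN_WATTS}x the power")
```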
Secondly, AI is voracious not only in power usage but also in the information it consumes to keep models growing. All the really high-quality information was consumed by AI companies before the first models were released; the first day AI came out was the last good day it ever had for finding reliable, organic, and creative training data. Now expensive “artificial” data is often produced by companies to keep their AIs training on more data, feeding them variations of information already in the models and hoping it doesn’t get too weird. A friend of mine referred to this as “The Habsburging of Information,” and now I cannot picture ChatGPT without an uncomfortably large chin.
This is all terrible, but it also points to human labor being underpriced in most of the world, which is another barrier to expensive tech ever becoming profitable. Even now, many people around the globe are looking for work. In the developing world, true expertise can come surprisingly cheap. Very likely cheaper than both running an LLM in a datacenter somewhere around Atlanta and paying off the overleveraged billionaires looking to make insane piles of money by selling everyone a monthly AI account.
A lot of people may already work for very little, but Artificial Intelligence never does.
Even though this AI effort is unapologetically pointed at destroying human labor, without remuneration or alternatives, it struggles. When the tasks get complex, people often have to prompt it again and again to get something they can use. This is the problem my programmer friend found: AI can be kind of terrible at making good things, especially the first time around. It still just statistically picks the next word, the next symbol, though with more parameters than it used to. This fine-tunes the answer, but it still can’t tell what is true or useful; it still isn’t thinking like a living being. AI still doesn’t know in a human way. It can take a lot of mucking with AI to get to something useful. All of that prompting, trying, scraping, and way-finding through AI output is energy intensive and dangerous to the environment.
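“Statistically picks the next word” is not a metaphor. Stripped of the billions of parameters, the core move is weighted random sampling over a vocabulary, something like this toy sketch (the words and probabilities below are made up for illustration):

```python
# Toy illustration of next-token sampling: pick one word at random,
# weighted by probability. Real models compute these probabilities with
# billions of parameters over huge vocabularies; the numbers here are
# invented for illustration.
import random

next_word_probs = {
    "the": 0.40,
    "a": 0.25,
    "goldfish": 0.20,
    "thermodynamics": 0.15,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """There is no notion of 'true' or 'useful' here, only 'likely'."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(sample_next_word(next_word_probs))
```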
Data centers are being located near existing communities, disrupting normal life and causing dangerous noise pollution.
AI can’t tell good software from bad. It has no intuition for good or bad. There is still nothing that is shaped like animal cognition going on in the giant server farm warehouses that are popping up across America, often to the detriment of their fleshy human neighbors.
Much like the build out of the internet itself in the late 90s, before the inevitable financial collapse, this AI revolution is being fueled by financial bubble money. But indirectly that means it’s being funded by the inequality of the American economy, the savings of tech companies, a few thoughts and prayers, and most quietly, debt.
It is the tech roulette wheel coming around again, and again drawing in the people who want to be at the edge of the next thing, get rich, or both. The hopes and fears and theories we have about an AI-altered world don’t settle the question, because it’s unanswerable until you build the things and see, which we are in the middle of doing. What the AI companies say is that the gains will be so tremendous that they’re impossible to count or predict. But tech companies sure like promising shit. In my life, delivery has never lived up to the hype.
The only question is: do they fall a little bit short of their goals, or a lot?
Goldman Sachs’ Jim Covello asked this question two years ago, and it remains unanswered. Beyond that, the financial requirements have shot up since 2024; it’s no longer a solution looking for a mere trillion-dollar problem. AI is looking for more return than that, growing out of control like a financial cancer. A lot more money has been thrown at these AI companies since way back in ’24, and even more is being promised, from private-sector companies and nation states. Many people have bitterly guessed that the trillion-dollar problem is wages for humans, but that doesn’t even work. Nobody is going to be buying AI products in a protracted recession, or a true economic depression. Nothing we have fits properly in that trillion-dollar-problem-shaped hole.
The normal course of Silicon Valley-style venture capital looks for 100x returns on its initial funding, but will settle for 10x. Anything less than that, and they will often kill the company for a tax write-off. The combined valuation of Microsoft and Nvidia, two leading companies in the AI space, is about eight trillion dollars. (There are of course thousands more AI companies, from sole proprietorships to frickin’ Oracle. But I had to pick, so I picked two.)
The real money is always looking for the exits.
Hypothetically, if Microsoft and Nvidia had to meet the traditional expectations of VC, these two firms alone would need to generate between 80 and 800 trillion dollars in AI-related revenue. Currently, the world’s total capital, including cash, investments, and assets, is estimated at about $174 trillion. So at the low end, the AI industry is humbly asking all of humanity to hand over roughly half of the world’s wealth; at the high end, several times the whole world’s wealth. Honest people can quibble about whether it’s a half or a quarter of all money, but it doesn’t matter. This whole thing is ridiculous. Humanity just can’t foot a ridiculous bill. Therefore, humanity won’t foot the bill.
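The arithmetic behind those figures, spelled out. The inputs are the estimates quoted above, not audited numbers:

```python
# The article's arithmetic, spelled out. The inputs are the estimates
# quoted in the text, not audited numbers.

COMBINED_VALUATION = 8e12   # Microsoft + Nvidia, ~$8 trillion
WORLD_WEALTH = 174e12       # estimated total global capital, ~$174 trillion

for multiple in (10, 100):  # the VC floor and the VC dream
    required = COMBINED_VALUATION * multiple
    share = required / WORLD_WEALTH
    print(f"{multiple}x return -> ${required / 1e12:.0f} trillion, "
          f"or {share:.0%} of all the wealth on Earth")
```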
 
The AI money math and resource use have no room for humans, even if we gave up food and shelter to pay for our AI subscriptions. Everything from small businesses to major corporations would collapse, along with human society as a whole, in this ridiculous scenario, a scenario that AI’s proponents insist is inevitable.
Plus, in theory, the AI revolution comes with the promise of lowering the productive capacity and purchasing power of human workers, by replacing them with AI agents that exist and work only in data centers. You can cut that $174 trillion down by a lot if most humans no longer have the money to pay for most living expenses, after presumably giving AI companies around half of humanity’s possible capital. The more you look at it, the more nonsensical an AI economy is.
These guys are taking you for a ride. They’re so strung out they’ll do anything to keep the money coming.
The figures might be hyperbolic to the point of impossible, but they’re what AI boosters breathlessly claim. The AI sector likes to say this is what the next revolution in humanity requires. But everyone isn’t going to start paying AI companies most of their wealth. The messaging and the math lay bare how insane the valuation of AI is. What the leaders of these companies are asking of our economies, both explicitly on the books and implicitly in their imagined futures, makes no economic sense. It would be laughable, if these people weren’t driving our economy.
Money is a thing we made up to coordinate with people we don’t know, and the AI sector’s logic breaks money’s role in society. If the AI companies achieved their supposed goals, they’d accidentally destroy human wealth, and no one would be able to buy anything, including their services. They can’t all be stupid, ergo some must be lying. Needless to say, this AI economy won’t happen, but whatever economic disaster that takes its place will be harmful enough.
Advocates of AI can be deft practitioners of circular logic. Ask them how the economics are supposed to work, and they will tell you the AIs will answer that when they get advanced enough. Same for the cost of datacenters, the climate impacts, education, and medicine. More and more stochastic parrots will somehow solve all of it, and we will all live in a heavenly state, techno-raptured by the likes of Sam Altman and Elon Musk. With every round of doubt about AI, the promises get bigger and more insane. The AI companies act like addicts — strung out, insane, looking for ever bigger fixes from the stock market, but one day they will get cut off.
What cannot sustain, will not last. How catastrophic that ending is, humanity will choose. But for now, these new AI robber barons are still marching us towards the cliff’s edge.
“Move fast and break things,”
This is your periodic reminder that this is precisely how industrial polluters roll, and always have. Mark Zuckerberg saying it in a catchy way that seems profound simply does not make it a new thing.
Amen.
We will build the torment nexus. You will live there. Thanks for your attention to this matter.
I cling to the belief that we can, to some degree, nope out of the Torment Nexus. Some of us have to let it into the community, but we can choose not to let it into our minds. We can wall off our humanness and concentrate on the bonds between the fleshy humans.
When the dot-com bubble collapsed, there wasn’t a ton of physical infrastructure, relatively speaking, since what collapsed was code, websites, web services, etc. What happens when the AI bubble collapses and data centers are sold off in bankruptcy proceedings? And what happens with all of this if/when the IP lawsuits start coming home to roost? I’m not sure intentional copyright infringement gets discharged in bankruptcy, so the judgment debts might live on with the assets.
In fact, the dot-com bubble left behind huge amounts of dark fiber, which was later entirely usable and was used. An abandoned data center won’t leave much of anything behind except servers that are rapidly becoming obsolete, that will deteriorate quickly without continuous climate control, and whose lifespan is perhaps five years even if they keep running.
Thanks for the informative post.
My company uses some primitive AI for report-writing purposes. My experience shows it to be barely a wash with respect to time/effort, since I have to review all of the AI output before sending it to my client.
Whenever I even hear the term “AI,” my blood pressure goes up. It seems like a technology in search of a problem. While I am a little concerned about my retirement investments when the crash comes, I am pretty well diversified. The massive infrastructure and electrical power needs of the data centers, at a time when electricity costs are going through the roof this winter, make me concerned for the whole power grid of this country.
In a similar vein, I once visited a crypto farming facility and was appalled at the amount of energy being used to “find” cryptocurrency. At least AI has “some” function.
There are a bunch of good uses for targeted AI, as I mentioned earlier in this series, like weather forecasting, drug discovery, maintaining manufacturing plants, etc. But there’s not really… regular person-facing AI that is good and healthy.
One of the major problems with AI in the workplace is the sunk cost fallacy; management pays for it, so by god you’re going to use it. Because if you didn’t, management would be wrong, and that would be intolerable.
I wonder how often in modern work employees do something and claim it was LLM-generated just to get out of the conversation and get work done.
Every day I read something that makes me hate AI even more. E.g., NY Times had a story about AI companions for elderly people living alone. I don’t care how realistic and human-like it is, IT IS NOT REAL. (Reminder: what’s the “A” stand for?)
The techbros want us to become addicted to their tools. They’re no better than drug dealers. They sit back and get rich while we pay the price in pollution and high energy bills.
I’d also like to mention my hatred for the euphemism “prediction markets.”
1000%, and amongst my family and friends I am mostly alone. Glad to find allies here.
Sherry Turkle has written books about the artificiality of computer companions and she is a very smart and learned scholar. I’d take a look at her Artificial Intimacy and Alone Together.
I’ve thought for decades that addiction is the business model of late-stage capitalism. Think of Purdue Pharma as one of the prime examples with Zuckerberg and Facebook also explicitly following that idea. Cory Doctorow’s Enshittification is a more traditional look at this as an extension of the old school monopolization process.
It’s also not “intelligent.” It can fake human-ness because the LLMs have gotten better at syntax and grammar, but it still only understands words, not meanings as such.
AI’s constant need to empty the treasury will indeed exacerbate our ending; what the US calls the ‘economy’ (wherein the more money that is made, the more the real economy is damaged) will stagger about like a drunk for a while before reeling headfirst into the gutter.
Add to that: China’s open source approach is (no longer “will be”) better and faster, and ironically more regulated (so it doesn’t take down their economy), nor is it likely to randomly terminate any of the blood bags around it.
Tulips all the way down, the whole lot.
I don’t love how often these days I have to say “China was right about this,” given what I think of their human rights record. Trash, I think it’s trash. But the leadership seems to be at least sane, if evil.
Agreed.
Perhaps you’ve noticed we’re on our way to matching them in our disregard for our citizens?
Sigh. Yeah. And *all* people of color are our Uyghurs.
Cory Doctorow has it right: AI can’t do your job, but an AI salesman can convince your boss to fire you and replace you with AI that can’t do your job.
[Welcome back to emptywheel. THIRD REQUEST: Please use the SAME USERNAME and email address each time you comment so that community members get to know you. You attempted to publish this comment as “Amy” triggering auto-moderation; it has been edited to reflect your established username. Please check your browser’s cache and autofill; future comments may not publish if username does not match. **WARNING: After four requests without compliance, you will be banned from commenting.** /~Rayne]
In terms of using AI to achieve something of value, what seems to be missing is the deep expertise of the person making these requests. If the human at the top is like an experienced chief engineer managing a team of people producing blueprints upon request, then you have someone who can filter and orchestrate this. You cannot just have a bunch of mindless yahoos going around without someone who knows the goal and can execute quality control. Frankly, I sometimes have the impression that lots of AI will become like a giant glorified spell check on a word processor.
I have been using LLMs to run thought experiments on sustainability possibilities: https://solarray.blogspot.com/2026/01/backcasting-climate-success-with-large.html
https://solarray.blogspot.com/2025/11/regenerative-energy-transportation.html
My method is to provide the LLM with the information I’ve already compiled on the subjects and then prompt it to design how this information can be used to provide solutions. I’ve found them to be very good at organizing the information but almost completely without any imagination or ability to make connections between ideas. They have been useful as a kind of test of the validity of the ideas I’m pursuing and a way to outline the steps necessary to go forward. Summarization is where they are most useful.
I’ve also used an LLM to program a script that lets me do simple HTML formatting for my Blogspot blogs, as Blogspot no longer seems to do the automatic formatting it used to. All I want it to do is provide line breaks and make live links, which it does very well. Not exactly a difficult programming task, just a time-consuming one.
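For the curious, a formatter like the one described might look something like this. The commenter’s actual script isn’t shown anywhere, so this is only a plausible sketch of the same idea: escape the text, linkify bare URLs, and turn newlines into <br> tags.

```python
# A plausible sketch of the formatter described above, not the
# commenter's actual script: escape HTML special characters, make bare
# URLs into live links, and convert newlines to <br> tags.
import html
import re

URL_PATTERN = re.compile(r'(https?://[^\s<>"]+)')

def to_blog_html(text: str) -> str:
    escaped = html.escape(text)                                # escape &, <, >, quotes
    linked = URL_PATTERN.sub(r'<a href="\1">\1</a>', escaped)  # linkify URLs
    return linked.replace("\n", "<br>\n")                      # line breaks

print(to_blog_html("Two posts:\nhttps://solarray.blogspot.com\nEnjoy."))
```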
What I’ve also found is that some people will refuse to discuss the solutions BECAUSE they have been presented through an LLM. There will be some who not only will not use LLMs (what most call “AI”) but also will not recognize any useful contributions that come from their use.
AI would seem suited to pattern recognition and extrapolation, skills and tasks that humans can perform well but which require time and concentration. Not just artists but engineers and programmers employ variations of these skills routinely; I am reminded of my father, who trained as a mechanical engineer, became a world-renowned designer of historical aircraft and car models (for Revell and Monogram), and spoke often of the importance of pattern recognition.
He was especially known for the fidelity of his WWII fighter plane models. I’m sure an AI version could be achieved, but even should it match the attention to every rivet and decal, I wonder if it could ever replicate the soul of the machine–that essence my dad seemed uniquely able to capture. I grew up understanding that engineering is (or can be) an art, but I question whether that would ever be the case with AI.
As a trained mechanical engineer myself, I wonder if AI will really prove to be that good at engineering. The hallmark of good engineering is elegance. That is, as simple as possible to accomplish the desired result. Exactly as many parts as required, but no more, with those parts being optimized.
It seems to me that AI would be likely to seize on any old thing that works without going through the optimization analysis that good engineers use to refine their designs, unless endlessly prompted. And that prompting might result in an even worse design, lacking coherence like the “vibe coding” result described in the article.
LLMs are useful enough, but not worth the billions being thrown at them; as you discovered, they are limited. And will always be.
The type of pattern matching at which they excel is incapable of leading to AGI – LLMs don’t reason.
But an LLM tool like Claude Code is perfect for what you’re attempting.
Several months ago, I started a vibe-coding project as a hobby. I’ve been a software engineer for 30 years, but my expertise is in algorithms and systems, and I have zero experience with front end, or with what’s now considered back end. So when I had the idea to create a platform that provides both AI feedback and analysis and human feedback for musicians, it seemed like the perfect project for vibe coding.
I must admit I was skeptical at first. I was sure things would start falling apart as soon as the project grew. So far it hasn’t; it has all come together beautifully. TBH, I did use my background as a software engineer to make it work. Even though I rarely wrote any code myself, I understand the architecture and the tools, and made sure to instruct it again and again to use proper software engineering methodologies, refactoring, security audits, etc.
It was far from a push of a button. It wasn’t one prompt. It was hundreds or even thousands of prompts I wrote, plus lots of work tracking down and fixing things it did wrong. But still, I would never have been able to do it without vibe coding.