Claudette Hobbart

What happens after the AI bubble bursts?

Maybe that’s when the best work will begin

Originally published on Medium on Nov 16, 2025

Image by Eduardo Huelin

If you’re like me, you’ve read countless stories predicting that the AI bubble is about to burst. The reasons make sense. AI companies are taking on too much debt. Their funding models are overly insular. They’ve experienced too much brain drain. The electric grid can’t support all the new data centers. And of course, AI projects just don’t work.


Perhaps the most worrying part, however, is that many are predicting that if AI goes down, it will take the global economy with it.


Oh boy.

But would it really be that bad if AI failed?

If you’re like me, you’ve thought at least once that it might not be the worst thing if AI just… stopped. I mean, we wouldn’t have to worry about the terrifying new proliferation of bias, privacy breaches, and cyberattacks enabled by AI.


We could breathe easy knowing that the contracts with the Department of Defense wouldn’t lead to some sort of AI-driven apocalypse. And we wouldn’t be flooded by AI-generated porn. (You can call it “erotica” if you want, Mr. Altman. We all know what you mean.)


Sure, the economy would be in shambles, but at least we’d be spared all that. Right?


Sigh. Probably not. If you believe the Gartner AI hype cycle, chances are high that we’ll see a major dip (read: economic meltdown) as the collective buzz of AI wonder wears off. Then, we’ll pick ourselves up, dust ourselves off, and start thinking more realistically about what we should be using this AI thing for.

OK, what will that look like?

Naturally, I hope that once the hype dies down, we collectively take stock and decide it’s time to dedicate our AI resources to the fight against cancer, climate change, and all of the world’s hardest problems. I also want a baby unicorn for Christmas. 


Realistically though, here’s what I think will happen:


  • AI costs will go up. I think we all know that the AI industry has taken a page out of the drug dealer’s playbook until now, right? Their “first one’s free” pricing model has us all hooked. With or without an economic crash, prices will have to go up soon (possibly by a lot) to cover the costs of building all the data centers, buying all the hardware, all the energy and water usage, etc. And those increased costs will probably apply to everyone, including the AI companies themselves, who will have to pay higher and higher prices for the minerals, energy, and water that power the machine.


  • Frivolous AI products will decrease. As AI costs go up, we’ll probably see fewer cat translators and genius toothbrushes. It doesn’t seem like people will be willing to shell out big money for these types of things. On the other hand, I could be wrong on this one. If people are willing to pay $185 for a paperclip, maybe they’ll be willing to pay more for crazy AI products.


  • AI results will definitely include ads. This is a foregone conclusion, regardless of whether we have an economic crash: to keep prices relatively low, LLM providers will have to include ads in at least some plans. The question is, will we even know the ads are there? Considering AI’s ability to create personalized content on the fly, we may not recognize an ad when we see it.


  • Companies will think harder about where they use AI. Your dentist’s office might take down that chatbot from their website. Your CEO will probably stop issuing blanket “you must use AI” mandates. Projects won’t be funded just because they’re cool.* But that won’t keep companies from using AI. They’ll just be more judicious. For instance, they’ll try to use small language models (SLMs) instead of LLMs where possible to save money.

What I hope will happen

We’ve already established that my dearest hope of solving all the world’s problems with AI probably won’t happen. But perhaps we can achieve something more modest.


If you’re like me, you probably dealt with a lot of magical thinking when the AI hype cycle began. People who really should have known better insisted that AI could do impossible things and ignored any arguments to the contrary.


As time has gone on, the truth has become clear. AI is a tool like anything else. It does a good job of extrapolating from existing content (particularly if that content is well-written and well-structured), but it cannot spontaneously create new content out of thin air. It can generate source code, but it cannot effectively architect a software ecosystem. It can tell you what you want to hear, but it cannot do the hard work of negotiating real-time disagreements, misunderstandings, and conflicting priorities, as we have to do every day with our coworkers.


So, my hope is that when the AI bubble bursts, we will gain a newfound respect for the human work behind the AI. I’m not asking for miracles — just a little more recognition of what makes these systems possible.


(But we’ll definitely keep using it for porn. Because, of course we will.)

. . .

*If you haven’t seen them, recent studies from MIT and Stanford both indicate that funding for AI projects over the past few years has been based on a significant dose of wishful thinking. Executive teams are funding what they want to work, regardless of feasibility. Conversely, they’ve been neglecting potentially high-ROI projects that lack a cool factor. A major market crash could turn around that type of thinking pretty quickly.

Copyright © 2025 Claudette Hobbart - All Rights Reserved.
