I jumped on the AI bandwagon early on. It was a technology that intrigued me and frightened me at the same time. I played around with it and continued to explore its uses in my life primarily because it was novel and, more importantly, free.

This was not a scientific, measured exploration of large language models. It was, ‘let’s see what ChatGPT comes up with when I ask it . . .’  Since then, I have found AI genuinely useful in multiple ways. It acts as a copy editor for much of my writing. I have used it to design courses and come up with flyers promoting them. I used it today for background research for this post.

Adopting a New Technology

Change is hard. New things are scary. AI is one of the most transformative technologies to come along in a century. Think back to before our homes had electricity. Light came from kerosene or gas lamps, candles, or fireplaces. When electrification first came about, people were mistrustful, not understanding how it worked or whether it would be reliable.

I doubt many of us could explain how electricity works; it just does. We depend on it to turn on the lights, power up the laptop, and monitor our energy usage from our smartphones. We complain when we are without it for a few hours. We have become accustomed to it being there without question. That is the template for what AI is going to do for us.

Here’s the difference. Electrification of homes in the United States took decades. AI is now everywhere your smartphone is.

The Downside of New Technology

I like AI. So, it has been more than a bit sobering to learn about how it has already been corrupted by pornographers and hucksters. Consumers need to be aware of what they are buying, but it is incredibly challenging to do this in the increasingly sophisticated world of AI.

To be useful, AI actually requires the user to synthesize the information delivered and make meaning of it. Instead, what appears to be happening is that people are accepting wholesale, without discernment, whatever AI offers them as truthful and factual. It is not.

The Rush to Adopt AI. Is It FOMO?

AI is a seductive product, attracting incredible amounts of venture capital and promising to transform work and solve the most intractable of problems at the touch of a button. In our rush to adopt this miracle tool, however, we are at risk of overlooking the fundamentals.

There is a very real risk that the investment start-up model that has made gazillionaires out of Sam Altman and Greg Brockman, and has added to the fortunes of Musk, Zuckerberg, and their backers at Microsoft, et al., is headed toward a 21st-century stock bubble reminiscent of the over-valuation that sent the country into recession when the housing bubble burst.

Just What Is AI?

This is like asking, “What is electricity?” It is easier to understand how it will make your life better than to explain the mechanics, the coding, and the data farms required to capture, organize, and synthesize a large language model. A great primer can be found here.

It is amazing how rapidly the lay public and industry have adopted AI and are implementing it in every corner of modern life. Because it is easy to use, provides quick results, and appears to make life easier, the promise of AI to transform our lives seems to be within reach.

Which to me is always a “yellow light”.

Is It Too Good to be True?

A recent Substack by one of my favorite writers, Barry Gander, raised my hackles while opening my eyes to the potential vulnerability that AI poses. On a basic business level, AI is giving everything away for free. The bulk of users are on free plans, which creates two problems: a consumer base that now expects to get AI for free, and the challenge of raising capital through other means to cover the incredibly costly operating expenses that produce this “product”.

Currently, the money comes from investments: venture capital, corporate strategic investment, private equity, and government and public funding. This, however, is not sustainable.

AI must turn a profit from selling its product in order to avoid having the rug pulled out from under it. The last time something like this happened was the dot-com and sub-prime mortgage lending bubbles. What got us out of that mess was a government bail-out.

Focusing on the Healthcare Sector

Unless you have a lot of money in the stock market, you may believe that your risk exposure is pretty low, and truth be told, it probably is. But, should AI lose its funding and go down in flames, there is a risk that will impact you: loss of your healthcare services.

Healthcare is currently one of the few positive economic indicators in terms of growth. It is a jobs creator and is making insurance companies lots and lots of money. It is also one industry that has jumped on the AI bandwagon wholeheartedly.

AI in Healthcare

You are already aware of the many ways AI is partnering in providing your care. From chatbots that help you identify symptoms to online pre-registration for procedures, AI is everywhere in healthcare.

Here are some startling statistics:

  • Mayo Clinic is investing more than $1 billion in AI across 200+ projects spanning diagnostics, patient care, and administrative efficiency
  • 90% of health systems now use AI for imaging and radiology, making it the most widely deployed AI application in clinical medicine
  • 67% of U.S. hospitals use AI for early sepsis detection
  • 60% rely on ambient AI note-taking tools for clinical documentation

Vulnerability

I read posts every day about how companies are using AI to develop new drugs, and about how the scourge of charting and billing is being made simpler with e-scribes and automated spot checks. This helps the business side of healthcare increase accuracy and efficiency, which benefits not just the for-profit side of things but actually results in better patient care.

So, imagine you have invested a chunk of money in setting up one of these high-falutin’ systems. It is humming along, you are happy with it, and then you get a notice that its services are down. Then you get a follow-up saying that the company providing the service has gone out of business.

There is a very high risk of a stock market collapse. AI-related stocks have accounted for approximately 75% of S&P 500 returns, 80% of earnings growth, and 90% of capital spending growth since ChatGPT launched in November 2022. AI-related capital expenditures surpassed the US consumer as the primary driver of US economic growth in the first half of 2025, contributing approximately 1.1% of GDP growth. All of this represents investment, not revenue. These stocks don’t represent an actual product, just the promise of what AI will be able to do for you.

Many of the largest AI deals involve companies investing in each other’s services, creating the appearance of revenue without corresponding external demand — a pattern financial experts describe as circular financing. In other words, I put money into your cookie jar so it looks full, then you invest that same money back into my cookie jar, so it looks like my cookie jar is really growing, and we keep swapping the same cookies back and forth, without really adding any more cookies!

Should You Be Worried?  In a Word, Yes!

Because discussion around healthcare and AI tends to be in terms of financials, the risk analysis is also couched in those terms. For example, one analysis broke down the risk this way:  

62% of Americans reported owning stocks in 2025. Many of these stock holdings are in pension funds. If there were a downturn in the valuation of these stocks, it would directly reduce household wealth and retirement savings.

Governments that have made AI infrastructure central to their economic strategy — including the US, UK, UAE, and Saudi Arabia — face fiscal and reputational risk if AI doesn’t deliver what it promises. Government money comes from taxpayers. Thus, everyone who pays taxes has a stake in whether AI can actually deliver on its promises of future savings, or whether it will require more money out of your pocket to rescue other investors.

Banks and private credit funds that have extended debt to AI companies face losses if the AI behemoths are unable to pay back their loans. This is a risk that any financial corporation knowingly enters into, and why such loans used to have to be guaranteed by law. Those laws have been systematically loosened or removed over the past decade, allowing for more speculation.

How Does This Impact Me?

What is missing, of course, is just how this would impact end users: in this case, healthcare systems, providers, and patients, who are at the highest risk of catastrophic problems. The answer is unclear at best and frightening at worst. It would be like a nationwide energy blackout that was not temporary but lasted for years.

From the patient standpoint, your healthcare records would be inaccessible. What medications you take, what procedures you’ve had, and who you have seen for what would all need to be recreated from paper. You couldn’t rely on someone reminding you when your prescriptions needed to be refilled or when your next appointment was.

From the provider standpoint, all the information on your patients, including dates of birth, Social Security numbers, dates of service, referrals, and charting notes, would need to be recreated from old records. You couldn’t get pre-authorization for referrals or meds, or follow up on credentialing for your office.

From the standpoint of the healthcare system, insurance coverage, and payment, the everyday functions of running the system, including ordering, running machines, staffing, scheduling, and billing, would all need to be collected, confirmed, and somehow coordinated with the literally hundreds of payers that keep the doors open for patient care 24/7.

All these tasks used to be done by hand, so it’s not as if we don’t know how. It’s just that we haven’t done things that way in a long time!

What to Do? What to Do?

If I ran the world, I would recommend we slow down, maybe even stop the bandwagon, and remember that the sparkle and shine of new technology may not always turn out to be what was promised. We need to keep parallel systems going for a bit. We need more pilot programs to test out real-world scenarios. And we need to find a reliable source of funding that is sustainable over time to secure a future where AI can deliver its promises in the healthcare field.

Increasingly, this looks to me like public/private partnerships: models where a portion of our taxes is specifically set aside to endow this kind of technology so that all of us can benefit from its implementation.

We have never allowed our electrical grid to operate without a public safety net. There is no good reason to build our healthcare system on one that has none.

The AI bandwagon is moving fast, and the people driving it have already jumped off into their private jets. The rest of us are still aboard — and it would be worthwhile knowing where the brakes are.

One response to “The AI Bandwagon and Why the Wheels Are Coming Off”

  1. Timothy Louis Gieseke MD

    Mary – though I’m retiring soon from health care, I have already been using AI to help with diagnosis and more targeted patient care, both of which have real patient benefits. For me, it’s also a tremendous time saver. If I were to continue practicing medicine, I would be an early adopter of ambient AI, which would improve the quality of my documentation of patient care in a fraction of the time, and would also allow me to be more present with patients during encounters. I too am using the free versions of Google Gemini and OpenEvidence (based on quality medical journals, but only for licensed professionals). Using these tools requires expertise to know when AI is hallucinating, so there are risks if someone lacks expertise in the area of exploration.
    Having experienced the dot-com bust and the housing mortgage bust, I suspect you are right that we should expect a correction. I think we are already seeing that, as companies expecting increased efficiency are laying off workers. I also think the huge cost of the electricity required for AI will need to be funded. These cracks are already appearing in stock valuations. Hopefully, the transition to reliable funding will occur before a major industry-wide collapse.