AI should in fact replace nurses and doctors tripping over themselves in "heroic" panic
And we, frontline healthcare workers, should stop glorifying system failure as human heroism, and embrace AI to help us gain back the space to practice calm and thoughtful humanity
There is a viral video circulating on healthcare worker and healthtech socials…
Having been the panicked on-call night hospitalist running to too many of those calls in too many understaffed and dysfunctional hospitals early in my career, I suppose I have something akin to a trauma response when I see these clips. I don’t see anything being displayed here as good or human. If humanity never has to have scenes like this again because of AI, that is a very good thing.
It’s System Failure, Not Heroism
I suppose most people look at this video and see “heroes”. I look at this and see “health system failure”.
I suppose most people see “this is the humanity we can’t replace”. I look at this and see “this should never happen, and we, as humanity, have failed other humans here”.
I suppose most people see “saving lives”. I look at this and see “it is equally likely they are futilely torturing a dying person in their last moments because no one had the time or space to have a real conversation with the patient and family about their knowable prognosis and care options before this happened”.
I suppose most people see “this had to be done!”. I look at this and see “this could have been prevented”.
I suppose most people see “they heroically responded to an unpredictable emergency”. I look at this and see “we could have and should have known this was going to happen and done something different about it.”
I suppose most people see “this is why people go into medicine”. I see “this is what nearly drove me out of medicine”.
AI should replace this
The most useful aspects of AI in healthcare are prediction, risk stratification, resource allocation, and relief from administrative burden.
Having spent too many nights running these sorts of panicked codes, I can guarantee that most of the human beings on the other end of that crash cart’s drugs were:
under-triaged to the floor rather than the ICU (no one acts like this, or should act like this, in the unit),
clearly known about by staff, and what you’re seeing is the “oh shit, I should have been in that room, not typing bullshit into an EMR behind the desk I’m about to climb over” moment,
or people who would never have wanted those crash cart drugs pumped into them in the first place, had they had a real conversation about their end-of-life goals of care with a doctor who had the time and real humanity to talk with them about it.
All three of these problems:
risk stratification,
staffing allocation and placement, and
freeing up clinician time from administrative tasks to have real human conversations with dying people,
are exactly where AI can and should replace these panicked moments of health system failure, masked as “heroism” and “humanity”. We just need to use AI correctly and train it specifically to prevent this sort of panicked response in the first place.
The better video of “AI can’t replace this” is a montage: an ICU nurse calmly making rounds, double-checking that the monitoring machines work with her hands on a human being; a palliative care physician sitting in a family meeting with a conscious patient as he tells his loved ones what he actually wants out of the end of his life; a crash cart calmly and appropriately placed in the room (not down the hallway) on admission; and, of course, a house-call primary care doctor doing the simple things, like helping a CHF patient check their daily weights, so they never wind up on the receiving end of any of this.

