A recent article in the British Medical Journal made the headline-grabbing claim that medical errors were now “the third leading cause of death in the US,” behind only cancer and heart disease. Medical errors, in their estimate, caused more deaths each year than motor vehicles, firearms, and suicides combined.


The backlash from the medical profession has already started. STAT News posted an equally provocative article, written by an assistant professor of medicine, “Don’t believe what you read on new report of medical error deaths.” MedPageToday grumbled about the “superficial coverage” and made several complaints. Skeptical Scalpel said the article “shines no new light, only heat, on the subject.”


So who’s right?


Let’s start with two basic points. First, the primary author of the BMJ article, Martin Makary, isn’t a quack, but rather a surgeon at Johns Hopkins who has regularly written about transparency in health care. Second, the BMJ article wasn’t an original study, but rather a two-page analysis of existing literature about medical errors. There wasn’t much “new” in the article, except that they took the estimated rate of death from medical error in the 1999 Institute of Medicine report, To Err Is Human: Building a Safer Health System, and extrapolated it to the present using 2013 US hospital admission figures.


As they said:


We calculated a mean rate of death from medical error of 251,454 a year using the studies reported since the 1999 IOM report and extrapolating to the total number of US hospital admissions in 2013. We believe this understates the true incidence of death due to medical error because the studies cited rely on errors extractable in documented health records and include only inpatient deaths. Although the assumptions made in extrapolating study data to the broader US population may limit the accuracy of our figure, the absence of national data highlights the need for systematic measurement of the problem.


That’s where they get the “third leading cause of death” number from. It’s simple math using the Institute of Medicine’s own prior estimates: because the rate of medical errors doesn’t seem to have gone down at all (see the 2010 study cited below), more people in hospitals now means more preventable errors.


The whole point of the article was to point out that we don’t really know how big the problem of medical errors is, and few people in the medical community are even trying to measure it. As the BMJ article reports, since the Institute of Medicine report back in 1999, there have only been four major studies on the frequency of medical negligence and its effects:


  • The 2004 HealthGrades study of Medicare patients, Patient Safety in American Hospitals, which found “one in every four Medicare patients who were hospitalized from 2000 through 2002, and experienced a patient safety incident, died,” and that “Of the total of 323,993 deaths among patients who experienced one or more [patient safety incidents] from 2000 through 2002, 263,864, or 81%, of these deaths were potentially attributable to the patient safety incident(s).”
  • The 2010 Office of Inspector General report, Adverse events in hospitals: national incidence among Medicare beneficiaries, which found “13.5 percent of hospitalized Medicare beneficiaries experienced adverse events during their hospital stays,” of which 44% were preventable.
  • A 2010 study in the New England Journal of Medicine, Temporal Trends in Rates of Patient Harm Resulting from Medical Care, which found a quarter of hospital admissions resulted in harm to the patient, that 5.3% of those harms were permanent or fatal, and that “the rate of harm did not appear to decrease significantly during a 6-year period ending in December 2007, despite substantial national attention and allocation of resources to improve the safety of care.”
  • A 2011 study in Health Affairs, ‘Global Trigger Tool’ Shows That Adverse Events In Hospitals May Be Ten Times Greater Than Previously Measured, which found “adverse events occurred in one-third of hospital admissions,” and that 1.13% of admissions included a preventable lethal adverse event.


One interesting point: when Makary extrapolated from the 2011 Health Affairs report (using the same simple math as he did with the 1999 Institute of Medicine report), he found there would have been approximately 400,201 deaths from medical errors in 2013. If this sounds familiar, it’s because it’s similar to the method used in a 2013 article in the Journal of Patient Safety, which concluded, “the true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year.” That number, in turn, was widely publicized by, among others, ProPublica.
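Makary’s “simple math” is easy to check. A minimal sketch of the extrapolation, assuming a 2013 US hospital admissions figure of 35,416,020 and the per-admission death rates implied by each source (both the admissions count and the pooled rates are my back-calculated assumptions, not figures stated in this post — they are chosen because they reproduce both published totals):

```python
# Sketch of the BMJ article's extrapolation: deaths = death rate x admissions.
# ADMISSIONS_2013 and the two rates below are assumptions back-calculated
# from the published totals, not figures quoted in the post.

ADMISSIONS_2013 = 35_416_020  # assumed 2013 US hospital admissions

def extrapolate(death_rate: float) -> int:
    """Project annual deaths from medical error at a given per-admission rate."""
    return round(death_rate * ADMISSIONS_2013)

# Pooled rate from studies since the 1999 IOM report (~0.71% of admissions)
print(extrapolate(0.0071))  # -> 251454, the "third leading cause" figure

# 2011 Health Affairs: 1.13% of admissions included a preventable
# lethal adverse event
print(extrapolate(0.0113))  # -> 400201, echoing the Journal of Patient
                            #    Safety's "more than 400,000 per year"
```

The point of the exercise isn’t precision; it’s that a single per-admission rate, multiplied by one year’s admissions, is the entire method, so the result is only as good as the underlying rate estimates.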


So, what does all this mean?


I think Skeptical Scalpel hit upon the most important point, although they didn’t realize it:


I don’t think there is a doctor in the United States who would be stupid enough to write “medical error” on a death certificate for any patient.

About 18 years ago, The Institute of Medicine “called for a culture of confession” in its first report on medical error. So far, that culture has not materialized.


Indeed. Doctors are more likely to hide errors than to admit them. Yet all of the studies above are based on information reported by the doctors and hospitals. If anything, the studies underestimate the prevalence of medical errors. It doesn’t take a stretch of the imagination to assume that doctors and hospitals are more likely to underreport indications of an error than to overreport them.


MedPageToday’s primary criticism is:

But the paper did not attempt to assess trends in the rate of medical error-related deaths, nor did it collect any new data. So it’s dubious to suggest — without explanation — that errors are “now” coming into 3rd place.


It’s not dubious to suggest it at all. There’s no reason to believe that medical errors are declining. Indeed, the 2010 study in the New England Journal of Medicine specifically found that medical errors were not declining despite the attention given to the issue after the 1999 Institute of Medicine report.


Finally, it’s hard to take the response from STAT News seriously, because its authors don’t seem to have even read the study. They complain that “The new estimate of 251,454 deaths matters because the sensational figure is imprecise and may be wrong by a large magnitude.” Well, yes, of course it’s “imprecise,” because that’s exactly the point Makary is making. The disturbing part is that the estimates are, if anything, lower than the actual error rate, because these numbers are based upon indications in the hospital record that there was an error. If the hospital made a mistake and didn’t realize it, then that mistake was left out of the record entirely. The same goes for mistakes that the hospital didn’t properly report.


But the most troubling part of the STAT News response is this:


When it comes to suspected errors, those who think they can always pinpoint which actions led to potentially preventable harm are either kidding themselves or are incredibly arrogant. One of the most difficult things about medicine is that much of the time we don’t know for sure if an outcome would have been different had we acted another way.


I take the opposite view: anyone who thinks that the rate of medical errors can’t even be estimated until each and every suspected error is proven “for sure” is “either kidding themselves or [is] incredibly arrogant.” As Makary pointed out,


Currently, deaths caused by errors are unmeasured and discussions about prevention occur in limited and confidential forums, such as a hospital’s internal root cause analysis committee or a department’s morbidity and mortality conference. These forums review only a fraction of detected adverse events and the lessons learnt are not disseminated beyond the institution or department.


Those of us who practice medical malpractice law have a name for this: the white coat code of silence. If doctors want to get better, they need to start being more open about their own errors and about their colleagues’ errors.