I like Consumer Reports, and though I wish they would take a more active stance in supporting consumers’ legal rights, I think they’ve done a fine service for American consumers over the past 70-odd years with their independent reviews.


However, I think their hospital ratings system is doing more harm than good. I opened up the latest issue, with ratings for most states — including Pennsylvania and New Jersey — and was surprised to see a variety of small ambulatory surgery centers and smaller hospitals in exurbs and rural areas trouncing the world-renowned teaching and research hospitals in Philadelphia.


I don’t think prestige makes anyone or any entity above criticism. Moreover, as a malpractice lawyer, I’m usually the first one to ring the alarm bells about the horrifying problems endemic to the medical profession, and I certainly don’t just take a doctor’s or a hospital’s word for the quality of their care. But when you review and litigate as many cases as we do, you get to know the medical community quite well, and something just plain didn’t sound right about the rankings, which seemed almost reversed. Few malpractice lawyers around here would agree that patients should choose some of the highest-ranked hospitals on the list (many of which are simply surgical centers with 30 or fewer beds) over some of Consumer Reports’ lower-ranked hospitals, like Pennsylvania Hospital and Jefferson Hospital, which received the lowest possible rating (a solid black blob), or Einstein and Hahnemann, which received the second-lowest possible rating (a half-black blob).


I wasn’t the only one surprised; up in Boston, Massachusetts, for example, Carney Hospital had the highest rating, while Brigham and Women’s and Mass General — both world-renowned — had the lowest. WBUR looked into the disparity in the surgical ratings and talked with Mass General’s Vice President for Quality and Safety, who pointed out a variety of problems with the ratings, including that they fail to account for several important factors:


  • how sick the patient was when they came in for surgery, or the severity of their disease
  • how many other conditions the patient may have
  • how many complications actually occurred


Indeed, reviewing the 44-page outline of how Consumer Reports rates hospitals reveals, first, that they’re not that different from the Hospital Safety Score, and, second, that their analysis is subject to numerous limitations. In assessing post-surgical complications, for example, the bulk of the score comes from the presence of Accidental Puncture or Laceration (30%), Pressure Ulcers (24%), Postoperative Pulmonary Embolism or Deep Vein Thrombosis (24%), and Central Venous Catheter-related bloodstream infection (13%).
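
To make the weighting concrete, here is a minimal sketch of how a composite score along these lines could be computed. The four weights are the ones quoted above from Consumer Reports’ methodology; the per-measure rates and the renormalization over the remaining 9% of the weight are hypothetical stand-ins of mine, not CR’s actual model:

```python
# Hypothetical sketch of a weighted composite complication score.
# Weights are the four quoted in Consumer Reports' methodology; the rates
# below and the renormalization are illustrative assumptions, not CR's model.

# Assumed complication rates per 1,000 eligible surgical discharges
rates = {
    "accidental_puncture_or_laceration": 3.1,
    "pressure_ulcer": 0.8,
    "postop_pe_or_dvt": 5.6,
    "central_line_infection": 0.4,
}

weights = {
    "accidental_puncture_or_laceration": 0.30,
    "pressure_ulcer": 0.24,
    "postop_pe_or_dvt": 0.24,
    "central_line_infection": 0.13,
}

def composite_score(rates, weights):
    """Weighted average of the measure rates, renormalized so the four
    listed weights (which sum to 0.91) behave like a full score."""
    total_weight = sum(weights.values())
    return sum(rates[m] * w for m, w in weights.items()) / total_weight

print(f"Composite complication rate: {composite_score(rates, weights):.2f} per 1,000")
```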


It’s not crazy to attempt to measure hospitals this way. All of these are part of the Agency for Healthcare Research and Quality’s National Quality Measures Clearinghouse. But two problems leap out at me. 


First, this data is not necessarily directly comparable across procedures and hospitals. The teaching and research hospitals are often the only hospitals that can even attempt to save the lives of the patients in the worst condition; more than a few patients in the Philadelphia metro area who are near death are transferred to Jefferson, or Temple, or Einstein, or the like, where the doctors operate on them, barely keep them alive, and then the patient develops a postoperative PE/DVT or a central line infection — not because adequate steps weren’t taken, but because the patient was in such bad shape that the complication was far more likely to happen.
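
To see why that matters, consider a toy example with numbers I’ve made up purely for illustration: a hospital’s observed complication rate is roughly its patients’ baseline risk times whatever reduction its quality of care achieves, so a better hospital with a sicker case mix can still post the worse raw number:

```python
# Toy numbers, purely illustrative: each hospital's observed complication
# rate is modeled as (baseline risk of its patient mix) x (care multiplier),
# where a multiplier below 1.0 means the hospital prevents some of the
# complications its case mix would otherwise produce.

community_center = {"baseline_risk": 0.02, "care_multiplier": 1.00}   # healthier patients, average care
teaching_hospital = {"baseline_risk": 0.10, "care_multiplier": 0.70}  # near-death transfers, better care

def observed_rate(hospital):
    return hospital["baseline_risk"] * hospital["care_multiplier"]

print(f"Community center:  {observed_rate(community_center):.1%} complications")
print(f"Teaching hospital: {observed_rate(teaching_hospital):.1%} complications")
# Prints 2.0% vs. 7.0%: the teaching hospital prevents 30% of the
# complications its case mix would produce, yet its raw rate looks
# three and a half times worse without risk adjustment.
```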


Second, the bulk of the data is typically self-reported by the hospital. If a patient develops one of the most common accidental cuts or tears during a procedure — i.e., a bowel perforation — and is negligently discharged, and then ends up presenting to another hospital, then that “Accidental Puncture or Laceration” never shows up in the operating hospital’s numbers. If, instead, the patient has the bowel perforation diagnosed before they leave the hospital — as it should be! — then it counts against the hospital. The same goes for pressure sores: I’ve seen more than a few hospitals deem a pressure sore to be “stage 2” forever to avoid having to tell Medicare that it became a stage 3 or stage 4, because Medicare will stop paying and it’ll go into the database. You don’t have to take my word for it, though. In 2010, when the Las Vegas Sun went through nearly a half-million patient billing records, it found that hospitals had reported just one-tenth of the actual number of “sentinel events” they were required to report.


As the saying goes in information technology: garbage in, garbage out. Many times, a higher number of reported incidents can paradoxically indicate better care at a hospital, by showing both that it is experienced in treating the most serious cases and that it has good quality-control measures in place.
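
Here is a toy illustration of that “garbage in” problem. The 14 percent capture rate comes from the HHS finding quoted below; everything else (the hospitals, their true harm rates, the 90 percent capture rate) is a made-up assumption for the sake of the arithmetic:

```python
# Toy illustration of how underreporting flips a ranking. The 14% capture
# rate is HHS's 2012 estimate quoted later in this post; the hospitals,
# true harm rates, and the 90% capture rate are made-up assumptions.

hospitals = {
    "Honest Hospital":         {"true_harm_rate": 0.05, "capture_rate": 0.90},
    "Underreporting Hospital": {"true_harm_rate": 0.12, "capture_rate": 0.14},
}

for name, h in hospitals.items():
    reported = h["true_harm_rate"] * h["capture_rate"]
    print(f"{name}: true rate {h['true_harm_rate']:.1%}, reported rate {reported:.2%}")

# Honest Hospital:         true rate 5.0%,  reported rate 4.50%
# Underreporting Hospital: true rate 12.0%, reported rate 1.68%
# Ranked on reported rates alone, the worse hospital looks nearly three
# times safer. Garbage in, garbage out.
```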


To Consumer Reports’ credit, they recognized this problem, and so, like the Las Vegas Sun, they tried to look past the self-reported numbers and evaluate the billing codes to discover unreported preventable errors. Buried in the 44 pages of Consumer Reports’ methodology, they admit: “this measure [of complications] is limited by the accuracy of coding of complications in the billing records and research suggests that the patient safety indicators significantly underreport the number of errors that occur in hospitals.” Yet Consumer Reports didn’t do that analysis for many of the smaller hospitals, thereby likely putting the larger hospitals — like the aforementioned teaching and research hospitals — at a disadvantage: “Data come from hospital billing claims for Medicare patients paid through the Inpatient Prospective Payment System, and thus only includes hospitals that are paid through that system. Results are only reported for hospitals with at least 25 cases for an individual measure.”


Moreover, even the billing itself is likely wrong about the true condition of the patient. As Consumer Reports puts it later in the methodology:


Research suggests that hospitals significantly underreport the number of adverse events that occur. As noted elsewhere, the data source we use is from billing claims data the hospital submits to CMS for payment. To some degree, such data are subject to what is commonly called “gaming,” in which a hospital intentionally provides an inaccurate or incomplete representation on the claim of what occurred during the hospital stay, in order to enhance their performance when the data are used to measure the occurrence of adverse events. Gaming is minimized by federal oversight audits, and by the fact certain types of inaccurate claims submission are seen as fraudulent billing to Medicare and punishable by law.


I suppose that depends on what they mean by “minimized.” Gaming is certainly reduced by audits and by False Claims Act liability, but it is by no means minimal — it’s widespread, and getting worse, particularly with the growth of new profit centers that depend on excess billing, like robotic surgery and Pharmacy Benefit Managers. But let’s put the intent of under-reporting hospitals aside and focus on the data: in 2012, Health & Human Services found that “Hospital incident reporting systems captured only an estimated 14 percent of the patient harm events experienced by Medicare beneficiaries.”


Truth is, even if we spend thousands of hours combing through Medicare billing — which even Consumer Reports didn’t quite try to do, as they instead worked through population-based models — we’re still going to miss the majority of preventable harms.


Consumer Reports’ response to the similar concerns raised in Boston was, in essence, “that’s the best we can do.” From the WBUR article:


“That’s a little bit of the point,” says Doris Peter, with Consumer Reports Health Ratings Center, “that clinical data is not widely available to the public and we can’t base our ratings on clinical data if it’s not made available.”


Dr. John Santa, medical director of Consumer Reports Health, says in the magazine that while the ratings aren’t perfect, “we think they’re an important step in giving patients information they need to make an informed choice.” And, he adds, “we hope that by highlighting performance differences, we can motivate hospitals to improve.”


On the one hand, Consumer Reports has a point: it is no answer for hospitals to say, “well, you don’t have the data to fairly evaluate these issues” while keeping that very same data under lock and key.


On the other hand, that doesn’t necessarily mean Consumer Reports, when telling consumers how to choose a hospital, should suggest they rely on the very same figures that Consumer Reports itself admits are flawed. Remember what happened when the D.C. school district put all its emphasis on standardized test scores? Or what has happened to law schools’ employment figures now that U.S. News tells law school applicants what to think? There are over 5,700 hospitals in the United States competing for over 36,000,000 annual admissions (stats here) and for hundreds of billions of dollars in net patient revenue. Read this article about hospital CEO pay and then ask yourself if you think the net effect of hospital rankings will be fewer errors or fewer reported errors.


The end result is that Consumer Reports has constructed a system that, at best, does little to truly inform patients and, at worst, punishes the hospitals that take on the hardest cases and try the hardest to improve their own quality.


But, since this post is about choosing hospitals, I figure I should write about how a patient should pick a hospital.


Most of the finest work of professionals — the work that truly sets them apart from one another — is wholly opaque to their patients and clients. Not too long ago, in the midst of a complicated case with a variety of specialized lawyers on both sides, I saw a lawyer on our side turn what looked like a total defeat for us into an inescapable procedural quagmire for the other side. Yet it’s a challenge to explain what occurred even to lawyers who aren’t versed in that unique specialty, much less to non-lawyers. I thus don’t have a lot of hope that we’ll ever be able to boil down “what makes a doctor or hospital great” into a red or black blob in a chart.


Just like with lawyers, accountants, and other professionals or skilled workers, when it comes to a doctor or hospital there’s no substitute for a knowledgeable personal recommendation. If you know a doctor or a nurse at a particular facility who speaks highly of that place, or of somewhere else they know well, then go with that — but most people won’t be able to get such a recommendation for a specialized doctor, which is probably why I-95 is littered with just as many billboards for orthopedic surgeons as for personal injury lawyers.


If you can’t find someone off the bat, keep trying. The old-fashioned way is still the best way: do your own research, learn as much as you can, and seek out people whose opinion you would trust and ask them for a recommendation. After you’ve picked one or two, don’t feel intimidated by the medical surroundings, and don’t be afraid to get a second opinion. In the end, a doctor is no different from a mechanic, plumber, or lawyer: they have a specialized skill you don’t have and, hopefully, specialized experience. Ask them how many times they’ve performed the procedure, what outcomes resulted, and what they worry about when performing it. That last question startles a lot of doctors, but it’s often the best way to assess their experience and their ability.


Whatever you do, don’t blindly trust a blob in a consumer magazine.