A week ago, the Wall Street Journal published an excellent article, “Clues to Better Health Care From Old Malpractice Lawsuits,” which detailed the way that malpractice insurers and medical safety groups have been poring over thousands of closed malpractice cases, looking for ways to improve health care.
As the Wall Street Journal says:
There are common themes in claims from almost every medical specialty—including failure to properly diagnose a patient or poor technique in a procedure. But data collections from different specialty groups are also helping to identify issues unique to different types of doctors, including primary-care physicians, anesthesiologists, emergency-room doctors and cardiologists.
It should come as no surprise that many of the “issues unique to different types of doctors” are exactly the same types of cases for which medical malpractice lawyers routinely advertise. Consider this list of findings and suggested improvements for emergency care:
Of 332 claims studied from 2007 to 2013, 52% of patient injuries were found to be caused by patient assessment issues.
- The diagnosis was wrong because the doctor failed to consider other diseases and conditions with similar symptoms.
- The doctor failed to order the right diagnostic tests.
- The patient was discharged too soon.
- The physician didn’t deal with abnormal test results or examination findings or use available information in the patient’s medical record.
Solutions: Avoid first-impression or intuition-based diagnosis. Make sure all specialists who evaluate patients have complete data.
Indeed. The very best source for information about how to improve health care comes from, no surprise, the times when it didn’t work right. That is to say: medical malpractice lawsuits.
A recent article in the British Medical Journal made the headline-grabbing claim that medical errors were now “the third leading cause of death in the US,” behind only cancer and heart disease. Medical errors, in their estimate, caused more deaths each year than motor vehicles, firearms, and suicides combined.
The backlash from the medical profession has already started. STAT News posted an equally provocative article, written by an assistant professor of medicine, “Don’t believe what you read on new report of medical error deaths.” MedPageToday grumbled about the “superficial coverage” and made several complaints. Skeptical Scalpel said the article “shines no new light, only heat, on the subject.”
So who’s right?
Let’s start with two basic points. First, the primary author of the BMJ article, Martin Makary, isn’t a quack, but rather a surgeon at Johns Hopkins who has regularly written about transparency in health care. Second, the BMJ article wasn’t an original study, but rather a two-page analysis of existing literature about medical errors. There wasn’t much “new” in the article, except that the authors took the estimated rate of death from medical error in the 1999 Institute of Medicine report, To Err Is Human: Building a Safer Health System, and extrapolated it to the present by using 2013 US hospital admissions.
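The extrapolation itself is simple arithmetic: a pooled death rate multiplied by annual hospital admissions. Here is a minimal sketch; the specific input figures below are assumptions drawn from my reading of the published BMJ article, not from the text above, so treat them as illustrative.

```python
# Sketch of the BMJ-style extrapolation:
#   (pooled rate of death from medical error) x (annual US admissions)
# The inputs below are assumptions for illustration.

pooled_death_rate = 0.0071        # ~0.71% of admissions, pooled from post-1999 studies
us_admissions_2013 = 35_416_020   # reported 2013 US hospital admissions

estimated_deaths = pooled_death_rate * us_admissions_2013
print(f"Estimated annual deaths from medical error: {estimated_deaths:,.0f}")
```

The controversy is less about this multiplication than about its inputs: whether the pooled rate from small chart-review studies can fairly be applied to all hospital admissions.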
On Monday, a jury in Missouri hit Johnson & Johnson with a $55 million verdict in favor of a woman who developed ovarian cancer after decades of using talc baby powder in her vaginal area as part of her normal routine. Younger readers might find this practice unusual, but this was commonly recommended and encouraged through advertisements with slogans like, “just a sprinkle a day keeps odor away.” To this day, Johnson & Johnson still doesn’t warn against use in the vaginal area, and instead continues to encourage adults to use it all over their bodies, because it “gives a cooling sensation, and helps to prevent chafing.”
The case was the second such huge verdict this year, following a $72 million verdict in February. But this verdict is in many ways a better indicator of the strength of these lawsuits: this case was selected for trial by the defendants, apparently based on the belief that the woman’s pre-existing endometriosis would absolve Johnson & Johnson. As the defense lawyer told the jury:
“The fact is, endometriosis is a recognized, significant risk factor for ovarian cancer,” she said. “It’s highly unlikely [that] Mrs. Ristesund would have had ovarian cancer if she had not had endometriosis. In fact, there’s no proof, none, that she wouldn’t have had ovarian cancer had she not used talc. None.”
Afterwards, another of Johnson & Johnson’s lawyers told the press: “The scientific reality is that cosmetic talc does not cause cancer.” This statement isn’t true, but I’ll get to that in a moment.
Heparin is one of the most basic drugs in medicine, the primary anticoagulant used by hospitals, which is why it’s part of the World Health Organization’s List of Essential Medicines.
But anticoagulants are so powerful that some are used as rat poison. Anticoagulants make a patient 10 times more likely to develop intracerebral hemorrhage, and thus all of them, from heparin to warfarin (sold under the brand name Coumadin), have to be used with the utmost caution.
A Philadelphia jury has hit the Hospital of the University of Pennsylvania with a $44.1 million verdict for failing to recognize a woman’s adverse reaction to anti-coagulant medication before she suffered a brain hemorrhage.
While at the neurological intensive care unit, Tate was given the heparin. Using an activated partial thromboplastin time (or aPTT) test, staff measured coagulation in Tate’s blood as it rose from 19 seconds to nearly 32 seconds, court papers said. Testing then stopped for two days, until Tate sustained the brain hemorrhage, Tate’s memo said. When she was tested again, her aPTT level was 61, according to Tate’s memo.
The amount of heparin given is typically based upon a “nomogram,” in which the patient’s initial heparin dose is calculated purely on the basis of their weight. But that’s just to start the heparin. The cardiovascular system is dynamic and constantly changing in response to conditions, including both whatever medical condition brought the patient to the hospital (like surgery) and to reactions to the medicine itself. Thus, as a 2012 review by the American College of Chest Physicians noted, “because the anticoagulant response to heparin varies among patients, it is standard practice to monitor heparin and to adjust the dose based on the results of coagulation tests.” There are a couple of different types of coagulation tests. For years, aPTT was the most common, although newer evidence points towards using the antifactor Xa test.
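To make the “nomogram” idea concrete, here is a minimal sketch of how a weight-based protocol is structured. The dosing numbers and aPTT thresholds are illustrative assumptions, loosely modeled on published weight-based nomograms; this is a sketch of the logic, not clinical guidance.

```python
# Illustrative weight-based heparin nomogram. The initial dose depends
# only on weight; every later adjustment depends on the aPTT result.
# All numbers are assumptions for illustration -- NOT medical advice.

def initial_dose(weight_kg: float) -> dict:
    """Starting dose is computed purely from body weight."""
    return {
        "bolus_units": 80 * weight_kg,           # one-time bolus
        "infusion_units_per_hr": 18 * weight_kg  # continuous infusion
    }

def adjust_infusion(aptt_seconds: float, current_rate: float, weight_kg: float) -> float:
    """Adjust the infusion rate from the latest aPTT test result."""
    if aptt_seconds < 35:         # clotting too fast: increase dose
        return current_rate + 4 * weight_kg
    elif aptt_seconds < 46:
        return current_rate + 2 * weight_kg
    elif aptt_seconds <= 70:      # therapeutic range: no change
        return current_rate
    elif aptt_seconds <= 90:      # clotting too slowly: decrease dose
        return current_rate - 2 * weight_kg
    else:                         # dangerously high: hold and decrease
        return current_rate - 3 * weight_kg

dose = initial_dose(70.0)
rate = dose["infusion_units_per_hr"]    # 1260 units/hr for a 70 kg patient
rate = adjust_infusion(32, rate, 70.0)  # a low aPTT pushes the rate up
```

The point isn’t the particular numbers; it’s the structure: weight sets only the starting dose, and the aPTT tests drive every adjustment afterward, which is why a two-day gap in testing defeats the whole protocol.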
“Evidence-based medical treatment guidelines” sounds like such a good idea. Who would want medical treatment that wasn’t based on evidence?
The problem is in the details. Way back in 1996, when “evidence-based medicine” was coming to the fore, the originators of the concept went out of their way to say “evidence-based medicine is not cookbook medicine,” and that it can “never replace individual clinical expertise and it is this expertise that decides whether the external evidence applies to the individual patient at all, and if so, how it should be integrated in a clinical decision.”
Fast-forward twenty years, and now the Pennsylvania General Assembly is considering whether to use evidence-based medicine as the sort of “cookbook medicine” it was never meant to be.
Over at The Green Bag, Judge Richard Posner published “What Is Obviously Wrong With the Federal Judiciary, Yet Eminently Curable, Part I.” The article is quintessential Posner: concise, expansive, forceful, and packed with good and bad ideas with minimal supporting citations.
Let’s focus today on his arguments about Federal Rule of Evidence 706:
A big problem with jury trials is that often they involve technological or commercial issues that few jurors understand (not that many judges understand them either) and that the lawyers and witnesses are unable or unwilling to dumb down to a level that the jurors would understand. There is a solution to this problem, however, though one that few judges employ: appointment by the judge of an expert witness (thus a “neutral” expert, by virtue of not having been selected by the lawyer for one party to the litigation). The authority to make such an appointment is explicitly conferred on federal judges by Rule 706 of the Federal Rules of Evidence, but is alien to the Anglo-American judicial culture, in which the witnesses in a case are designated by the lawyers rather than by the judge.
The fault is the culture. Our legal culture, in contrast to that of most countries in the world (notably Japan and the nations of Continental Europe), is “adversary,” in the sense that the judge is the arbiter of a contest – a drama, really – put on by the lawyers for the contending parties. . . .
Id. at 190 (hyperlink added). Judge Posner has been beating that drum for over twenty years now. See, e.g., Indianapolis Colts, Inc. v. Metro. Balt. Football Club Ltd., 34 F.3d 410, 414-415 (7th Cir. 1994). The chorus seems to be growing louder. Five years ago, the American Bar Association published an article summarizing much of the federal court precedent on the issue, and suggesting wider use. Three years ago, one of the experts appointed by Judge Posner wrote his own article in support. Last year, Pennsylvania trial judge Bradford H. Charles (Lebanon County) wrote a thorough law review article in favor of the practice.
As a parent, I find this another story that is impossible to comprehend: A 7-year-old girl is now dead after the bouncy castle she was playing on blew away at an Easter fair in Essex, England.
It is believed the castle was swept away by a gust of wind. The girl, Summer Grant, was taken to a local hospital and died of multiple injuries several hours later. A 24-year-old woman and a 27-year-old man have been arrested on suspicion of manslaughter by gross negligence, according to the Essex police on its Facebook page.
It’s of course tragic, but it’s not “impossible to comprehend.” Back in 2012, the medical journal Pediatrics published a study, “Pediatric Inflatable Bouncer–Related Injuries in the United States, 1990–2010,” which concluded:
From 1995 to 2010, there was a statistically significant 15-fold increase in the number and rate of these injuries, with an average annual rate of 5.28 injuries per 100 000 US children (95% CI: 2.62–7.95). The increase was more rapid during recent years, with the annual injury number and rate more than doubling between 2008 and 2010. In 2010, a total of 31 children per day were treated in US EDs for an inflatable bouncer–related injury, which equals a child every 46 minutes nationally.
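The study’s “a child every 46 minutes” figure follows directly from the “31 children per day” figure, and the arithmetic checks out:

```python
# Verifying the Pediatrics study's arithmetic: 31 ED-treated injuries
# per day works out to roughly one injured child every 46 minutes.
injuries_per_day = 31
minutes_per_day = 24 * 60  # 1440

minutes_between_injuries = minutes_per_day / injuries_per_day
print(f"One injury every {minutes_between_injuries:.0f} minutes")  # -> One injury every 46 minutes
```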
It’s not incomprehensible when an inflatable amusement floats away in a gust of wind. It’s preventable.
And that’s the essence of my job: figuring out, after the fact, whether an accident was preventable. You can imagine how risk averse I am as a result.
However, just because I’m a trial lawyer doesn’t mean that my kids can’t have fun. It just means that I strive to be reasonable, but sometimes “reason” has to almost hit me on the head.
Back in July 2014, I wrote a post about the misuse of “statistical significance” by defendants and courts trying to apply the Daubert standard to scientific evidence. As I wrote,
It’s true that researchers typically use statistical formulas to calculate a “95% confidence interval” — or, as they say in the jargon of statistics, “p < 0.05” — but this isn’t really a scientifically-derived standard. There’s no natural law or empirical evidence which tells us that “95%” is the right number to pick to call something “statistically significant.” The number “1 in 20” was pulled out of thin air decades ago by the statistician and biologist Ronald Fisher as part of his “combined probability test.” Fisher was a brilliant scientist, but he was also a eugenicist and an inveterate pipe-smoker who refused to believe that smoking causes cancer. Never underestimate the human factor in the practice of statistics and epidemiology.
(Links omitted; they’re still in the original post.) As expected, defense lawyers criticized my post.
Last week, the American Statistical Association published its very first “policy statement” on “a specific matter of statistical practice,” making clear that tossing around the term “statistical significance” is a “considerable distortion of the scientific process”:
Practices that reduce data analysis or scientific inference to mechanical “bright-line” rules (such as “p < 0.05”) for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision-making. A conclusion does not immediately become “true” on one side of the divide and “false” on the other. Researchers should bring many contextual factors into play to derive scientific inferences, including the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis. Pragmatic considerations often require binary, “yes-no” decisions, but this does not mean that p-values alone can ensure that a decision is correct or incorrect. The widespread use of “statistical significance” (generally interpreted as “p ≤ 0.05”) as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process.
Hallelujah!
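The ASA’s point about “bright-line” rules can be made concrete with a few lines of code: two experiments with nearly identical results can land on opposite sides of p = 0.05. The z-scores below are assumptions chosen purely for illustration.

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z-score under the standard normal,
    using the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Two almost indistinguishable experimental results...
p_a = two_sided_p(1.95)  # ~0.0512 -> labeled "not significant"
p_b = two_sided_p(1.97)  # ~0.0488 -> labeled "significant"

# ...fall on opposite sides of the supposed bright line at 0.05,
# even though the underlying evidence is essentially the same.
print(f"{p_a:.4f} vs {p_b:.4f}")
```

Nothing about the underlying science changes between a z-score of 1.95 and 1.97; only the label does. That is exactly the “erroneous beliefs and poor decision-making” the ASA statement warns about.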
At the invitation of the George Mason University’s Law & Economics Center, I recently went to Washington D.C. to debate Ana Reyes of Williams & Connolly on the subject of preemption in drug injury lawsuits. The video is available here.
I stated many facts during the discussion, and below are my sources for them.
The science media has blown up recently over Sci-Hub, dubbed “the Pirate Bay of the science world.” Here’s a BigThink article, a ScienceAlert article, and an Atlantic article. Sci-Hub is, to put it mildly, the greatest open repository of scientific papers in the history of the world. There’s just a small problem: those papers are almost all copyrighted, and the whole purpose of Sci-Hub is to circumvent paying the copyright holder.
Unsurprisingly, Elsevier, the juggernaut scientific journal publisher, has sued the proprietor of Sci-Hub, neuroscientist Alexandra Elbakyan, for running the database. Elsevier says in their complaint that they host “almost one-quarter of the world’s peer-reviewed, full-text scientific, technical and medical content,” amounting to “over 10 million copyrighted publications.” As they brag, “[m]ore than 15 million researchers, health care professionals, teachers, students, and information professionals around the globe rely on ScienceDirect as a trusted source of nearly 2,500 journals and more than 26,000 book titles” — all of whom have to pay for access, typically $35 per article.
In case you’re wondering: the actual authors of the articles don’t receive a dime of that income. Elsevier owns the copyright to those articles. Elsevier thus doesn’t create anything; they’re just the middleman between those 15 million “researchers, health care professionals, teachers, students, and information professionals” and the accumulated knowledge they need to do their jobs. Even Harvard found it difficult to stomach the huge fees charged by Elsevier. Perhaps even more frustrating, many of those papers sitting behind a paywall were funded by U.S. taxpayers through National Institutes of Health grants, but the NIH’s public access policy only requires public access “no later than 12 months after the official date of publication.” That’s fine for the casual reader, but for researchers in the field, it means they’re paywalled off from the latest scientific information.