Back in July 2014, I wrote a post about the misuse of “statistical significance” by defendants and courts trying to apply the Daubert standard to scientific evidence. As I wrote,

It’s true that researchers typically use statistical formulas to calculate a “95% confidence interval” — or, as they say in the jargon of statistics, “p < 0.05” — but this isn’t really a scientifically-derived standard. There’s no natural law or empirical evidence which tells us that “95%” is the right number to pick to call something “statistically significant.” The number “1 in 20” was pulled out of thin air decades ago by the statistician and biologist Ronald Fisher as part of his “combined probability test.” Fisher was a brilliant scientist, but he was also a eugenicist and an inveterate pipe-smoker who refused to believe that smoking causes cancer. Never underestimate the human factor in the practice of statistics and epidemiology.

(Links omitted; they’re still in the original post.) As expected, defense lawyers criticized my post.

Last week, the American Statistical Association published its very first “policy statement” on “a specific matter of statistical practice,” making clear that tossing around the term “statistical significance” is a “considerable distortion of the scientific process”:

Practices that reduce data analysis or scientific inference to mechanical “bright-line” rules (such as “p < 0.05”) for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision-making. A conclusion does not immediately become “true” on one side of the divide and “false” on the other. Researchers should bring many contextual factors into play to derive scientific inferences, including the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis. Pragmatic considerations often require binary, “yes-no” decisions, but this does not mean that p-values alone can ensure that a decision is correct or incorrect. The widespread use of “statistical significance” (generally interpreted as “p ≤ 0.05”) as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process.

Hallelujah!
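The ASA’s point about “bright-line” rules is easy to see numerically. Here’s a minimal sketch (plain Python, standard library only; the z-values are illustrative, not drawn from any particular study) of two test statistics that represent nearly identical evidence yet land on opposite sides of the p = 0.05 line:

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Two nearly identical test statistics straddling the conventional cutoff:
for z in (1.95, 1.97):
    print(f"z = {z:.2f}  ->  p = {two_sided_p(z):.4f}")
# p comes out just above 0.05 for z = 1.95 and just below it for z = 1.97
```

Nothing about the underlying evidence changes meaningfully between z = 1.95 and z = 1.97; only the “statistically significant” label flips.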
Continue Reading

Update: It’s worth pointing out that, a year and a half after Dr. Anick Bérard’s testimony was precluded as “unreliable,” she published in the Journal of the American Medical Association, using many of the same methods the court deemed unacceptable.

Back in 2012, I wrote: “Scientific evidence is one of those rare areas of law upon which every lawyer agrees: we are all certain that everyone else is wrong.”

There have been some missteps in the law’s use of scientific proof as evidence in civil litigation — like when the Supreme Court affirmed a trial court holding in Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999), that an engineer with a Master’s in Mechanical Engineering who had worked in tire design and failure testing at Michelin was nonetheless incompetent to testify about tire failures — but, by and large, the standard articulated in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) makes sense. Courts review an expert’s methods, rather than their conclusions, to ensure that the expert’s testimony has an appropriate scientific basis.

To go with the baseball metaphors so often (and wrongly) used in the law, when it comes to Daubert, the judge isn’t an umpire calling balls and strikes; they’re more like a league official checking to make sure the players are using regulation equipment. Mere disagreements about the science itself, and about the expert’s conclusions, are for the jury to resolve in the courtroom.

In practice, though, the Daubert standard runs into problems when courts erroneously decide factual disputes about methodology and conclusions, issues which are better left to cross-examination of the experts at trial. Consider the June 27, 2014 opinion in the Zoloft birth defects multidistrict litigation, which struck the testimony of plaintiffs’ “perinatal pharmacoepidemiologist,” Dr. Anick Bérard. Dr. Bérard holds a Ph.D. in Epidemiology and Biostatistics from McGill University, teaches at the Université de Montréal, and has conducted research on the effects of antidepressants on human fetal development. She was prepared to opine that “Zoloft, when used at therapeutic dose levels during human pregnancy, is capable of causing a range of birth defects (i.e., is a teratogen),” an opinion based upon her review of a variety of studies showing a correlation between SSRI use and birth defects. The court had multiple grounds for striking the opinion, but a key issue relating to statistics jumped out at me.
Continue Reading

Scientific evidence plays a crucial role in virtually all mass torts cases (whether prescription drugs, environmental exposures, or consumer products), and so, when the National Research Council and the Federal Judicial Center published the Third Edition of the Reference Manual on Scientific Evidence, lawyers took note. Apart from Supreme Court opinions — which these days often raise more questions than they answer, one reason Daubert is still the leading case twenty years later — the Manual is likely the primary reference federal judges use to guide them in deciding what scientific evidence they allow into a jury trial.

Scientific evidence is one of those rare areas of law upon which every lawyer agrees: we are all certain that everyone else is wrong.

Defense lawyers think judges too easily allow in “junk science” from plaintiffs, citing the silicone breast implant litigation, which resulted in over $3 billion in settlements and compensation for autoimmune injuries that most scientists now agree weren’t caused by the implants. Plaintiffs’ lawyers, in turn, think the silicone implant case is the exception that proves the rule, and that courts these days more frequently use Daubert and Frye to destroy plaintiffs’ cases by wrongly excluding from trial valid scientific and medical testimony (here’s an example involving vinyl chloride and cancer, and another involving Tylenol and liver damage, and don’t forget Kumho Tire’s indefensible exclusion of an eminently qualified tire tread separation expert), while allowing defendants to bring in all kinds of unscientific nonsense (like the natural forces nonsense in shoulder dystocia lawsuits that’s allowed everywhere except New York).

(In the criminal context, prosecutors complain about the “CSI Effect,” the claim that jurors today expect forensic evidence in every case, while criminal defense lawyers counter that the forensic evidence offered is often garbage and speculation from people with a diploma mill degree.)

As far as I can tell, it was mostly defense lawyers who took note of the Reference Manual publicly, and they took a starkly negative view of it. Nathan Schachtman says “there is a good deal of equivocation between encouraging judges to look at scientific validity, and discouraging them from any meaningful analysis by emphasizing inaccurate proxies for validity, such as conflicts of interest.” David Oliver has been on the warpath, claiming “the fix is in” and most recently criticizing the chapter, “How Science Works,” written by David Goodstein, Professor of Physics and Applied Physics at Caltech.

Oliver complains:

Avoiding any pretense of humility the Reference Manual dismisses as woefully naive and inadequate those claims about the essence of the scientific endeavor that were ingrained in us in school. … Unsurprisingly the Reference Manual, operating on the view that objectivity is an illusion, that you can never prove anything is false and that you can never prove anything is true (“the apparent asymmetry between falsification and verification that lies at the heart of Popper’s theory thus vanishes”) and thus without any track to follow, quickly careens into post-modernism. … So all the great thinkers were wrong. Objectivity is out. Testability is out. Keeping an open mind is out. Skepticism is right out. The appeal to authority is not a logical fallacy but fundamental to science.

I think Oliver has misunderstood the purpose of the chapter. 
Continue Reading

Once you’ve been a trial lawyer for long enough, there are some consumer products you just don’t look at the same anymore, because you’ve heard about them too many times from other trial lawyers or because you’ve sat across a conference table from someone telling you about the worst thing that ever happened to their family. ATVs cause a death or two every day. Gas cans without a flame arrestor or a spill-proof lid severely burn a child under six years old every day or two. Trampolines send 275 kids and teenagers to the emergency room with serious injuries every day.

So it goes with tire failures, which cause a death or two a day, and 15-passenger vans, which have a fatal crash or two every week. Tire blowouts and tread separation are so common, and passenger vans so prone to rollovers, that, when I saw the main characters in Inception get into a Ford E-Series, I instinctively thought, “they’re going to roll it.” (Sure enough, they did, though in fairness to the van, they were being rammed.)

Last week, the National Highway Traffic Safety Administration (NHTSA) sent out a well-meaning press release warning “colleges, church groups, and other users of 15-passenger vans” to take additional precautions, because:

Recognizing that 15-passenger vans are particularly sensitive to loading, the agency warns users never to overload these vehicles under any circumstances. NHTSA research shows overloading 15-passenger vans both increases rollover risk and makes the vehicle more unstable in any handling maneuvers.

Tire pressure can vary on front and back tires that are used for 15-passenger vans. This is why the agency urges vehicle users to make certain the vans have appropriately-sized and load rated tires that are properly inflated before every trip. Taking into account the fact that tires degrade over time, NHTSA recommends that spare tires not be used as replacements for worn tires. In fact, many tire manufacturers recommend that tires older than 10 years not be used at all.

It’s those last two sentences that drive trial lawyers like me bonkers. The NHTSA knows that’s a grossly inadequate warning, and knows most consumers have no idea of the real danger of tire failure or how to prevent it. The propensity of old tires toward tread separation and blowout is a simple scientific fact, but the $30 billion tire industry, the NHTSA, and some courts have all resisted accepting it for years.

Nudged forward by the Bridgestone / Ford Explorer tragedies, the NHTSA in August 2007 finally published its report on tire aging and accidents, but frustratingly went no further than “NHTSA’s research supports the conclusion that the age of a tire, along with factors such as average air temperature and inflation, plays some role in the likelihood of its failure,” without drawing any firm conclusions. It has since initiated a follow-up study.

The car manufacturers, though, have long since distanced themselves from the idea that tires can last forever, or that the only time that matters is “time in service.” The ten-year expiration date referenced by the NHTSA is what the tire manufacturers begrudgingly admit to, but the real figure from the car manufacturers is six years. Ford, Chrysler, Nissan, BMW, Mercedes-Benz, Volkswagen and Toyota all say six years. I don’t know of any car manufacturer willing to recommend a longer period.

Why not? Because they know tires after six years, particularly in hot climates, will oxidize and break down, so that the glue holding the tire together starts to degrade, making tread separation far more likely. Car companies may or may not make safety job #1, but they sure do respond to lawsuits, and the more times they get hit with multi-million dollar verdicts for knowingly having defective tires on their vehicles, the more likely they are to do something about it.

Yet, apparently there haven’t been enough lawsuits, because there are still tens of thousands of expired and dangerous tires out there. As Rich Newsome noted while discussing a Yokohama recall, the danger is a matter of high-school chemistry: 
Continue Reading

One of the more sobering parts of being a trial lawyer is reviewing intakes of potential cases. We routinely talk with people who have just lost a spouse or child or who have recently suffered an injury that will leave them permanently disabled. Many of these accidents happened in the course of activities we all know to have an element of danger, but many involve doing the same thing a million other people do every day. No one expects that giving their kid Motrin will cause a horrific skin disease or that their tap water might be so polluted that it’s flammable.

Now, a growing body of medical studies shows that acetaminophen (Tylenol in the US, Paracetamol everywhere else) is dangerous at far lower doses than previously believed. It’s been known for decades that acetaminophen overdoses cause liver damage (for example, “acetaminophen hepatotoxicity far exceeds other causes of acute liver failure in the United States,” and some estimates by the American Association of Poison Control Centers suggest more than 50,000 emergency department visits every year related to acetaminophen), particularly when combined with alcohol, but it was generally considered safe if taken at anywhere near the recommended amounts.

Recent studies suggest that’s not the whole story. Just a few weeks ago, a new study in the British Journal of Clinical Pharmacology found that “staggered overdoses,” in which patients repeatedly took amounts slightly higher than the recommended dosage, were the cause of a substantial portion of the hospital admissions for acetaminophen-induced liver damage, and could be more dangerous than individual overdoses, in part because staggered overdoses were harder to diagnose and treat.

In June 2009, the FDA’s Drug Safety and Risk Management Advisory Committee, Nonprescription Drugs Advisory Committee, and the Anesthetic and Life Support Drugs Advisory Committee all voted in favor of:

  • Reducing the current maximum daily dose of acetaminophen in nonprescription products to below 4 grams/day
  • Limiting formulations of over-the-counter liquid doses of acetaminophen
  • Eliminating prescription acetaminophen combination products (e.g., with oxycodone)
  • Requiring a boxed warning for prescription acetaminophen combination products

The FDA disappointingly didn’t act on most of that, and instead took eighteen months to take the weakest action it could:

On January 13, 2011, FDA announced that it is asking manufacturers of prescription acetaminophen combination products to limit the maximum amount of acetaminophen in these products to 325 mg per tablet, capsule, or other dosage unit. FDA believes that limiting the amount of acetaminophen per tablet, capsule, or other dosage unit in prescription products will reduce the risk of severe liver injury from acetaminophen overdosing, an adverse event that can lead to liver failure, liver transplant, and death.

The size of the individual dosage unit was never the problem, though. As the Acetaminophen Hepatotoxicity Working Group for the FDA Advisory Panel found in its report, the problem was far more complicated than the pills being too big:

There is no single factor that leads consumers (also referred to as patients in this report) to develop acetaminophen-related liver injury. The contributing conditions for these cases are multi-factorial and require different interventions that attempt to address each factor. For example, when someone takes an amount greater than labeled, it is unclear whether it is a case of failing to read the directions, failing to understand the directions, failing to understand that severe liver injury can result from not following the directions or failing to realize that more than one of the medications used contained acetaminophen.

The Working Group concluded, “Thus, it is necessary to address all of these causes in attempting to prevent future cases, making clear directions conspicuous and easy to understand and making consequences of overdose unequivocally clear.” (Emphasis added.)

It’s not just the pill size. It’s not just the recommended maximum dosage. The core problem is that consumers and patients have learned, from years of Tylenol advertising and liberal use of acetaminophen by their parents, nurses, and doctors, that it’s a “safe” drug, like caffeine, that can be used every day without much consequence unless you have a particular susceptibility to it or intentionally take way too much. Consumers look at recommended dosages the way they look at speed limits: you can use that amount without any problems, but try not to go too far above it. The problem is, if you do that with the 4 grams/day acetaminophen guideline, you run a much higher risk of liver damage, even if you don’t do it all the time.
Continue Reading

[Update: For the first time, the federal Environmental Protection Agency has released a report finding a connection between fracking and groundwater contamination at a site in Wyoming. Unsurprisingly, natural gas companies have gone on the offensive about it. The report certainly helps strengthen plaintiffs’ claims arising from these types of contamination, but such claims remain on the cutting edge of science, and thus are difficult to prove in court, because our court system operates primarily off of long-standing scientific consensus rather than novel, even if strongly meritorious, theories and evidence. Much will depend on what comes out of the EPA’s big report on the health hazards of fracking, slated for release in 2014.]

I’ll admit it: I find real estate law boring. On Monday, though, I saw an article about an 1882 Pennsylvania case that touched upon gas deposits in the Marcellus Shale, a hot topic these days:

In 1882, the Pennsylvania Supreme Court announced a presumption that a reservation of “minerals” does not include oil absent evidence within the four corners of the deed of a contrary intent. Dunham v. Kirkpatrick, 101 Pa. 36 (1882). In 1960, the Supreme Court announced that its decision in Dunham was a rule of Pennsylvania property law and that pursuant to the Dunham logic, a grant or reservation of “oil” would not include “gas” absent clear expression of the parties’ intent to do so. Highland v. Commonwealth, 161 A.2d 390 (Pa. 1960).

In Butler v. Charles Powers Estate, the Court of Common Pleas of Susquehanna County was faced with a similar claim that a reservation of “one half the minerals and Petroleum Oils” included the Marcellus Shale and, therefore, any gas contained therein. … On September 7, 2011, the Superior Court of Pennsylvania (at No. 1795 MDA 2010, 2011 PA Super 198) reversed and remanded Butler, ruling that the Plaintiffs should be given the right to develop a record in an attempt to prove: a) that the Marcellus Shale is a “mineral” and therefore within the reservation; b) that as unconventional gas, Marcellus Shale gas was not the type of natural gas contemplated in Dunham and Highland; and c) that shale may be more similar to coal than conventional oil and gas reservoirs, so that under Pennsylvania’s nearly unique position that coalbed methane is owned by the owner of the coal (see U.S. Steel Corp. v. Hoge, 468 A.2d 1380 (Pa. 1983)), the owner of the shale may own any gas contained therein.

Almost as interesting as the Butler ruling — which, it could be argued, was very favorable to longtime Pennsylvania residents who might still hold some of these “mineral” rights and very unfavorable to the companies coming in for the natural gas “fracking” boom — was the author of the article, Russell L. Schetroma, of Steptoe & Johnson. There are two Steptoe & Johnson firms, but I didn’t know either to have expertise in Pennsylvania mineral rights law; that changed in September, when one of them acquired the energy practice of a mid-sized firm near Pittsburgh, including Mr. Schetroma. As The Legal Intelligencer coincidentally reported the next day:

As the natural gas industry continues to expand its footprint in Western Pennsylvania, the region is quickly becoming a desired destination for energy and environmental attorneys.

Two Texas-based firms with relatively new offices in the Pittsburgh area recently brought aboard attorneys who moved either hundreds or thousands of miles to be on the ground floor of Pennsylvania’s natural gas boom.

The article doesn’t even mention Steptoe & Johnson, but instead focuses on two other huge corporate firms racing to merge with small firms well versed in Pennsylvania real estate and environmental law. Even the lawyers (especially the lawyers?) are rushing to cash in on the Marcellus Shale natural gas boom.

The obvious group missing, though, are the trial lawyers, despite endless press and insurance company speculation that plaintiffs’ lawyers would be racing to file environmental contamination claims against oil and gas companies using hydraulic fracturing. After all, more than a year ago there was significant national press attention (Vanity Fair, New York Times) over contamination of water wells throughout Pennsylvania. That would, under ideal conditions, prompt government action to ensure the safety of residents, but the federal government has taken a pass — hydraulic fracturing was exempted from the Safe Drinking Water Act in 2005 — and Pennsylvania’s own state government is, shall we say, highly inclined to take the gas companies’ side.

It would thus seem to be a perfect storm for lawsuits: the government has given multiple wealthy corporations almost carte blanche to engage in a dangerous process which can cause serious property damage and personal injury.

For all the discussion about environmental contamination caused by fracking in Pennsylvania, though, only a few lawsuits have been filed. I only know of two that have even passed the initial pleadings stage: Fiorentino v. Cabot Oil & Gas Corp. (reported by Reuters here) and Berish v. Southwestern Energy (reported by the NYTimes here). Both allege violations of Pennsylvania’s Hazardous Sites Cleanup Act (35 P.S. §§ 6020.101-6020.1305 (“HSCA”)), along with claims for negligence, private nuisance, strict liability, and trespass, and both seek the establishment of a Medical Monitoring Trust Fund.

Both cases have survived the defendants’ motions to dismiss — the Fiorentino order is here, the Berish order here, and both are clear and well-written, and thus recommended reading even for non-lawyers — and are now in discovery. Fiorentino, arising from contamination in Dimock, has resulted in a settlement of the claim brought by the Pennsylvania Department of Environmental Protection to compel Cabot to pay for connecting the affected residents to the main water systems, for methane mitigation systems, and for water treatment systems, but the plaintiffs’ individual claims for damages remain.

Seeing only two cases at the moment, neither of which has made it to any dispositive legal holdings or factual findings, the question, then, is: where are all the fracking water contamination lawsuits?

Continue Reading

In retrospect, it’s obvious: battering your brain and sustaining concussions on a regular basis as part of your job can have severe long-term consequences. I remember back when I played football in school that there was already a long-standing debate over the apparent safety of big, heavy helmets with wire face masks. At first blush, it seemed the answer to the broken noses, broken jaws, and facial and head lacerations that had long plagued football was to use modern plastic injection-molding techniques and build bigger helmets with bigger face masks. More padding is safer than less padding, right?

The helmets, though, opened up an entirely new set of tactics in which players would use their own heads — shielded by the hard helmets and face masks — as weapons. If you’re a coach or an owner, why limit players to shoving opponents around when they can use their helmets as a battering ram? The NCAA and the NFHS both quickly picked up on the technique and banned initial contact with the head in blocking and tackling, but the NFL declined.

The effect, in terms of brain injury, was to convert football from a grappling sport like rugby or wrestling, characterized by limb and torso fractures, into a striking sport like boxing, characterized by closed head injuries. As with boxing’s move to bigger and bigger gloves, the result is a sport that’s a lot less bloody but a lot more dangerous. As ugly as mixed martial arts fights get, the truth is, they’re safer on the brain (PDF of “Incidence of Injury in Professional Mixed Martial Arts Competitions” in the Journal of Sports Science and Medicine) because there are only so many times you can punch someone in the face with an ungloved hand before giving up because of the pain or a broken hand. (“I broke his hand with my face” is more than just a schoolyard excuse.) In contrast, there’s no limit on how many times someone wearing large, soft boxing gloves can batter their opponent’s brain, and a large number of fights today end with a knockout — and the concussion that causes a fighter to stay down for ten seconds.

But, no matter how obvious it may have been even at the time, the NFL continued to deny any connection between routine closed head injuries in football and long-term consequences like dementia or early-onset Alzheimer’s disease. Players believed the league; the NFL was undoubtedly in the better position to know.

That all started to change two years ago. From the new Easterling et al. v. National Football League putative class action:

On September 30, 2009, as a part of its continuing active role in disputing and covering up the causative role of repeated concussions suffered by NFL players and long-term mental health disabilities and illnesses, the defendant disputed the results of a scientific study that it funded. On the aforementioned date, newspaper accounts were published detailing (an unreleased) a study commissioned by the NFL to assess the health and well-being of retired players, which found that the players had reported being diagnosed with dementia and other memory-related diseases at a rate significantly higher than that of the general population. Despite the findings of this study, showing that 6.1 percent of retired NFL players age 50 and above reported being diagnosed with dementia, Alzheimer’s disease and other memory related illnesses, compared to a 1.2 percent for all comparably aged U.S. men, the defendant’s agents disputed these findings and continued the mantra in the Press that there is no evidence connecting concussions, concussion like symptoms, NFL football and long-term brain illness or injury, including but not limited to Chronic Traumatic Encephalopathy (CTE), dementia, etc.
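Taking the complaint’s figures at face value, the disparity is stark. Here’s a back-of-envelope calculation (my own illustration, not from the study or the complaint; it says nothing about statistical uncertainty or confounders, only the raw ratio of the two reported rates):

```python
# Rates as reported in the complaint, taken at face value for illustration
nfl_rate = 0.061  # retired NFL players age 50+, dementia/memory-related illness
gen_rate = 0.012  # comparably aged U.S. men generally

relative_rate = nfl_rate / gen_rate
print(f"Reported rate ratio: {relative_rate:.1f}x")  # roughly 5x
```

In other words, the NFL’s own commissioned study reported a rate roughly five times that of the general population, which is what the league’s agents then disputed.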

The issue was then dramatically brought back into headlines by the suicide of Dave Duerson, who, in an ironic mixture of mental illness and rational foresight, donated his brain to Boston University so they could test it for brain damage. They did, and found signs of chronic traumatic encephalopathy.

The Plaintiffs in the new action — seven former NFL players, including Jim McMahon — allege that the NFL knowingly kept the sport violent and dangerous (which, some commentators argue, is what NFL fans want) and seek to certify a class of:

All former NFL players who sustained a concussion(s) or suffered concussion like symptoms while in the NFL league, and who have, since leaving the NFL, developed chronic headaches, chronic dizziness or dementia or Alzheimer’s disease and/or other physical and mental problems as a result of the concussion(s) suffered while a player.

The lawsuit seeks money damages, declaratory relief, and “the establishment of a medical monitoring class.”

And that’s where they’ll have a problem.
Continue Reading