Scientific evidence plays a crucial role in virtually all mass tort cases (whether prescription drugs, environmental exposures, or consumer products), and so, when the National Research Council and the Federal Judicial Center published the Third Edition of the Reference Manual on Scientific Evidence, lawyers took note. Apart from Supreme Court opinions — which these days often raise more questions than they answer, which is partly why Daubert remains the leading case twenty years later — the Manual is likely the primary reference federal judges use to guide them in deciding what scientific evidence to allow into a jury trial.

Scientific evidence is one of those rare areas of law upon which every lawyer agrees: we are all certain that everyone else is wrong.

Defense lawyers think judges too easily allow in “junk science” from plaintiffs, citing the silicone breast implant litigation, which resulted in over $3 billion in settlements and compensation for autoimmune injuries that most scientists now agree weren’t caused by the implants. Plaintiffs’ lawyers, in turn, think the silicone implant case is the exception that proves the rule, and that courts these days more frequently use Daubert and Frye to destroy plaintiffs’ cases by wrongly excluding from trial valid scientific and medical testimony (here’s an example involving vinyl chloride and cancer, and another involving Tylenol and liver damage, and don’t forget Kumho Tire’s indefensible exclusion of an eminently qualified tire tread separation expert), while allowing defendants to bring in all kinds of unscientific nonsense (like the natural forces nonsense in shoulder dystocia lawsuits that’s allowed everywhere except New York).

(In the criminal context, prosecutors complain about the “CSI Effect,” the claim that jurors today expect forensic evidence in every case, while criminal defense lawyers counter that the forensic evidence offered is often garbage and speculation from people with a diploma mill degree.)

As far as I can tell, it was mostly defense lawyers who took note of the Reference Manual publicly, and they took a starkly negative view of it. Nathan Schachtman says “there is a good deal of equivocation between encouraging judges to look at scientific validity, and discouraging them from any meaningful analysis by emphasizing inaccurate proxies for validity, such as conflicts of interest.” David Oliver has been on the warpath, claiming “the fix is in” and most recently criticizing the chapter, “How Science Works,” written by David Goodstein, Professor of Physics and Applied Physics at Caltech.

Oliver complains:

Avoiding any pretense of humility the Reference Manual dismisses as woefully naive and inadequate those claims about the essence of the scientific endeavor that were ingrained in us in school. … Unsurprisingly the Reference Manual, operating on the view that objectivity is an illusion, that you can never prove anything is false and that you can never prove anything is true (“the apparent asymmetry between falsification and verification that lies at the heart of Popper’s theory thus vanishes”) and thus without any track to follow, quickly careens into post-modernism. … So all the great thinkers were wrong. Objectivity is out. Testability is out. Keeping an open mind is out. Skepticism is right out. The appeal to authority is not a logical fallacy but fundamental to science.

I think Oliver has misunderstood the purpose of the chapter. 

“How Science Works,” one of the early chapters of the Manual, begins:

The purpose of this chapter is not to resolve the practical difficulties that judges will encounter in reaching those decisions; it is to demystify somewhat the business of science and to help judges understand the Daubert decision, at least as it appears to a scientist. In the hope of accomplishing these tasks, I take a mildly irreverent look at some formidable subjects.

Goodstein uses the word “irreverent,” but I think the more descriptive word is “realistic.” Goodstein’s primary purpose in writing the chapter, at least as I interpret it, was to dispel several myths about science that everybody, including judges, learns in high school.

Scientists, even those in the “hard” sciences that are based primarily on empirical observation and mathematical analysis, have their own dogmas, prejudices, incentives, and conventions. That’s of course not to say that science is bad or wrong or useless — the only reason you can read this on your computer is that thousands of scientists over the years came to exactly the right conclusions about electricity, metallurgy, chemistry, mathematics, quantum theory, and information theory — but just to admit the obvious, which is that scientists are people and science happens under many of the same constraints as every other social endeavor. As much as we’d like to trust scientists as objective experts whose assertions should be accepted ipse dixit (a phrase that dates back to Pythagoras and is today routinely used by lawyers trying to discredit their opponent’s expert), the truth is that courts shouldn’t be afraid to look at scientists as people and evaluate them accordingly.

I have no doubt that Goodstein would recommend that scientists in pursuit of some fundamental truth about nature hold themselves to the highest standard, and do everything in their power to put objective analysis ahead of subjective belief. At the same time, I can’t imagine anything more foolish and counterproductive a judge ruling on a pretrial motion could do than assume that everyone who calls himself a scientist is unbiased and incapable of making a mistake.

A judge hearing a Daubert or Frye motion who tries to figure out what the “correct” scientific answer is to the issue in the case has already committed a reversible error. That’s not their job. Their job is to make sure the jury isn’t going to hear pure baloney, not to pick one scientist’s opinion over another’s. As Justice Breyer writes in the Preface to the Manual:

The search is not a search for scientific precision. We cannot hope to investigate all the subtleties that characterize good scientific work. A judge is not a scientist, and a courtroom is not a scientific laboratory. But consider the remark made by the physicist Wolfgang Pauli. After a colleague asked whether a certain scientific paper was wrong, Pauli replied, “That paper isn’t even good enough to be wrong!” Our objective is to avoid legal decisions that reflect that paper’s so-called science. The law must seek decisions that fall within the boundaries of scientifically sound knowledge.

And to avoid legal decisions based on science that is “not even wrong,” courts need to recognize the reality of how scientific research is produced, which is some distance from the idealized vision of the scientific method. That’s all Goodstein was getting at.

  • YourLittleBrother

    You may wish to point out that the reference manual can be downloaded for free by anyone.

    My brief survey of its 1035 pages reveals no mention of Feynman. Every attorney and judge, every scientist too, should keep a copy of Cargo Cult Science on their smartphone.

    • Imagine a society in which everyone’s read Feynman. I link to a copy of the Manual in the beginning, but I suppose that’s not obvious.


      • velvetfish1

I can’t imagine it. I’ve tried to understand the mathematics and figures behind Feynman’s diagrams and I’m still incapable of doing it, and I’m sure I have far better math skills than most. I have even struggled to understand many issues in his two-volume basic physics textbook.

The sad truth is that humans, who prefer to accept their most cherished beliefs as facts, especially out of political and personal convenience, will accept sophism over science 99% of the time. Indeed our entire legal system is based on sophism rather than science. Most simply don’t even know the difference, and consequently we will all pay the ultimate price for our ignorance: human extinction. It’s coming to a planet near you.

        • I don’t agree that the legal system is based on sophism. Sure, lawyers and judges can sometimes distort perfectly reasonable rules into something that defies the spirit and purpose of the rule, but by and large the legal system makes sense and works reasonably well. Studies have shown that even in complicated cases, like medical malpractice actions, juries reach the same result that a panel of experts would reach.

          The real question, IMHO, is one of which rules we apply. We too frequently pretend that legislators and Supreme Court Justices are fairly and dispassionately evaluating technocratic rules. Nothing could be further from the truth. Constitutional law is nothing if not a series of policy choices; the reason you can be strip-searched after being picked up for jaywalking is because 5 Justices of the Supreme Court think that’s okay, not because of some sort of intrinsic meaning embedded within the Bill of Rights.

          Same goes for scientific evidence. When a court deems one party’s evidence junk and the other’s reliable science, sometimes that’s a product of good and sound evaluating of methodology, and sometimes it’s a simple policy decision on the part of the judge.

      • YourLittleBrother

Whoops, so you did. I visited the “published” link first, and from there it’s a bit confusing, as it asks for your email and who you are and so on before allowing you to download it.

        Actually, I wish I had the time to read it. The chapter titles themselves sound interesting.

One chapter not there is “Forensic Voice Identification,” which has made the news a little bit lately with the Zimmerman / Martin case. The news and many attorneys at various blogs got all caught up in the pronouncements of two “forensic voice identification” experts who claimed they could determine who was screaming. One expert with lots of expert witness experience said he had software that could do this.

        A couple of blogs, TalkLeft, and JustOneMinute (and probably others) were able to show these claims of “scream identification” were mostly just impossible woo.

        • “Voice identification” is a lot like the “enhance” tool for surveillance footage on CSI. Sure, there are detectable patterns so that you can indeed show one conclusion is more likely than another, but anyone who claims definitive proof based on those is kidding themselves.


  • Ezracolbert

60 years ago now, J. K. Galbraith wrote advice for a young bureaucrat. He noted that experts can always be wrong, and that an expert’s confidence or vehemence in his story is not a reliable indicator of how correct the expert is.
More recently, in A Civil Action, both of the expert witnesses are painted as, at best, fools who thought they had an easy payday.

  • ezracolbert

I started to read Oliver, and the best you can say is that he is a terrible writer; he really doesn’t know how to construct a coherent paragraph (and his blog sucks, technically — there is a broken comment form; I mean, in 2012, who has a broken comment form?)