In Episode 2 of the House of Satori podcast, I made the following (paraphrased) claims:

  1. The state of science is unwell;
  2. The majority of published papers are likely irreproducible, or just plain garbage.

I know of no other way to make the case for these propositions than by example, knowing full well that this is subject to the counterclaims that “these are just anecdotes” and/or that they are not representative of the whole of science. To somewhat blunt such criticism, I begin with examples of people who were at the pinnacle of some of the most prestigious scientific and medical journals and let their experiences begin the discussion. I finish my proof by offering a history lesson on the origins of these phenomena, which I believe assists in showing that the “how” of “how we got here” helps explain and buttress the conclusion that “here” is, scientifically speaking, a very bad place.

Ben Goldacre is a British doctor and author who wrote the wonderfully evocative “Bad Science” and its companion follow-up “Bad Pharma.” Ben makes a living speaking and writing about the vagaries of pseudoscience and does so in a wonderfully humorous and accessible way. Having seen him speak in person, I tend to think of him as a kind of Malcolm-Gladwell-meets-Michael-Lewis of junk #science. His examples are relevant, modern, funny (most of the time), pervasive, and cross cultural boundaries. His chapter in “Bad Science” entitled “The Doctor Will Sue You Now” should be read by all high school students as an introduction to science: it shows these future adults how governments, physicians, and all of the supposed checks and balances of peer review not only fail to prevent, but in actuality help enable, the kind of fraud peddled by Dr. Matthias Rath – who claimed he could cure AIDS with his multivitamins and managed to get the support of the South African government for a completely unethical clinical trial on human beings.

Goldacre’s book begins with an offhand remark that I believe merits consideration on this notion of the poor state of science.

“The hole in our culture is gaping: evidence-based medicine, the ultimate applied science, contains some of the cleverest ideas from the past two centuries; it has saved millions of lives, but there has never once been a single exhibit on the subject in London’s Science Museum.”

Bad Science, Preface, p. x (emphasis added).

If Goldacre is correct that medicine is truly an “applied science,” then I feel confident that my claim that science is a mess can be amply proven, because “The Mess” that is modern medicine can be considered an archetype for all of the ills of science. Consider that the greatest killer of human beings in the United States right now is chronic disease: that is to say, far and away, more people die every year in the United States as a result of repeated, entirely optional, bad behaviors than from any other single disease or causal factor. We also spend about $0.86 of every healthcare dollar on the various chronic diseases.

As just one salient example: when I was the general counsel for CrossFit, Inc., and we were approaching 7,000 gyms in the United States, we got curious to know what other ‘chains’ were growing as fast. Starbucks and Subway certainly had more locations, but they had started well before us and were no longer opening stores as quickly. After some web searching, we found the only business opening as many new locations was DaVita – kidney dialysis centers – in which Berkshire Hathaway is a major shareholder. Diabetes and its associated disease states are a massive drain on the healthcare budget. Then think about adding in coronary artery disease (Thanks, govt nutrition guidelines!) and its associated problems, most strokes (Thanks, doctors who advocated smoking!), etc. Yet these are diseases of advanced civilization.

We continue to pat ourselves on the back at our advanced #science(!) while we kill ourselves with lifestyle behaviors at a rate that approaches the death chambers at Auschwitz. And speaking of which, one would think that the entirety of WW2 and its aftermath – with the use and application of science to produce more efficient ways of killing – should give us great pause to consider whether our science (with or without the hashtag) might need some recalibration. Yet there is no post-war period of philosophical introspection about science at all to which one can point. FN1

Richard Smith began his career as a physician in Great Britain and finished it as the editor of the “prestigious” BMJ (previously known as the British Medical Journal) from 1991 to 2004. He was also head of the BMJ publishing group and worked at BMJ for a total of 25 years, beginning in 1979. Here is his take from 2006 on some ideas for reform, or even abandonment, of the peer review process in medical and scientific journals. Ten years later, however, in a lecture to the International Journal of Epidemiology, he was singing a slightly different tune: blow the entire system up. One might argue that this is simply a case of one man’s bitterness, but Marcia Angell – the former editor-in-chief of the New England Journal of Medicine, arguably the most influential medical journal on the planet – had very similar things to say after her time in academic publishing. She was disenchanted enough to write a book entitled “The Truth About the Drug Companies: How They Deceive Us and What to Do About It.” Her conclusion, writing in 2009?

“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.”

Dr. Marcia Angell, 2009

A great place to really get one’s fill of “bad science” is Retraction Watch. Retraction Watch, as the name implies, began as a simple blog about scientific papers that were subsequently retracted, in part because – just like other publishers, especially newspapers and magazines – science publishers aren’t particularly keen on reporting that something they previously published was completely wrong. More importantly, in addition to burying subsequent retractions, no one is charged with connecting a paper’s retraction to all of the subsequent papers and research that relied upon the conclusions of the original faulty paper. Entire fields of study have been invalidated by faked papers and research, except that no one ever goes back and actually wipes those fields out: they just keep right on chugging.

John Ioannidis wrote a paper attempting to explain mathematically why “Most Published Research Findings Are False.” The sidebar to that article is worth following as Ioannidis responded to the firestorm that his paper generated.

My own personal favorite is based upon my experience as a criminal defense attorney with government-run forensic laboratories. While this may bum out the viewers who keep the “CSI” franchise and its sponsors in business, I invite anyone to go to a search engine and type in the name of a state agency’s lab – the ones that handle forensic analysis of evidence in criminal trials – and the word “scandal” after it, and see what results return. Massachusetts, for example, is still dealing with the fallout from their drug-using forensic chemists. But Massachusetts is hardly unique. (Yes, those are all separate links.) Indeed, one could argue that government attempts to use #science to put people in cages are the perfect jumping-off point for explaining how we got “here” with bad science, which brings me to taking the lawyers to task – because boy, do lawyers suck at science… and this goes well beyond the long-standing joke that a law student is just a med-school student who couldn’t pass organic chem.

The Beginning of the Downhill Run

More on the lawyers in a minute, but first: if we’re going to look at causes of post-modern science, it begins, ironically enough, with one of the greatest pieces of pure science ever done – the 1905 publication of the Theory of Special Relativity by a brilliant Swiss patent clerk named Albert Einstein. Later that same year, in a follow-up paper, Einstein altered the course of world history with E=mc^2. The cultural change in science didn’t occur for years afterwards, of course. Einstein’s predictions about the nature of light and how it would bend around the sun as a consequence of gravity – made a decade later in his General Theory – weren’t even tested until the famous eclipse of May 29, 1919, but it is Einstein who was both the pinnacle of modern science and the harbinger of post-modern science.
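For concreteness, the prediction the 1919 expedition went looking for can be written down in one line. This is the standard textbook deflection result, not something drawn from Goldacre or Stove:

```latex
% Deflection angle of starlight passing a mass M at closest approach b.
% General relativity predicts twice the value of a naive Newtonian
% (corpuscular) calculation; the 1919 eclipse measurements favored
% the larger, relativistic value.
\theta_{\text{GR}} = \frac{4GM}{c^{2}b} \approx 1.75'' ,
\qquad
\theta_{\text{Newtonian}} = \frac{2GM}{c^{2}b} \approx 0.87''
\quad \text{(for light grazing the Sun, } b \approx R_{\odot}\text{)} .
```

It was precisely this factor of two – Einstein’s 1.75 arcseconds against the 0.87 the older physics allowed – that the eclipse measurements were designed to discriminate between.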

The eclipse was not, of course, the *single* causative event of its era that led to the diminution of real science. In point of fact, it may have been only the second, or third, or fourth most important event, but it was undoubtedly part of a series of events between the World Wars that changed the course of science. FN2. The inter-bellum period of the early twentieth century saw radical changes in the prevailing models of economics, politics, public policy, and even of how the universe worked. In the ‘science’ of economics, Marxism rose to ascendance – alongside the interventionist program of John Maynard Keynes – in the same period that eugenics was a very real policy of multiple states in the U.S., born out of the (mis)application of Darwin’s theories on evolution to social structures. The same pattern of cultural appropriation held for Einstein and relativity.

It is hard to fully appreciate now the impact Einstein’s general theory of relativity – and its “proof” by measuring the position of stars only visible during a total eclipse – had on the underlying faith in science and a wide swath of popular culture. Encyclopedia Britannica notes:

The ideas of relativity were widely applied—and misapplied—soon after their advent. Some thinkers interpreted the theory as meaning simply that all things are relative, and they employed this concept in arenas distant from physics. The Spanish humanist philosopher and essayist José Ortega y Gasset, for instance, wrote in The Modern Theme (1923),

‘The theory of Einstein is a marvelous proof of the harmonious multiplicity of all possible points of view. If the idea is extended to morals and aesthetics, we shall come to experience history and life in a new way.’

The revolutionary aspect of Einstein’s thought was also seized upon, as by the American art critic Thomas Craven, who in 1921 compared the break between classical and modern art to the break between Newtonian and Einsteinian ideas about space and time.

Encyclopedia Britannica, “Relativity” entry, accessed 7/8/2019.

Einstein hadn’t simply upended Newton, however. Relativity also had consequences for even more ancient and revered studies, such as that staple of high-school mathematics, Euclidean geometry. Of course, Lobachevsky had already shown the possibility of alternate geometrical systems in which Euclid’s parallel postulate fails as early as 1830, but Einstein’s claim wasn’t merely that alternative geometries were possible: the eclipse of 1919 proved that, in fact, the Universe – our Universe – was decidedly non-Euclidean. FN3. So there went a two-thousand-year-old pillar of mathematics, as well.

David Stove of the University of Sydney makes a compelling case for where the philosophy of science in the western world went awry in his book Popper and After: Four Modern Irrationalists, Pergamon Press, 1982. He also points to the disruption that Einstein’s relativity wrought upon the scientific world.

The crucial event was that one which for almost two hundred years had been felt to be impossible, but which nevertheless took place near the start of this century: the fall of the Newtonian empire in physics. This catastrophe, and the period of extreme turbulence in physics which it inaugurated, changed the entire history of the philosophy of science. Almost all philosophers of the 18th and 19th centuries, it was now clear, had enormously exaggerated the certainty and extent of scientific knowledge.

D.C. Stove, “Popper and After,” p. 51.

I believe it necessary to add that there were certainly other mathematicians, like Omar Khayyam as far back as 1077, who tried to prove the 5th postulate of Euclidean geometry and expressed difficulty, essentially identifying its greatest flaw. Giovanni Saccheri seems to have stumbled onto non-Euclidean geometry in a work published in 1733, although he himself rejected the conclusions and his work remained obscure until the 19th century. The same is true of some Middle Eastern mathematicians, like Ibn al-Haytham. Regardless, none of those works had anything like the popular impact that Einstein’s did, so I suppose one can add the power of the Gutenberg press to the list of confounding factors on the journey from modern science to post-modern science.

Stove takes only 100 pages to fully identify and explicate the source of irrationalism in science, beginning with Karl Popper and Popper’s fellow scientific irrationalists, and then tracing it back to Hume’s extreme inductive skepticism. Stove goes so far as to detail the flaw in inductive skepticism by use of symbolic logic. Stove contends that Hume’s belief that one could draw no conclusions at all from repeated observations in the past about the future was revived by Karl Popper in the aftermath of the “fall” of the Newtonian view of the universe. FN3.

In this dependence on Hume, Popper is only an extreme case of a general condition. For the influence of Hume on 20th-century philosophy of science in general is so great that it is scarcely possible to exaggerate it. He looms like a colossus over both of the main tendencies in philosophy of science in the present century: the logical positivist one, and the irrationalist one. His empiricism, his insistence on the fallibility of induction, and on the thesis which flows from those two, of the permanent possibility of the falsity of any scientific theory, are fundamental planks in the platform of both of these schools of thought.

Id. p. 50.

The academy could hardly lay claim to being as powerful as it is in moving the cultural needle were it not for one arm of it – the law schools – and their itinerant armies of arguers, the lawyers who become judges. It gives me great pleasure to expose my own profession’s failings at the highest levels, when the Supreme Court decided to weigh in on science. You and your friends may get drunk and do and say stupid things, but for real idiocy, you need nine lawyers in black robes to show you how it’s done. Bad science in American culture got its entrenchment in the courts.

The “Generally Accepted” Test

In Frye v. United States, 293 F. 1023 (D.C. Cir. 1923), a man was convicted of second-degree murder and appealed his conviction. The basis for his appeal was the trial court’s decision to exclude the results from what amounted to an early form of the lie detector, which the defendant had ‘passed’ and wanted to submit to the jury via expert testimony. The judge did not allow the evidence and on appeal the D.C. Circuit upheld the court’s ruling. FN4.

Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.

We think the systolic blood pressure deception test has not yet gained such standing and scientific recognition among physiological and psychological authorities as would justify the courts in admitting expert testimony deduced from the discovery, development, and experiments thus far made.

Id., at 1023 (emphasis added).

This case and its announced legal standard held sway in courts around the nation for almost seven decades, surviving even the court reforms of the 1950s and the adoption of the Federal Rules of Evidence in 1975. What is important about the decision, however, is that it gave a legal definition to “science” in courtrooms across the United States – or at least, that is what subsequent courts did with it. The so-called “Frye standard” consulted no scientists, much less scientists steeped in the philosophy of science. The standard did nothing more than reify a ‘consensus’ of opinion about some particular technology at a given moment – in this case, the so-called ‘lie detector’ – as the standard of admissibility for ‘scientific evidence’ in federal courts. It is a fundamental misapprehension of what makes something “science.”

By the time we get to the Supreme Court finally updating the “Frye standard” to address what can qualify as ‘scientific knowledge’ for admissibility, in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), the battle had already been lost, as evidenced by Chief Justice Rehnquist’s mewling lament of a concurrence:

I defer to no one in my confidence in federal judges; but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its “falsifiability,” and I suspect some of them will be, too.

I do not doubt that Rule 702 confides to the judge some gatekeeping responsibility in deciding questions of the admissibility of proffered expert testimony. But I do not think it imposes on them either the obligation or the authority to become amateur scientists in order to perform that role.

Daubert, at 600-01. (emphasis and bold added).

Sacré bleu! Why, the very idea that a learned man or woman – and jurist – should know what science is!

Anthropogenic (human-caused) Global Warming being taught in schools as “science” is simply the culmination of a journey that begins with Karl Popper and his intellectual heirs – Kuhn, Lakatos, Feyerabend – coming to dominate the philosophy of science. Some seventy years after “general acceptance” in Frye, the Supreme Court is now citing a self-admitted irrationalist – one who did not believe it was possible for human knowledge to be cumulative and to advance – for the definition of what “scientific knowledge” is in courtrooms across the US. The opinion itself discusses all manner of non-science as being characteristic of science, or should I say #science, including “peer review” and “consensus.”

And that, my friends, is how we went from a society bursting with innovation and the understanding that science was an attempt to model underlying universal truths, to a level of specialization matched only by the insects, to our schools hammering our children with post-modernist ideas about what makes something #science. But everybody totally F***KING LOVES #SCIENCE!

FN 1 – There is no post-WW2 ‘scientific reformation,’ for example, with a commitment to use the awesome power of the atom to provide energy for all – there are instead only more bombs that can reach farther faster. This doesn’t even begin to address the science used to make chemical and biological weapons.

FN 2 – In order to avoid claims of ripping off other people’s ideas, I want to be clear that I am hardly the first person to point to the era between the First and Second World Wars as being historically significant for the changes that were wrought in this country – and across the world – most specifically in the outcome of the clash of the ideas of the day. The influence of Progressivism and the eugenics movements, the 1917 Bolshevik “October Revolution” and what followed, and Marxism’s influence on Russia and its subsequent Lysenkoism all show that a whole slew of events combined to destroy faith in long-standing cultural institutions across seemingly diverse fields of human action.

FN 3 – To be clear, Einstein’s theories did not in any way lessen the utility or power of Newton’s Laws; they only shortened their reach. But Einstein’s ideas about space-time and how gravity would bend light were a departure from Newton’s corpuscular model of light, as well as from what had been the prevailing notion of space as essentially “inert.” Regardless of subsequent interpretations, for many people and pundits in that era, the boundaries of what could be known with certainty seemed to have shrunk.

FN 4 – You may be saying, “Hey, wait a minute, are you saying lie detectors are good science??” and the answer would be an emphatic “No.” It’s not the outcome that the court got wrong, it’s the reasoning. The Supreme Court often announces its decisions as resting upon some underlying, transcendent, or universal principle, and sometimes that may even be the case. The problem is that when the Court gets it wrong, the faulty rationale becomes entrenched as the “law” of the United States. There are many, many examples of this beyond just Dred Scott v. Sandford, in 1857, which came to stand for the proposition that Blacks were property.
