David Healy: Do randomized clinical trials add or subtract from clinical knowledge?

 

Ken Gillman’s Comment

 

        The fact that David Healy can make so many cogent criticisms of decades of clinical-trial work justifiably leads one to conclude that investigators have made a poor fist of this task, ever since RCTs were first introduced. I certainly agree with that proposition and David illuminates many of the contributing factors and influences, but at the end of the day industry has only done what doctors allowed it to do; it is doctors who must take responsibility for their considerable failings in the execution of their obligations to patients and clinical science.

        “Do [RCTs] add or subtract from clinical knowledge?” One might say that the RCT is such a small and inadequate tool of scientific investigation that the question itself is of marginal relevance.  The cognitive bias sometimes called Maslow’s hammer says, “To a man who only has a hammer, everything looks like a nail” (Maslow 1966).  There are many other tools in the scientific investigation box which have been left to gather dust and rust, ignored during the hegemonic reign of the RCT (and evidence-based medicine [EBM]).

        A commentary I wrote about trials dissects the deeply flawed STAR*D study (Gillman 2020), which spawned hundreds (>350!) of mostly superfluous papers; my commentary applies to RCTs generally. In a related commentary, I summarise the views of the numerous eminent researchers who recognise other inherent and fundamental flaws in EBM and RCTs (Gillman 2019).  EBM is the most egregious misnomer of the last few decades.

        The most remarkable thing about this whole saga — or would it be more appropriately classified as a circus, farce or tragedy? — is that it has taken so long for discussion to start about rectifying these problems.  Some might suppose that is a reflection on the perspicacity and rectitude of those in academic psychiatry.

        Judea Pearl, a Turing Award winner and pioneer of modern causation thinking, has reminded us that RCTs do not, and cannot, address causality — and science is nothing without causality:

        “Causality is the key: there is no way of doing science without causality, it is the sine qua non for all understanding and progress” (Greenland, Pearl and Robins 1999; Pearl, Glymour and Jewell 2016; Pearl 2019).

        A great proportion of academic psychiatry is concerned with RCTs, related matters, and the resultant plague of meta-analyses: ergo, there is not much real science going on in psychiatry.

        It is time to stop juggling with apples and oranges and do some serious clinical science.

 

References: 

Gillman PK. Stepped trials: Magnifying methodological muddles — the supernatant effect. PsychoTropical Commentaries, 2020.  https://psychotropical.com/stepped_trials_magnifying_methodological_muddles/.

Gillman PK. ECT: Scientific methodology gone wrong. PsychoTropical Commentaries, 2019. https://psychotropical.com/ect-scientific-methodology-gone-wrong/.

Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology 1999;10(1):37-48.

Maslow AH. The psychology of science: a reconnaissance. New York: Harper & Row; 1966.

Pearl J, Glymour M, Jewell NP. Causal inference in statistics: A primer. John Wiley & Sons; 2016.

Pearl J. On the Interpretation of do(x). Journal of Causal Inference 2019; 7(1). https://ftp.cs.ucla.edu/pub/stat_ser/r486.pdf.

 

June 3, 2021