
David Healy: Do randomized clinical trials add to or subtract from clinical knowledge?


Jean François Dreyfus’ response to David Healy’s reply


        The latest version of David Healy’s paper on the role of RCTs in medical knowledge prompted me to reexamine some avenues I had cursorily explored in my previous comment.

        Is psychiatry a science, an art or, as for instance architecture, a combination of both? 

        An art? I see some of my readers shrugging their shoulders. Believe me, I am certainly not one of those who idealize their youth. I do not wish to go back to a golden age in which every patient was considered a work of art belonging to a physician who had no obligation to communicate about them to the outside world and whom they paid to take care of their mental health. I do understand that there is a price to pay if one wants to offer psychiatric care to everyone in need of it. But does it imply that the domain should no longer be concerned with thoughts, human relationships and deciphering the consequences of changes in our society?

        In France nowadays, psychotherapy is a territory that belongs to psychologists. This results from the longing of psychiatrists, for decades, to be considered full-fledged physicians − on par with cardiologists or gastroenterologists, who participate in the progress of human health (care). Taxonomy was considered a mandatory first step in distancing psychiatry from a “healing art.” Various classification systems were proposed, mostly based on (apparent) symptomatic similarities. As these classifications were not geared to causal mechanisms, it is not surprising that fierce discussions took place, as there was no clear reason to choose one creed over another.

        Brain neurotransmitters at least provided a substrate for some categories. We moved further away from art: dopamine, serotonin and epinephrine were the keys for opening the safe. Despite some progress, we are still quite ignorant of the actual role of many transmitters and their interplay with brain structures, but we at least had technologies and tools that could match those of the other branches of medicine. In addition, we have only recently made sense of the role of genes, and we have little knowledge of how our environment may interact with them. Third-party payers encouraged this evolution towards “hard” facts, as it seemed to allow better control of health expenditures − and in our society, the one who pays rules the world.

        If RCTs cannot be considered a science-based solution, why did we witness their advent without fighting against them? As a corollary, is it true that giving up RCTs would improve the way we provide mental health care; in other words, is psychiatry a field whose specific issues cannot be accounted for in RCTs? And if that is the case, why do we still rely on them?

        Among many factors I chose three, but I would like to stress that it is their mingling that matters and that none of them should be considered as operating as a sole cause.

        Technocracy, capitalism and (psychiatrists’) pride are the first answers that came to my mind, probably reflecting my own personal bias. However, who is the major culprit? That depends greatly on whom the question is put to − whether they deal with politics, economics or medicine; even if we can agree on the diagnosis, we disagree on the primum movens, and that may be the reason for our inertia.

        As far as I can judge, health became a major political stake after WW2. Regulations were set to ensure that this commodity got all the protection it deserved. In order to allow a supposedly fair comparison between treatments, rules were defined and, in line with their success in agriculture and other life sciences, RCTs were considered the gold standard, at least as regards efficacy. Since RCTs are mainly concerned with means and variances, one had to accept a huge loss with regard to individual outcomes. Scales that pool symptom scores to calculate an average outcome for each patient were designed. They had to have good metrological properties, and in many cases this implied leaving one or two items aside, although these were initially considered significantly related to the condition to be evaluated.

        Safety assessment was an even more complicated issue: should a neutral question be used, such as “Did you notice any untoward effect on your health since this study began?”, which leads to an underrepresentation of “shameful”, too common or too mild adverse events, or should a standard inventory be used, with the risk of inducing patients to report AEs that would have passed unnoticed without such an instrument? Since no one dared conduct a study just to compare treatment safety, it was generally contended that treatments were being compared at a more or less equivalent degree of safety. Of course, if this was obviously not the case, it was reported. However, since the therapeutic index was large, adverse events were watered down, which explains why the safety section of clinical trials was frequently uninformative.

        Further, the “evidence-based” fashion pushed in the same direction: anecdotes that used to provide a faint but real pharmacovigilance signal were despised as “prone to subjective biases” even when they were more than properly documented. Some authors even asked whether it was ethical to burden efficacy assessment in order to show a difference in safety − and this despite the Hippocratic motto: ἀσκέειν, περὶ τὰ νουσήματα, δύο, ὠφελέειν, ἢ μὴ βλάπτειν − which, as everyone more or less knows, means Primum non nocere: first, do no harm.

        Our second explanatory factor was capitalism. The essence of this doctrine is the maximization of long- or short-term profits.

        In 1977, for the first (and last) time, the multinational company I was working for, at that time the leader in CNS agents, convened a meeting of all its medical executives. I remember the mix of applause and booing that resounded when the worldwide director for legal affairs ended his presentation. What had he said to cause such turmoil? He had hammered out that the company was now in a very competitive market and that, to maximize its profits, our affiliates were to prepare for a major change: they should no longer resort to costly mini-studies that pleased local key opinion leaders but made no sense to other, non-indigenous registration authorities; they were only to participate in major studies that would be designed by us, the headquarters, so as to be acceptable worldwide.

        Large-scale RCTs were to be designed with panels of worldwide leaders so as to be accepted everywhere. To do so, we had to use a common language, using the smallest common nosological denominator even if it meant losing a lot of human flavor. We also had to use internationally validated rating scales. I remember he used as a negative example a British QoL scale that considered normal a man who mowed his lawn on Saturday afternoon and said, “our Israeli friends would certainly not agree that a man living in a kibbutz in the middle of a desert area, mowing the few grass sprouts showing in the sand on a Sabbath afternoon, should be considered normal.” A few weeks later, I was summoned to set up an international advisory committee that would oversee every clinical trial to be set up henceforth.

        Let me be clear: this decision was taken not on scientific grounds but because it made sense from an economic point of view − not only in terms of direct costs but chiefly because it shortened the time needed to get a drug on the market and to capitalize on its profits.

        And what about our responsibility as practitioners? Were we completely insensitive to the charm of the nice-looking medical rep who showed us nice curves (not only hers) that could be readily interpreted, even if we had no idea who the expert signing the paper was, or about the intricate statistical procedure that produced such nice graphics? It looked so much more scientific than the comments on the vignettes of a few dozen clinical cases, described according to a local nosology − for instance the one used in Morita therapy. Well, the zero was not always visible on the graph, but did we always go beyond the initial assertion to investigate possible statistical nonsense?

        Should RCTs be avoided and, if yes, how are we going to proceed?

        I will only cursorily deal with this topic, as I consider my comment to be already too copious. Should we return to the situation where key opinion leaders gave their impression − frequently that of a junior resident, there being no volunteer for such an assignment? Would we accept that? Would authorities agree to it? Would company executives agree to take such a risk? I do not think so, although we now know that, in many cases, properly designed open studies may provide outcomes whose quality matches that of RCTs. But would we gain from such a paradigmatic change?

        I have to qualify what I would consider properly designed open studies: an unquestionable diagnosis, whatever label is put on a patient in a given culture; run-in periods of suitable duration to ensure proper wash-out and patient stabilization; a prestudy statement of the variables on which patients will be assessed; predefined rescue drugs and the situations in which they may be used; detailed scenarios for dosing schemes and treatment duration; the frequency of assessments; preestablished procedures to obtain a suitable assessment of safety; and so on.

        Now, let us be frank: the only physicians I have met who adhered to this kind of protocol were oncologists testing a new drug that could be used as a final recourse when everything else had failed. Are we likely to do so with a new antipsychotic drug for which we have limited efficacy data available? I may be wrong, but I believe not many of us would sign on to conduct such a study. I am not even sure it is feasible at all. However, when I did take responsibility for such studies in diabetes or ophthalmology, I was indeed convinced by their results. Nevertheless, I still believe that to erect a skyscraper you need good foundations. No one will see them, but without them the whole building will collapse. And probably the optimal way, time-wise and cost-wise, to obtain such foundations is to conduct properly designed RCTs.

        But there are some caveats: guidelines are not gospel but should be considered as aids; professionals who no longer see patients should not be given decisive arbitration power over how studies are designed and conducted; industry statisticians should not play cops and robbers with health authority statisticians; and specific studies should be conducted to obtain a fair view of safety. I am not sure that post-marketing studies are sufficient. For instance, the team I was recently advising discovered that, except for some neuroleptics and on a limited number of basic parameters, we had almost no data on the impact of CNS agents on androgenic hormones, although some drugs had been available for decades. By the way, if you have some results…


June 10, 2021