David Healy: Do randomized clinical trials add or subtract from clinical knowledge 

 

David Healy’s reply to Charles Beasley’s comments (Parts 1 & 2)

The Artifice of Medicine

 

 

       Dr. Beasley’s comments, which are four times as long as my original, lay out his views on RCTs rather than comment on mine.

Sir Hill

       Dr. Beasley takes issue with my characterization of Tony Hill’s Heberden Oration as indicating that a dependence on RCTs as the only way to assess therapeutic efficacy is mad. He says he can’t find the word “mad” in the published article.

       The entire seven pages of the Heberden Oration are available on the Shipwreck of the Singular site. Dr. Beasley says he has gone to the published paper and found this:

       “Any belief that the controlled trial is the only way would mean not that the pendulum had swung too far but that it had come right off its hook. We need not argue, therefore, over the semantics of observation and experiment. What we can more profitably reflect upon is whether the modern controlled trial is a useful adjunct to therapeutics, whether it asks the right question or questions, whether there is any way in to-day’s more sophisticated and computerized setting by which it could be appreciably improved?”

       Reading just this might support his point, but earlier in the lecture Hill states:

 

       “The history of science, however, shows that frequently with a new discovery, a new technique, or a new theory of disease, the pendulum at first swings too far. Has this been so with the controlled trial?”

 

       And just before the piece Dr. Beasley quotes, Hill says:

 

       “Given the right attitude of mind, there is more than one way in which we can study therapeutic efficacy.”

 

       This is followed by the passage Dr. Beasley quotes:

 

       “Any belief that the controlled trial is the only way would mean not that the pendulum had swung too far but that it had come right off its hook. Et seq.”


       And then, Hill states:

       “Far from weakening the need for the skilled observer, the controlled trial should increase the demands.”

 

       The only way for readers to decide whether Dr. Beasley or I is closer to the truth about what Hill intended is to read the whole lecture.

       It is, however, appealing to see Dr. Beasley twist himself around the issue of how to refer to Tony Hill. He ends up referring mostly to Sir Hill. I have sympathy with this, figuring that if anyone uses this ridiculous Sir stuff, they should substitute Sir for Mr. and move forward on that basis. But Dr. Beasley’s usage sounds more confused than that. Hill is referred to at one point as Sir Anthony Hill – which he wasn’t.

       Tony Hill seems to have been a decent, relatively unpretentious guy, deserving of some recognition, in contrast to most English knights, who sycophantically work for recognition, as befits the cream of English society – rich and thick, as someone once described them.

       Why anyone outside England would ever use this honorific is a mystery to me. I would respect Dr. Beasley if he were trying to ridicule the system, but I don’t think he was.

       Dr. Beasley mentions a 1991 edition of Hill’s Principles of Medical Statistics. The early editions of this book, through to the 1950s, were barely noted by anyone and even after that never achieved much currency. In 1991, Hill was 94 years old and almost certainly not revising his book. In 1984, he wrote to Louis Lasagna mentioning that his memory was fading and that he couldn’t remember where the idea of randomization came from, but, checking earlier editions of his book, he thought it must have loosely come from Fisher.

       In the 1965 lecture, Hill mentions that the folk going around most assiduously pushing controlled trials are industry salespeople. This was a surprise to him in 1965 and would likely surprise most people now, but the same is true of Evidence-Based Medicine (EBM) today – industry encourages doctors to practice in accordance with the evidence. This should give the lie to what controlled trials have become. Industry does almost all the trials because this is the hoop through which companies have to jump to get their drugs on the market and to secure market niches.

       In contrast to pharmaceuticals, few of which save lives – in line with Goldman Sachs’ recent statement that saving lives is not a good pharmaceutical business model – surgeons have made advances in saving lives, and they rarely do RCTs, even though the specific effects of surgical techniques make them better suited to an RCT than the scattergun effects of drugs like the SSRIs, which act on every bodily system. Surgeons don’t need RCTs for approval or to secure markets – saving lives does that.

Company RCTs 1

       My original article was aimed at independent RCTs – not industry RCTs. RCT believers figure that everything would be okay if industry were just kept away from running RCTs. My point was that this belief is misguided – the idea that an operational procedure, stripped of human judgement calls, will produce valid knowledge is plain wrong, and it opens the door to industry. The mantra that RCTs offer gold standard evidence leads guideline-forming bodies like NICE (National Institute for Health and Care Excellence) and the Cochrane Collaboration to put more weight on industry trials than they should.

       Dr. Beasley’s response, however, introduces company RCTs and suicide into the frame and I will respond to these points.

       He knows, or can be faulted for not knowing, that healthy volunteers have become suicidal on several different SSRIs and that there have been healthy volunteer suicides on Lilly drugs. He can easily access documents showing that, in 1982, Pfizer made clear that Zoloft caused, and that other SSRIs were well known to cause, reactions like this in healthy volunteers.

       No underlying depression is being released by treatment here – the explanation Dr. Beasley offers for suicidal events in Lilly’s clinical trials, when he is not arguing that we simply have no way of knowing what led to a suicide.

       He seems to be saying, as Lilly did in their pediatric Prozac trials, that suicide is a sign the drug is working. These drugs cause a condition variously called akathisia, agitation, restlessness, hyperkinesis or hyperactivity. In the pediatric trials, Lilly staff argued this hyperactivity was a good thing and denied a link to suicide, even though psychopharmacology is built on the observation that reserpine-induced akathisia causes suicide.

       Prior to the development of the SSRIs, there had been 15 randomized trials of tricyclic and related antidepressants in children and adolescents, all negative. An initial trial of fluoxetine was also negative. None of these were high quality trials, but there was hope that a well-done trial might demonstrate a benefit.

       Lilly did two pediatric Prozac trials, published as Emslie, Rush, Weinberg et al. 1997 and Emslie, Heiligenstein, Wagner et al. 2002. As the first trial was finishing, FDA introduced a six-month patent extension for submitting studies on children, even if the studies were negative. Lilly was facing the expiry of its patent on Prozac, and a second Emslie trial was run. These two trials were published as roaringly positive. In 2002, FDA licensed claims that Prozac could be antidepressant in children, as did other regulators, even though the FDA reviewer said these trials were negative on their primary endpoint.

       Later in 2002, after licensing Prozac, FDA issued an approvable letter for paroxetine in the treatment of pediatric depression. FDA’s letter to GlaxoSmithKline (GSK) agreed with GSK that all three trials submitted (protocols 329, 377 and 701) were negative as regards efficacy (letter available on study329.org). FDA also noted: “Given the fact that negative trials are frequently seen, even for antidepressant drugs that we know are effective, we agree that it would not be useful to describe these negative trials in labelling.”

       Why on earth would FDA do this? 

       Consider this. In 2008, Erick Turner and colleagues noted that 31% of the adult trials of SSRIs and related antidepressants, done as part of licensing applications and viewed by FDA as negative or questionable, were published as positive by companies, and that the effect size in the published articles was 32% higher than in the FDA reviews (Turner, Matthews, Linardatos et al. 2008).

       FDA made no comment about these mismatches, just as it made no comment about the fact that the two Emslie Prozac publications published as positive were in FDA’s view negative. FDA explicitly agreed in print to make no comment about the publication of Study 329.

       An internal GSK document that emerged in 2004 revealed that GSK knew paroxetine in Study 329 was ineffective but that publishing this result would be commercially unacceptable. So the 2001 publication of Study 329 (Keller, Ryan, Strober et al. 2001), which was similar to the Emslie studies in terms of safety and efficacy, contained the “good bits of the study” (document available on study329.org).

       Based on this document, New York State’s Attorney General lodged a fraud action against GSK. The settlement of this lawsuit made it possible to access the study 329 data and demonstrate paroxetine’s lack of efficacy and a doubling of suicidal events compared with the original publication (Le Noury, Nardo, Healy et al 2015; Healy, Le Noury, Wood 2020).

       This document added to a growing crisis about the suicide risk of antidepressants in this age group. FDA convened a Psychopharmacologic Drugs Advisory Committee meeting in February 2004, at which FDA claimed that none of the drugs (bar Prozac) demonstrated efficacy. At a follow-up hearing, FDA accepted the need for a Black Box Warning, primarily because of the accepted lack of efficacy of antidepressants in this age group. The licensing of paroxetine and other drugs was aborted, but the approval of Prozac was not rolled back. Most regulators, apart from the Australian one, which did not approve Prozac in minors, continue to claim that Prozac is effective in this age group.

       New York’s fraud action opens a strange prospect. It seems possible that, had FDA stated Study 329 was negative, it might have opened GSK up to a fraud action and a large settlement fine – as eventually resulted. GSK and all companies are almost certain to have been aware of this when in discussions with FDA. If fraudulent medical literature limits a regulator’s freedom of movement, we have a strange state of affairs.

       The approval of Prozac for depression in children and adolescents, and the publication of many ghost-written articles since claiming efficacy and safety for SSRIs, swept away a clinical consensus that children did not get melancholic and that support would manage their “distress.” The pediatric usage of antidepressants has rocketed since – they are now the second most commonly taken class of drugs among adolescent girls – even though they are ineffective and increase the risk of suicide, miscarriages, birth defects and sexual dysfunction.

       This is a public health emergency. FDA’s willingness to license Lilly’s claims that Prozac was antidepressant in pediatric populations was a key step in the evolution of this situation. The divide between what the academic literature on these drugs said before the crisis – and still says – and what the trial data show is one of the greatest divides in any branch of science since Lysenko.

       While universally negative for “depression,” the pediatric trials show SSRIs have an anxiolytic effect and there are rating scale benefits in anxiety and OCD trials that don’t require companies to twist their data to the extent they do in their depression trials. 

       This anxiolytic rather than antidepressant profile fits with the fact that SSRIs are ineffective for melancholia in all age groups. These drugs, useless for “proper” depression, became “antidepressants” in part to skirt clinical concerns that any new anxiolytic would necessarily produce dependence, as the benzodiazepines had.

       It also fits with Arvid Carlsson’s observation that serotonin reuptake-inhibiting tricyclics are serenic compared with tricyclics that are not – the observation that led to the development of the SSRIs. Dr. Beasley, though, is not in a good position to accept that we have SSRIs because of clinical observations like this.

Company RCTs 2

       Dr. Beasley has more of an issue with observation than biology.

       One of the Prozac adult trials included a patient entered by Jonathan Cole, who was of the view that Prozac made the patient suicidal. In 1990, Teicher, Glod and Cole published on six cases with challenge, dechallenge and some rechallenge evidence pointing to a link between Prozac and suicide. In 1991, a Beasley, Dornseif, Bosomworth et al. article sought to counteract this claim with a meta-analysis of company placebo-controlled trials.

       For the 1994 Fentress et al. v Shea Communications and Eli Lilly and Company trial, Dr. Beasley was deposed, and the following questions from P. Smith (PS), lawyer for the plaintiffs, and answers from Dr. Beasley (CB) are recorded.* [These Q and As are not continuous, but I am happy to send the deposition to anyone interested to check whether there is a misrepresentation here.]

 

PS: You have not asked any investigator that you can recall any particular request about any particular suicide attempt on this latest long-term efficacy trial on Prozac and depressed individuals?

CB: I have indicated that I could not recall doing so. 

PS: Don't you think that would be something that you would recall since this has been something that you've written about extensively, Doctor Beasley?

CB: I'm sorry, but what I can say to you is I don't recall.  I'm not sure that I -- I have no opinion on whether I would or would not recall. 

PS: Well, have you had any difficulty recollecting facts in the past?

CB: No. 

PS: Do you have any problem with your memory for which you're seeking medical care?

 

PS: Did you know at the time, or do you know now, Doctor Beasley, that Doctor Cole had -- that one of the patients mentioned in Doctor Teicher's article was a patient of Doctor Cole that was being treated on a Lilly Prozac trial?


PS: If a physician who has been treating patients, a number of patients, on Prozac, reports to you I've had a patient who I believe committed suicide as a result of their taking Prozac, are you going to say he's wrong because from a statistical standpoint you have not come to a conclusion that there's a relationship between suicide and Prozac?

CB: I'm going to say that I personally do not believe he's correct. 

PS: So, you're saying in your opinion that psychiatrist is wrong, is that right?

CB: That's correct. 

PS: He's made a mistake, is that right?

CB: If the data and the statistics are sufficient and robust, the answer is yes.

PS: So, in that instance, you are deferring to statistics over and above that opinion of a treating physician?

CB: I have taken the physician's report to me into account, I place greater faith in controlled data to suggest whether or not there is an association between Prozac and any event, or any other compound for that matter, associated with a particular behavior.


PS: By virtue that you believe that based on your analysis, that there are individuals who truly do become more agitated and more activated on Fluoxetine than on placebo?

CB: There are more individuals who report or are reported to experience these phenomena. 

PS: All right.  And do you have any doubt, then, that based on statistical analysis, that Fluoxetine is causing or more likely to cause agitation in some individuals?

CB: My position is that the statistical association exists.  I don't know what causes it. 

PS: All right.  Do you think the answer may be in the fact that Fluoxetine has an effect on serotonin and that serotonin plays some role in this human response?

CB: I don't know if it does or doesn't. 

PS: What about Fluoxetine is it that's making these people more activated and agitated?

CB: Well, again, your question to me -- you use the word making them, I'm presuming that that's causal. 

PS: Uh-huh.

CB: What I have, again, is a statistical association.  I'm not certain what's causing that.

CB: I'm sorry that I disagree with you.  At the time, we were looking for data relevant to large data bases. Our feeling is that an individual investigator basically can't read the mind of his patient or really definitely know what are the etiologic contributions.  We clearly had the issue raised, and we wanted to collect as much objective data as possible, and that was the intent of the exercise.  We were certainly willing to make note of any information that the investigator provided.

PS: Other than what the investigator's opinion was?

 

       The Beasley, Dornseif, Bosomworth et al. 1991 BMJ paper claimed Lilly’s trials showed there was no suicide risk on Prozac, and that – based on a reduction in the suicide item scores – it in fact reduced the risk of suicide. In 2004, FDA in print panned the argument put forward in the paper as regards suicide reduction, although senior FDA figures and Drs. Beasley and Gibbons resurrected it a few years later as part of an effort to roll back the Black Box Warnings.

       In the 1991 paper, there was a statistically significant increase in suicidal events on Prozac compared to placebo in the randomized phase of the trials, but close to all readers appear to have been fooled by a sleight of hand. A BMJ review of the paper stated that the data did not exonerate Prozac, but Richard Smith, then editor of the BMJ, over-rode this. The reviewer’s concern was that there was a clear increase in suicidal events on Prozac and that Lilly were depending on the fact that it was not statistically significant in order to say there was no issue. The reviewer and others do not appear to have noted that the company included an event from the washout phase of one of their trials in the overall placebo group. Omit this breach of FDA regulations, to which FDA turned a blind eye, and the data show a statistically significant excess of suicidal events on Prozac compared to placebo.

       Other SSRI companies followed this maneuver and variants of it can be seen in the current Covid vaccine trials.
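       To see how much can turn on a single event, here is a minimal sketch of the kind of maneuver described above, using invented counts rather than Lilly’s actual figures: a Fisher’s exact test run on the randomized phase alone, and then again after one washout-phase event (and its patient) has been folded into the placebo arm.

```python
# Illustrative only: made-up counts, not data from any actual trial.
from scipy.stats import fisher_exact

drug_events, drug_n = 14, 2000       # hypothetical suicidal events / patients on drug
placebo_events, placebo_n = 0, 700   # hypothetical suicidal events / patients on placebo

# Randomized phase only: 2x2 table of [events, non-events] for each arm.
randomized = [[drug_events, drug_n - drug_events],
              [placebo_events, placebo_n - placebo_events]]
_, p_randomized = fisher_exact(randomized)

# Same comparison after one washout-phase event (and its patient) is counted as placebo.
with_washout = [[drug_events, drug_n - drug_events],
                [placebo_events + 1, placebo_n - placebo_events]]
_, p_washout = fisher_exact(with_washout)

print(f"Randomized phase only:        p = {p_randomized:.3f}")  # below 0.05 with these counts
print(f"Washout event under placebo:  p = {p_washout:.3f}")     # above 0.05 with these counts
```

       With counts like these, the difference between “statistically significant excess of suicidal events” and “no issue” is a single patient booked to the wrong column.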

Company Trials 3

       “In Part 1, I agreed with Dr. Healy that RCTs could supply inaccurate information.  Inaccurate information would certainly not foster improved patient care and outcomes, and at worst, might lead to patient harm.  I introduced the term ‘proper RCT’ to describe an RCT that supplies accurate information generalizable to the complete set of patients who might receive the treatment studied in an RCT” (Beasley 2021).

       In the two Prozac pediatric trials, statistical testing was done to an extreme. There are 5,910 significance tests with P-values in the two study reports combined. For efficacy outcomes, 39% were significant in favor of fluoxetine, which helped conceal the lack of significance on the primary endpoints. For safety, if fluoxetine had been as harmless as placebo, 229 (5%) of 4,575 tests for adverse events would have been statistically significant by chance. But there were only 174 (4%). Many tests were run on events that occurred in only one or two patients.
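       The arithmetic behind these numbers is easy to check. A minimal sketch, using the counts quoted above and treating each adverse-event test as an independent toss at the 5% level (an oversimplification, since the tests overlap, but it shows the scale of the multiple-testing problem):

```python
# Multiple-testing arithmetic for the adverse-event comparisons quoted above.
from scipy.stats import binom

n_safety_tests = 4575        # adverse-event significance tests across the two study reports
alpha = 0.05                 # conventional threshold for "statistical significance"
observed_significant = 174   # adverse-event tests that actually reached significance

expected_by_chance = alpha * n_safety_tests
print(f"Expected significant by chance alone: {expected_by_chance:.0f}")
print(f"Observed significant:                 {observed_significant}")

# Under the (oversimplified) independence assumption, how surprising is 174 or fewer?
print(f"P(<= 174 significant | chance alone): {binom.cdf(observed_significant, n_safety_tests, alpha):.4f}")
```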

       My view is that even independent RCTs produce a permanently positive risk-benefit ratio – a benefit is being studied intensively, while hazards are not. A positive risk-benefit ratio necessarily follows. RCTs by their very nature, even without company embellishments, are a gold standard way to hide adverse drug reactions.

       For companies and regulators, the risk-benefit balance remains permanently positive unless someone conducts an RCT for an adverse event – Lilly drew up but never ran a trial for Prozac and suicide – or perhaps a jury of plain folk says this is nuts and returns a verdict against the company.

       My view is that because the 100 other effects a drug has – some more common than the primary effect a company wants to make money out of – are not the focus of the investigation, these effects are collected relatively poorly, if only because of time constraints, and this makes statistical significance tests inappropriate for anything other than the primary outcome.

       The company approach is to run significance tests on events that have been collected almost by chance and then declare their compound free of hazards. The a priori null hypothesis for companies is that adverse events are not adverse drug reactions (ADRs), as Dr. Beasley explains. It is only if they become statistically significant that they count as ADRs.

       Ian Hudson, originally from GSK and later head of the British regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), put this clearly when he said under oath in the 2001 Tobin vs SmithKline Beecham trial that it might look like a drug has caused a problem in an RCT, but unless the trial shows to a statistically significant extent that the event happens more often on the drug than on placebo, these are just the appearances of an ADR. Taking this approach, senior GSK people have unashamedly said there are no adverse drug reactions on paroxetine. Some of the same people, though, told investigators like me not to ask patients about the sexual effects of paroxetine – something that happened to a statistically significant extent in the healthy volunteer trials of this drug.
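       One reason this standard misleads is sheer arithmetic: a trial sized to show a benefit on its primary endpoint is usually far too small to turn even a doubling of a relatively rare harm into a statistically significant difference. A rough sketch, with invented but typical numbers (a normal-approximation power calculation, not any company’s actual figures):

```python
# Rough power of a two-sided two-proportion comparison (normal approximation).
# Numbers are invented for illustration: a harm doubling from 1% to 2%.
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power to detect a difference between two event rates."""
    p_bar = (p1 + p2) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p2 - p1) - z_crit * se_null) / se_alt)

# A trial of ~200 patients per arm - ample for many efficacy endpoints.
print(f"Power to detect a 1% -> 2% harm with 200 per arm: {power_two_proportions(0.01, 0.02, 200):.2f}")

# How big would each arm need to be for 80% power on the same harm?
for n in range(500, 20001, 500):
    if power_two_proportions(0.01, 0.02, n) >= 0.80:
        print(f"Roughly {n} patients per arm needed for 80% power")
        break
```

       On numbers like these, a failure to reach statistical significance for a harm says more about the size of the trial than about the drug.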

       Taking the same approach, Dr. Beasley later in his career argued that the only ADR on olanzapine was weight gain. Aware that olanzapine causes marked akathisia and has a very high rate of suicidal events in company RCTs, I asked the company for their suicide data but was stonewalled. I asked the UK Minister of Health how anyone could obtain informed consent from a patient to take this drug without data on a problem as serious as this, and whether he could get the data for me – but got no answer. A freedom of information request later produced a Lilly document encouraging company personnel to do everything possible to get Healy to prescribe olanzapine – they were not referring to providing me with the data.

       From Lilly documents I later got to see, olanzapine may have the highest suicide rate in RCT history. But around this time, as I understand it, Lilly was threatening to pull out of the UK if olanzapine did not feature at the top of NICE guidelines for the treatment of schizophrenia.

       There is a difference of statistical approaches here.

 

[From Beasley 1994]

 

PS: Have you had any special statistical training, Doctor Beasley?

CB: I have had some elementary statistical courses, I have not had extensive training in statistics. 

PS: When you say elementary statistical courses, do you mean those courses that one would ordinarily take in a college?

CB: That's correct. 

PS: What statistical courses did you have at Yale?

CB: I took psychology statistics, a psychology statistics course.

PS: Any others?

CB: There was a research design course that was somewhat statistically related, and then when I was in medical school, I had a brief course. A mini-course.

 

       In the Tobin trial, a jury of folk from Wyoming – who, as the urban myth goes, can be assumed to have been to the right of Dick Cheney – found GSK’s statistical approach unconvincing. They were in no doubt that, but for the fact that he had been put on paroxetine 48 hours earlier, Don Schell would not have wiped out his entire family.

Proper Trials

       All views, including those Dr. Beasley espouses about RCTs, appear at points in time. There are the philosophical questions about whether these views are right. There is also the historical question about how views like this arise. Dr. Beasley’s concept of a proper trial,

       “If a ‘proper’ RCT rejects the null hypothesis and results in an interpretation that treatment X reduces symptoms of schizophrenia, treatment X indeed and truthfully causes a reduction in the symptoms of schizophrenia in the entire population for which the treatment might be used.”

       leads straight to that historical question.

       In the 1970s, companies began to outsource the running of their clinical trials, the writing of the manuscripts purporting to represent the results of those trials, drug development itself and later the public relations defense of products in both public and academic domains.

       While all of these external operations by independent companies are required to meet Good Clinical Trial Practices, Good Medical Writing Practices, Good Laboratory Practices and Good Public Relations Practices, there is growing scope for plausible deniability. Facing a lawyer asking questions, senior company people can look genuinely blank and can point in court to the host of Good Practice standards they adhere to, so that, for example, the articles on their drugs meet quality standards that academics (including David Healy) don’t meet – in addition to which, of course, FDA has found no problems with their trials.

       Lilly does not appear to have gone down this route to the extent other companies did – it is largely Lilly authors on their trials and papers, as in the Beasley 1991 paper.

       This may be one reason why depositions of Lilly staff appear to me (albeit not on the basis of blindly collected and statistically analyzed data) to feature a greater frequency of people being asked if they had memory problems (see above). There also seems to be a greater deployment of Jesuitical skills, with staff saying “No” when asked whether they have seen a document before, while giving the impression that their “No” rests on a certainty that this is merely a copy of a document they have seen. An alternative response was that they had not seen the document for years, having left it behind in the files of whoever took up their position when they were moved to different duties.

       For both the companies that outsource and those that don’t, views about clinical trials like those Dr. Beasley espouses offer the best possible defense. There is a close family resemblance between the outsourcing practiced by other companies, the compartmentalization within Lilly, and what happens in company clinical trials, which Dr. Beasley illustrates in the paragraph immediately before the one quoted above:

       “An RCT studies a conceptual hypothesis. For example, the conceptual hypothesis is that treatment X reduces the symptoms of schizophrenia. This conceptual hypothesis is formalized in terms of a difference between treatment X and a control treatment (e.g., placebo) in one or more measures. The conceptual hypothesis is the alternative statistical hypothesis. The statistical null hypothesis is the absence of effect of treatment X expressed as a lack of difference between treatment group X and the control group based on the selected measures.”

       The word formalized above is equivalent to operationalized. RCTs on this model don’t study anything – they are an algorithmic, thoughtless operation that conveniently dismisses the views of patients and clinicians.
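       Reduced to code, the operation Dr. Beasley describes looks something like this – a sketch with simulated scores (the numbers are invented), in which nothing ever looks at a patient:

```python
# The "operation" itself: simulated end-of-trial symptom scores and a two-sample
# test of the null hypothesis of no difference. Invented numbers, for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
treatment_x = rng.normal(loc=20, scale=8, size=150)  # simulated symptom scores on treatment X
placebo = rng.normal(loc=23, scale=8, size=150)      # simulated symptom scores on placebo

t_stat, p_value = ttest_ind(treatment_x, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject or fail to reject the null hypothesis - on this model, that is the entire "study".
```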

       In the case of adverse reactions to drugs, an operational approach opens up opportunities to recast problems in terms of patient-exposure years, to put events that did not happen on placebo under a placebo heading, and to code events so that what is a single problem dissolves into multiple different problems, none of which looks as though it happened all that often. Operations are technical rather than moral issues. We can apply particular techniques that keep the company profitable, or even alive, while patients die as a result, without having done anything illegal, immoral or unethical.
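       The coding point can be illustrated with a toy example – invented counts, not any company’s data – showing how ten cases of a single treatment-induced problem look when recorded whole versus split across near-synonymous coding terms:

```python
# Invented illustration: 10 cases of a single agitation-type problem in a
# 500-patient arm, coded as one term versus scattered across five terms.
n_patients = 500
coded_as_one = {"agitation-type events": 10}
coded_as_five = {"agitation": 3, "akathisia": 2, "restlessness": 2,
                 "hyperkinesis": 2, "nervousness": 1}

for label, counts in [("Coded as one problem", coded_as_one),
                      ("Coded as five problems", coded_as_five)]:
    print(label)
    for term, n in counts.items():
        print(f"  {term:<22}{n:>3} cases ({100 * n / n_patients:.1f}%)")
```

       Each of the five terms now sits comfortably below any threshold a reader might worry about, although the patients are the same.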

       An operational approach can make it close to impossible to say anything causes anything. As things stand, we don’t even have proper evidence to say alcohol makes people drunk – several thousand years of experience notwithstanding. One of the only things anyone can say is that drugs produced by pharmaceutical companies confident enough to submit data to regulators cause benefits. Multiple negative trials can be dismissed as failed trials – as the 30 out of 30 negative trials in depressed minors have been. Another thing we can say is that a bunch of drugs, available on prescription only because we have reason to believe they are more dangerous than alcohol or nicotine, are actually extraordinarily safe, causing only minor problems such as weight gain in the case of Zyprexa.

       The Zyprexa Papers – both Jim Gottstein’s 2021 book and the actual papers – along with billion-dollar fines, seem to belong to a separate reality.

       I come down on the side of the Gottstein version of reality. Company approaches, like that proposed by Dr. Beasley, produce the appearance rather than the substance of an investigation.

       Key to all this is a word Dr. Beasley uses 68 times – data. When I asked Lilly for data, which they refused to share, I was asking for figures. Figures, however, are only a proxy for data. What you think about controlled trials hinges on your answer to the question “what is data?”

       Science hinges on everyone being able to interrogate an experiment. In the case of clinical trials, this means interrogating the people who took the drug – not the figures on some rating scale or the figures for adverse events. In the case of the man who died of burns, you need to be able to see his full medical record or perhaps interview his wife, which reveals that he poured petrol on himself intending to commit suicide but only died five days later from his burns. The company coded this as a death from burns, not a suicide. As Dr. Beasley’s 1991 paper brings home, rating scales let raters record a reduction in suicidal ideation in people about to attempt suicide.

       Interrogating people is in my opinion the only way to approach reality. But pharmaceutical companies do not give us access to the data or the people. 

       Dr. Beasley’s idea of a proper trial prescinds from the history, anthropology and more general reality of trials. Dr. Beasley alludes to this when mentioning the money lost if trials move too slowly owing to professional patients, diagnostic services that will let companies check whether people actually have the diagnoses they are supposed to have, placebo creep and related problems. The context for this lies in an operationalism of health services that followed on from the pharmaceutical companies’ turn to operationalism as part of corporate development. This has recently led the University of Pittsburgh Medical Center, which most people thought was the biggest employer in Pennsylvania, to declare that in fact it has no employees.

       This changing landscape means that, in order to get healthcare, some people have to become professional patients. It means pharmaceutical companies can dump clinical trial companies that have inordinately high placebo responses, possibly linked to efforts to game the system by recruiting patients who do not really have the diagnosis in question. Or a pharmaceutical company can suggest a trial company find a way to lower its placebo response rate – without the hiring company having any responsibility for how this might end up being done.

Postscript

       Dr. Beasley introduces a note about QT intervals that seems further removed from my target piece on RCTs than anything else.  Here is some context. 

       Around the time the Prozac pediatric trials were being reviewed by FDA, Lilly submitted a license application for R-fluoxetine. This was withdrawn, as I understand it, in part because of QTc interval problems, which the parent fluoxetine must share. QTc interval problems are an issue with all SSRIs. The effects of fluoxetine on QTc intervals turned up in the pediatric trials. In response to FDA concerns about a statistically significant increase in mean QTc found in the trials, Lilly argued that the initial analysis reflected random variability. FDA’s reviewer responded that, with a P-value of 0.009, the result was, by definition, unlikely to have been produced by random variability. Lilly was invited to study this further but never did.

       FDA nevertheless approved pediatric Prozac, while turning down an adult Prozac isomer. A growing number of children are at risk from this complication, made more likely by the treatment cocktails they are now on, many components of which have additional QTc-lengthening effects. This makes it difficult to blame a specific drug if there is a death. Blame gets outsourced to a doctor who somehow should have known better.

 

References:

Beasley CM Jr. Comment – Part 1. David Healy:  Do Randomized Clinical Trials Add or Subtract from Clinical Knowledge. inhn.org.controversies. July 8, 2021.

Beasley CM, Dornseif BE, Bosomworth JC, Sayler ME, Rampey AH, Heiligenstein JH, Thomson VL, Murphy DJ, Masica DN. Fluoxetine and suicide: a meta-analysis of controlled trials of treatment for depression. BMJ 1991;303:685–92. 

Emslie GJ, Rush AJ, Weinberg WA, Kowatch RA, Hughes CW, Carmody T, Rintelmann J. A double-blind, randomized, placebo-controlled trial of fluoxetine in children and adolescents with depression. Arch Gen Psychiatry 1997;54:1031-7. 

Emslie GJ, Heiligenstein JH, Wagner KD, Hoog SL, Ernest DE, Brown E, Nilsson M, Jacobson JG. Fluoxetine for acute treatment of depression in children and adolescents: a placebo-controlled, randomized clinical trial. J Am Acad Child Adolesc Psychiatry 2002;41:1205-15. 

Gottstein J.  The Zyprexa Papers. Samizdat Health Writer’s Co-operative, Toronto; 2021.  

Healy D, Le Noury J, Wood J.  Children of the Cure.  Samizdat Health Writer’s Co-operative, Toronto; 2020. 

Keller MB, Ryan ND, Strober M, Klein RG, Kutcher SP, Birmaher B, Hagino OR, Koplewicz H, Carlson GA, Clarke GN, Emslie GJ, Feinberg D, Geller B, Kusumakar V, Papatheodorou G, Sack WH, Sweeney M, Wagner KD, Weller EB, Winters NC, Oakes R, McCafferty JP. Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial. J Am Acad Child Adolesc Psychiatry 2001;40(7):762-72. 

Le Noury J, Nardo JM, Healy D, Jureidini J, Raven M, Tufanaru C, Abi-Jaoude E. Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence. BMJ 2015;351:h4320.

Teicher MH, Glod C, Cole JO. Emergence of intense suicidal preoccupation during fluoxetine treatment. Am J Psychiatry 1990;147(2):207-10. 

Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.

  

*Requests for items contained in the “David Healy Papers,” including this deposition, should be directed to Online Archive of California, oac.cdlib.org/findaid/ark:/13030/c81j9c5j/.  The deposition is also available on HealyProzac.com and on Study329.org

 

August 5, 2021