Martin M. Katz: Clinical Trials of Antidepressants: How Changing the Model Can Uncover New, More Effective Molecules

Comment by Walter A. Brown

 

Although in the past 50 years both the US federal government and the pharmaceutical industry have spent billions of dollars seeking new treatments for mental illness, clinicians and researchers agree that no truly novel psychotropic drug has surfaced over this time. The key word here is novel.

Antidepressants are a case in point. The pharmaceutical industry comes up with “new” antidepressants all the time, and they are launched with great fanfare. But these “new” antidepressants are invariably me-too variants of older drugs. In some instances, the antidepressants now in use have fewer side effects than the older ones, but they are no more effective. And the newer antidepressants share many of the limitations of their forebears. Like the first antidepressants, the newer ones take several weeks to exert their full effects and they are ineffective in a large proportion of patients. The psychiatric community has acknowledged this lack of treatment innovation as a major problem. Although some of the reasons for the absence of innovation have been identified, the remedy is far from clear.

First, as many have lamented, despite great advances in our understanding of the brain, little is known about the specific brain abnormalities giving rise to depression. Thus, there are no obvious targets against which to design new antidepressants. As a result, pharmaceutical companies, a major source of treatment innovation, search for potentially useful “new” drugs by looking for compounds that are similar in structure or effects to the existing ones. This approach does identify drugs that work about as well as the existing ones (me-too drugs), but with respect to innovation it can only fail.

In addition, as Martin Katz suggests in this persuasive monograph, even if a researcher has in hand a compound with novel psychotropic properties, our current system for evaluating psychotropic drugs makes it unlikely that its novel clinical effects would be detected, particularly if they were unexpected.

Mindful of the impediments to new antidepressant development and the high failure rate of contemporary antidepressant clinical trials (only about half the trials of approved antidepressants show them to be significantly better than placebo), Katz tackles several features of clinical trial methodology with an eye toward improving the success, efficiency, and scientific value of those trials.

There’s a good bit of wisdom in this brief (66-page) volume. Katz argues, convincingly, that since clinical trials are time-consuming and expensive, it makes sense to maximize the information they provide. Instead of the current practice of evaluating outcome simply by the change in total score on a measure of depression severity, such as the HAM-D or MADRS, Katz suggests that, in addition to assessing changes in the depressive syndrome as a whole, efficacy studies should also include thorough measurement of the individual components of depression: anxiety, motor retardation, hostility, and so forth. Katz points out that analysis of components provides more information on a drug’s spectrum of action and would foster a better understanding of the relationship between a drug’s pharmacologic activity and its behavioral effects. A clinical trial thus modified would go beyond a strictly commercial venture and advance the science of psychopharmacology. In some instances, analysis of components might point to a symptom of depression that is particularly responsive to an experimental drug and thus rescue an otherwise failed trial. If this approach had been followed in the first trials of SSRIs, their value as anxiolytics would have been discovered far earlier.

I agree wholeheartedly with Katz’s idea that the information provided by clinical trials, and their scientific value, would be enhanced by a components analysis. But I would take his concept of maximizing information a bit further. Let’s not forget that the antidepressant activity of the very first antidepressants, imipramine and iproniazid, was discovered when they were being studied for other conditions: imipramine was first tried in patients with schizophrenia (a few became hypomanic and a few showed a reduction in depressive symptoms), and iproniazid induced euphoria in some of the tubercular patients who received it. It’s difficult to deliberately court serendipity, but clinical trials could incorporate, as a matter of policy, an open-minded stance toward clinical effects; frequent, meticulous, and extensive clinical observation; and attention to, and follow-up of, unexpected clinical changes.

Katz also points to data from his own and others’ studies that challenge the widely held belief that it takes several weeks of antidepressant treatment before improvement occurs. He shows that much of the symptom relief brought by antidepressants comes in the first two weeks of treatment and that the type of early response predicts response later in the course of treatment. Notably, the absence of improvement in the first two weeks is highly predictive of a lack of response at six weeks. Clinical trials could be less costly and time-consuming, Katz suggests, if they were shortened on the basis of early response. Although early response can be detected with conventional severity ratings on the HAM-D, Katz’s work suggests that measurements of components are more sensitive to early clinical change. He points out that prospective studies are required to pin down the relationship between early changes in depressive components and eventual outcome. Such studies would, needless to say, provide information pertinent to clinical practice as well as clinical trial design.

Katz’s final recommendation is to use central ratings of videotaped interviews to assess patients in clinical trials. He provides a number of arguments for the value of this approach in multicenter trials, including reduction of variability among sites and raters, an enhanced capacity to observe and evaluate nonverbal behavior (Katz maintains that such behavior is easier to assess for someone observing the interview than for the person conducting it), and the capacity to establish an archive of taped interviews for further study. These proposed advantages of video-based ratings make sense on intuitive grounds, and Katz points to data generated by him and his colleagues suggesting that these ratings are reliable and more sensitive to clinical change than conventional ratings. Nevertheless, given the logistical hurdles and expense of this approach, data showing conclusively that it provides an advantage in reliability, validity, and outcome are required before implementation is warranted.

Katz gives a nod to ketamine, but throughout his book he refers to monoaminergic systems, serotonin, norepinephrine, and neurotransmitters as providing the neurophysiologic basis for both depressive symptoms and drug actions. Given the steadily eroding validity of the monoamine hypothesis, this book would rest on firmer ground if it stuck to psychopathology and eschewed unproven neurochemistry. As Katz says: “The essence of what is proposed here is that we convert the ‘clinical trial’ into a ‘scientific, clinical study’ aimed at achieving both the practical, primary aim of determining whether the new drug is efficacious for the targeted disorder, and the secondary scientific aims of describing the nature and timing of the full range of clinical actions the drug has on the major aspects of the depressive disorder.” This conversion can be accomplished without recourse to pathophysiological theories.

A few spots need copyediting. There are some useful appendices, including one that lists the instruments used to measure the depressive components.

 

Walter A. Brown

July 7, 2016