Ken Gillman: Medical science publishing: A slow-motion train wreck
Edward Shorter’s comments
I am usually an admirer of Ken Gillman's brilliant polemical pen, and this piece, though tip-toeing close to the line of intemperate diatribe, nonetheless displays flashes of brilliance. I'm puzzled, though, by the introductory indictment of publisher Robert Maxwell. Surely his Pergamon Press couldn't have been solely responsible for the wreckage-strewn trail that Gillman identifies. There is somewhere a lovely photo of Maxwell talking to Seymour Kety, the scientific director of NIMH. If Gillman is right, poor Kety was oblivious to being “led like a lamb to the slaughter.” I recall Kety as being quite astute. Could Maxwell have infected even him, or is Gillman's analysis vastly overwrought?
Some of the analysis is right on; some of it misses the point.
Gillman's gist is that scholarly publishing in Psychiatry is in trouble because of “sub-standard refereeing” (the “Bigs” being too busy to referee properly) and “sub-standard literature searches,” which, unaccountably, have somehow managed to miss some of Gillman's own work. The third leg of this stool is the “unqualified and compromised editors.” I must say, I know many of these editors, and they are among the brightest bulbs in the field. There are hundreds of editors, of course, as indeed there seem to be hundreds of journals, and not all can be top drawer, but still...
At the end, Gillman throws in the by-now tediously familiar critique of ghostwriting. Yes, we know about that. It's awful. So, up to this point, we have a searing indictment that is somehow silent on the real problems, as I see them.
The contrast between the German scholarly journals in Psychiatry around 1900 and those in the US today is stunning. The papers of 1900 bear scarcely a numeral, save the page number, and the pieces themselves do seem to go on endlessly. But the authors took on the big questions: what is the difference between mania and catatonic agitation, the difference between melancholic stupor and other kinds of stupor, and so forth. These are central issues. There are about five Kurt Schneider papers that really changed the entire understanding of psychopathology — and those didn't have many numbers in them either.
By contrast, I can barely pick up the American Journal of Psychiatry without encountering on every page a sea of numbers. My eyes glaze over. Can we really advance the understanding of human psychopathology with papers that look as though they are the authors' computer printouts? The basic problem here is reducing Psychiatry to a quantitative science, much like Physics. But the complexities of the brain and mind are not comparable to those of Physics. Psychiatry is comparable to Sociology in seeking a quantitative backbone for its major claims. But thinking we can prove assertions to the third decimal point is delusive. The field does not lend itself readily to quantification, even though everybody relishes the feeling that they are doing “science” with this forest of numbers. The use of questionnaires rather than close observation in drug trials is a case in point. We tot up the numbers and there's-yer-answer. Roland Kuhn, who discovered the clinical effectiveness of imipramine in melancholic depression, despised numbers.
Second point: today's literature seems to take on largely trivial questions. This is like the belief of scientists around the time of José Ortega y Gasset that “all the big problems have been solved.” Yet, have they been? One longs to see papers that relentlessly disassemble the diagnostic confusions of the DSM. How many depressions are there? Are there really 10 separate “anxieties”? (Emil Kraepelin thought there were none.) On the therapeutic side, are the SSRIs really “antidepressants,” and if not, what in the tremendous storehouse of psychopharmacology's past deserves a re-think? Maybe the golden oldies wouldn't be patentable, but hey, perhaps the NIMH could help us here. Don't forget, we once had the Psychopharmacology Service Center, which funded academic drug trials across the continent.
Third point: the problem with journals today is not that there are too many poor ones, “funded by drug companies as a channel for getting dodgy papers published.” Industry doesn't put ghostwritten papers in obscure pay-to-play journals that no one reads. The industry-funded, ghosted and manipulated papers absolutely end up in tip-of-the-lance publications. And this is because some editors — not necessarily the best ones — fear the withdrawal of advertising if they behave skeptically. For Big Pharma, only the top drawer is good enough. And you, as the author of a critical piece, will beat your head against the wall if you try to get it published in a first-line journal. These are realities. Gillman, of course, is aware of this, but he prefers to tilt against other targets that are important but secondary.
These issues — inappropriate quantification, sinking into trivia and the hijacking of our scholarly literature — should be at the forefront of our critical agenda. Gillman seems to drive straight at them, but then swerves off the road.
June 6, 2019