Ken Gillman: Medical science publishing: A slow-motion train wreck*
I respectfully dedicate this commentary to the memory of Barney Carroll. Those close to him, and those who knew him, have good reason to look on his life’s legacy with pride, pleasure and admiration.
Bernard J Carroll (b 1940; q 1964; MD, PhD) died from cancer on 10 September 2018.
Abstract
This commentary traces the influences that have adversely affected medical science publishing, starting with the initiation of the modern model of medical publishing by the psychopathic fraudster Robert Maxwell, of Pergamon Press, back in the 1960s. The number of published journals in the medical sciences has increased greatly in the last few decades, to the point where even the wealthiest North American libraries cannot afford the subscriptions. The number of competent people with time to referee the papers offered for publication has steadily diminished as career-promotion and time pressures on academics have grown. The amount of behind-the-scenes manipulation of supposed knowledge continues unabated (ghost-writing); agnoiology and agnotology thrive following their seeding by big tobacco decades ago. The inevitable net result is that standards of editorship, papers, and refereeing have all declined, to the point where so many papers are so unremarkable that 50% of them are never cited subsequently in the scientific literature. Many are probably never even read by anyone. The accuracy and relevance of the bibliographies in papers have also decreased greatly; most referees no longer check the papers cited in publications, and many cited papers have little to do with the work in which they are cited. Despite much talk about training and auditing for all of these crucial aspects of scientific publishing, nothing substantial has eventuated, and there is no assessment or oversight of the ‘overseers’. This situation has continued to worsen relentlessly.
Doctors have always been good at denying that they are affected by drug company advertising, and now they have also become adept at denying the decaying quality of journals.
It is suggested that the time is near when it will be more logical and efficient for papers to be archived by institutions, and their merit assigned post-‘publication’ by various computer-generated algorithms, such as those developed by companies like Google. That will free up billions of dollars currently paid to rich publishers who add little of value to the scientific endeavour.
Introduction
As an independent researcher outside the university and academic system, I perhaps see things from a different perspective from those who are captives of the system — captives they are, and ‘yoked to the plough’. In the decade since my retirement from clinical medical practice I have published a number of papers reviewing various aspects of neuropharmacology. As someone with an ‘H-index’ of 26 and more citations (>3,000) than the great majority of professors (a typical professor’s H-index is around 14, with total citations ~1,000 (Doja, Eady, Horsley, Bould, et al. 2014)), and one who has published in journals in many different disciplines, I speak from a position of considerable breadth of experience.
NB. Citation statistics show that half of all published manuscripts are never cited; 10 or more citations puts a paper in the top 24%, and 100 or more in the top 2% (Patience, Patience, Blais and Bertrand 2017).
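For readers unfamiliar with how an H-index is computed, the following is a minimal sketch in Python (my own illustration of the standard definition, not code from any of the cited sources): a researcher’s H-index is the largest number h such that h of their papers have each been cited at least h times.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([120, 45, 12, 2, 1]))  # -> 3
```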
Consider this: if individual doctors had to pay a subscription for the journals they wished to read (which is exactly what I used to do in my early days), the number of journals published would drop 1000-fold overnight.
Individual doctors simply do not subscribe to journals anymore, and virtually nobody reads them, or reads anything more than the titles of selected papers, as revealed by a survey discussed in this recent blog:
www.medpagetoday.com/blogs/revolutionandrevelation/72029.
PubMed lists about 30,000 medical journals, and there are thousands more that are not listed.
No conspiracy theory is required to explain why there are more journals that are less often read; this has simply evolved naturally from the basic building blocks of the profit motive (in its naked, self-regulated, neo-liberal expression), the requirement for academics to ‘publish or perish’, and the arm’s-length payment to publishers (most who read journals have no idea how much libraries pay for them**). The mechanisms described by Jay, which have biased western ‘democracies’ towards something more like plutocracies, are similar (Jay 2009). Add in the collective delusion of the supposed advantage and efficiency of goal-directed, industry-focused, short-term, self-funded research, and presto! A perfect recipe for the decline of good science — and a precipitous decline it has been, and continues to be.
** This is the same as doctors prescribing drugs and having no idea (and often not a care) about how much they actually cost us all as taxpayers.
History is the key
I am sure few doctors or academics have even the faintest idea that the principles and foundations of the medical publishing system we now ‘enjoy’ were established by a Second World War spy who reinvented himself after the war and became a psychopathic fraudster on a grand scale.
I refer to the founder of Pergamon Press, the egregious Robert Maxwell. That name was a pseudonym adopted by this Czech-born Jew who came to London, via Berlin, at the end of the war, and established an insatiable taste for the good life, which he assuaged with his duplicitous ingenuity and consolidated with his extraordinary bullying and dishonesty, on a scale that even characters like Bernie Madoff would pay obeisance to. Accounts of his bare-faced dishonesty (of the kind only a psychopath can pull off) can readily be found elsewhere. Suffice it to say that when he drowned ‘falling’ off his luxury yacht in 1991, he was about to be questioned or charged with criminal offences, including war crimes, and with the embezzlement of what would now be equivalent to several billion dollars.
The best ‘one-liner’ I saw called him ‘the bouncing Czech'.
Our interest in this unsavory character stems from the typically ingenious and deceitful way in which he pioneered and set up the current model of medical publishing, which did not exist prior to his entrepreneurship — most science had previously been published by learned societies, and for their members.
In the early days of Pergamon Press, the 1960s, there were soon problems, and Maxwell was ousted from the board. At that time, he was described, presciently, by British Government fraud investigators as ‘unsuited to run a public company’; but he nevertheless won back control of Pergamon and continued to ‘get away with it’ for another 30 years! Cojones in spades.
The size of the beast
Science publishing is a mega-business with global revenues of around ten billion dollars, and with profit margins bettering Apple, Google, and Amazon — an investment gem! And Maxwell was 140 kg in his later years — both bloated with profit.
Recently, Buranyi (2017), in an excellent article about Maxwell, quotes one eminent scientist as saying:
I have to confess that, quickly realising his predatory and entrepreneurial ambitions, I nevertheless took a great liking to him.
That sounds like a pretty young virgin being inducted into a brothel which she believes is a beauty parlour. It is clear that he seduced a great many scientists, especially in the medical field (easily stroked egos abound).
There was a toxic combination of a charming ruthless psychopath manipulating naïve and compliant academics — it is hard not to think of the expression ‘like lambs to the slaughter’. Right from the start Maxwell was overwhelming these people with lavish gifts of wine, cigars, and luxury trips — a well-proven strategy.
When people learn that I write scientific articles and referee papers for scientific journals they say something like ‘that must be useful income in your retirement’: they are astonished when I say that nobody gets paid anything for any of this work. Therefore, without befuddling everyone with too many complexities, I need to indicate the basics of the brilliant business model that Maxwell was responsible for instituting, and which was enthusiastically emulated by other publishers, who were frantically playing catch up with him in the 1960s and ‘70s.
Basically, he smartened up the presentation and marketing, took key people, whom he appointed as editors and board members of the numerous new journals he created, on trips around the Greek islands whilst they ‘found the right strategy for the journal’, assisted by ‘leggy blonde secretaries’ (as one informant put it).
NB. His daughter, Ghislaine, was later convicted of sex trafficking of a minor (for her lover, the infamous billionaire businessman Epstein). One wonders where she learnt that tactic! She was his favourite child [he named his yacht after her]: but, by all accounts, he was a right-royal bastard to most of his numerous offspring, and also to his wife (after whom he did not name his yacht). I wonder if Ghislaine ‘solicited’ for him, before Epstein?
And the next step? Having tarted up the packaging a bit, at little cost, he then sold the material back to the scientists and research institutions that had funded the work in the first place, via the libraries which all academic institutions maintain to provide material as part of their educational role. Almost all of the time and work necessary to achieve this was contributed free by the academics themselves! How naïve can you get? The answer to that, I suppose, depends on how insecure and how vain you are.
In no time at all he had created a merry-go-round and had everyone by the short-and-curlies, and could simply increase the price remorselessly. There were endless permutations and combinations which he ingeniously engineered to give enormous leverage to his product. I won’t go into those here, but I am sure somebody doing a Ph.D. at a business school somewhere has written about it, because Maxwell was a clever and ruthless man. Many psychopathic characters are rather cowardly, but Maxwell was, I suspect, one of the less common breed who actually had considerable physical courage, albeit tinged with recklessness and ruthlessness, as perhaps his incipient war-crimes charges might have revealed, had he lived to face them.
Another key element to understand, and this is something that Maxwell did understand, but that, in the early stages, the competition did not, is that the price-moderating effect of competition does not come into it. This is because every time you create a new journal, like the ‘International Global Journal of Recent Advances in Current Big Toe Surgery’, you create a new niche, which makes further space for more unnecessary publications and further fuels the fire of publish-or-perish — it does not reduce the market for the previously existing journal of ‘Foot Surgery’. The key bonus, however, is that academics in this field can feel more important (and be appointed as an editor or board member of one or more of the journals) and have their specialist status as ‘big toe surgeons’ aggrandized. The library budget takes another hit!
Façade triumphs over accuracy and substance
Sed quis custodiet ipsos custodes? (But who will guard the guards themselves?) (Juvenal, c. 115 CE)
The most important duty of the editors and referees of journals is to guard the quality and probity of the scientific literature.
As an expert in the field of serotonin toxicity I am well placed to comment on the longstanding and absurd situation where the superficial appearance of scientific publications receives undue attention, whilst the accuracy of citations used to justify the rationale of the text goes largely unexamined. This is, without question, the elephant-in-the-room in relation to the glaring deficiencies of the journal and peer-review process which is presided over largely by persons of uncertain suitability and competence. I have not seen any discussion on this topic in the many comments about the usefulness of peer review. Strange, strange indeed. Sed quis …
Editors and referees: selection, training, auditing?
First, the answers to those easy questions.
· Selection: random (now often generated by a computer algorithm)
· Training: none
· Education: none
· Auditing: none
· The same applies to the editors themselves
As some wag recently commented, there are few important jobs in society for which you need no competence, no experience, and no special qualifications; one is refereeing for journals, and the other is representing ‘the people’ in Parliament (Aisen 2002; Altman 1994; Ferguson, Marcus and Oransky 2014; Garcia-Larrea 2016; Lundh, Barbateskovic, Hrobjartsson and Gotzsche 2010; Ray 2002; Siler, Lee and Bero 2015; Smith 2006; Tyrer 2015).
Outside of a small, and continually diminishing, percentage of ‘top journals’ much refereeing is little short of a joke — as Burns argues in ‘Academic journal publishing is headed for a day of reckoning’ (Burns 2017).
Whilst the following examples are from my own experience, I do not doubt that many authors would tell similar stories: e.g. only last year it was shown that three of the top medical journals had each rejected every single one of the 14 top-cited articles of all time in their discipline. Epic fail.
My most highly cited paper, on TCAs, was submitted to the most eminent journal in the relevant field (the British Journal of Pharmacology), so one would hope for top-quality refereeing. That paper is now a benchmark paper in the field and has been cited nearly 400 times, at least 10 times more than any other comparable paper.
It was initially rejected out of hand. The two referees made only a few derisory lines of comment. After my discussion with the editor, two new referees — independent of drug companies — were recruited (that editor was good, but most editors would not even have bothered to reply to a protest such as the one I made). One referee was succinct and simply said it was an excellent paper that should be published. The other referee started his comment with ‘whilst this is a good paper it suffers from a number of serious errors’. He then went on to list more than a page of what he considered to be punctuation errors and the like (some were ‘correct’, but he made no sensible comment that was remotely scientific; perhaps he was a frustrated school-master). The psychiatrist in me laughed: it was sad, and he likely suffered from obsessive-compulsive disorder. However, for people whose careers depend on getting things published this is far from a joking matter. The driving force behind this is that referees frequently get an ego boost from being asked to give an ‘expert’ opinion. They therefore feel obliged to make some sort of comment, especially when their personality does not allow them to be magnanimous and gracious, as the other referee was.
A crucially important aspect of any paper, apart from it being rational and logically coherent, is that the papers it cites, in support of the various facts and points made, should be relevant and correctly interpreted. This is where, in my experience of publishing in many different disciplines, the system completely breaks down. Few referees are sufficiently knowledgeable about the field in question to carry out the important task of spotting misinterpreted and irrelevant references; indeed, few referees check the references at all (as an irrepressible maverick I cannot resist slipping in a few deliberate mistakes, just to test people). Also, the comments one sees from other referees make it plain that most of them have no idea what the appropriate references should actually be. It is also rare to see appropriate references suggested by referees when these have been omitted by the authors. Hence, I am confident in asserting that few referees check the references that are given.
An appalling and disgraceful example of this was dissected in detail on my website recently: a review paper from the supposedly prestigious Maudsley Hospital in London (often referred to as a ‘tertiary’ institute), to which their professor Taylor put his name. The paper bears the stamp of somebody whose first language is not English, and Taylor obviously did not take enough interest in the manuscript to correct that; indeed, one wonders just how much he really had to do with it. He should be ashamed. One referee of their paper, whom I criticized equally frankly for failing to correct their serious errors, wrote me a response in which he justified his lax standards by saying, ‘I accepted that review or that article just because I have feeling that everything on SS should be welcomed’ [sic]. So much for the standards and probity of science: with gatekeepers like that, why have a gate at all?
When we cannot rely on material from ‘Russell Group’ establishments, then things have truly reached rock bottom.
Now then, you might think that the accuracy and relevance of references are not consequential. How wrong you would be. A central pillar of the validity of the scientific literature is that cited references are actually relevant and good papers. Referees are not making sure that good and appropriate references are used, and thus the whole basis of the metrics used to assess authors and journals is becoming a complete nonsense.
Let me explain just a little more about this — again I have to use my own experience because such information is usually in a confidential and personal domain. One just does not know what actually happens in other instances. Because I am a world expert in my particular field of serotonin toxicity I know that a majority of the material published, relevant to this field, fails to cite the appropriate quality references. Some of these are mine, but many of them are also by the few other eminent researchers in the field. The fact that so many papers do not cite these key references means that these references do not attain the priority in the field that they deserve. Contrariwise, many trivial papers that should never have been cited at all get cited multiple times, merely because they mention something popular or controversial. As a referee, one often gets the impression that references have been scattered through manuscripts as if they had been thrown over them like confetti.
You do not have to be a mathematician to understand that such practices rapidly make a complete farce of most publication metrics. There are papers in my field that should have been cited five or even 10 times more frequently than they have been, and this failure is largely caused by a combination of sub-standard refereeing and sub-standard literature searches.
Here are other points from the cited papers:
· Lack of agreement between reviewers
· No checking of reviewers regarding financial conflict of interest
· Failure to detect errors/fraud, lack of transparency, lack of reliability, potential for bias, potential for unethical practices, lack of objectivity
· Lack of recognition and motivation of reviewers
· No rating of reviewers’ performance.
One ex-editor (Smith 2006) stated:
‘despite being central to the scientific process [refereeing] was largely unstudied until various pioneers—including Stephen Lock, former editor of the BMJ, and Drummond Rennie, deputy editor of JAMA— urged that it could and should be studied. Studies so far have shown that it is slow, expensive, ineffective, something of a lottery, prone to bias and abuse, and hopeless at spotting errors and fraud.’
Failure to do quality literature searches
A closely related problem is that these mistakes with references are generated by the failure of the doctors who write these papers to do quality literature searches. Such searches would lead them more successfully to the appropriate benchmark papers in the field. An important reason for these failures is the shortage of, and failure to use, professional librarians. It is well recognised that a simple search using a database like PubMed (which is all most doctors do) finds barely half of the relevant material — yet the great majority of published papers use that inadequate strategy (a small illustrative sketch follows).
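To make the point concrete, here is a minimal, purely illustrative sketch (my own, assuming network access and using NCBI’s public E-utilities ‘esearch’ endpoint; the search terms are hypothetical examples) showing how a naive single-phrase PubMed query and a broader query with synonyms can return very different numbers of records:

```python
# Illustrative only: compare PubMed hit counts for a naive query versus a
# broader synonym query, via NCBI's public E-utilities esearch endpoint.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term):
    """Return the number of PubMed records matching a search term."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

naive = pubmed_count('"serotonin syndrome"')
broader = pubmed_count('"serotonin syndrome" OR "serotonin toxicity" OR "serotonin reaction"')
print(naive, broader)  # the broader query typically returns substantially more records
```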
Editors: Unqualified and compromised
No, you do not get a prize for guessing that not even editors need any relevant qualifications. Some of them do not get paid at all; some get paid a token part-time salary; few are remunerated in the way you would expect for what should be a serious, highly skilled, full-time job.
The ‘Retraction Watch’ website estimates that ‘two-thirds of editors at prominent journals received some type of industry payment over the last few years – which, at many journals, editors are never required to disclose’: retractionwatch.com/2017/11/08/editors-top-medical-journals-receive-industry-payments-report/
A widely aired criticism, when I was a young man 40 years ago, was that editors did not ensure there was statistical evaluation of research — astonishingly, that situation remains the same to this day: of 114 ‘top’ journals examined, only a third had statistical review for accepted manuscripts!
You could say that fact alone is all you need to know to decide that you agree with Prof Ioannidis that most medical research is wrong (Ioannidis 2005). Many years ago, the eminent Oxford statistician Altman stated ‘we need less research, and better research’ (Altman 1994) — the exact opposite has eventuated.
Needless to say, few editors indeed have statistical knowledge, never mind expertise. Their theoretical role of vetting papers before they are sent to referees for an opinion seems to have been largely abandoned: I have been sent plenty of papers from editors that any competent and diligent editor should have rejected out of hand ‘at a glance’, before imposing them on a referee’s time. One imagines most of the editors just do not bother.
My guess is that many journals, even those from ‘reputable publishers’ (obviously, when you have finished reading this you will realize that the preceding phrase is an oxymoron), actually use a computer algorithm to generate suggested referees, to whom requests are then probably sent out automatically without the editor doing anything at all. As a psychiatrist I once published a paper in an infectious diseases journal, because one of the antibiotics introduced some years ago acts as an MAOI and was therefore a risk for precipitating serotonin toxicity. The publication was only a letter. Yet, within a matter of months, I was getting requests to referee papers for other journals in the infectious diseases field! Obviously, anybody with half a brain who looked at my publication record would have known that was totally inappropriate. You might think it does not matter because such inappropriate requests get refused, but you would be wrong – many doctors cannot resist the siren-call of being lauded as an expert any more than they can resist a dinner invitation from a drug company. I receive so many of these inappropriate requests, and replying with ‘unsubscribe’ frequently does not stop them coming, that I now routinely reply with obscenities: that seems to be more effective.
I will not dwell on it here, but it is glaringly obvious that completely bogus refereeing of one sort or another is a spreading epidemic (Ferguson, Marcus and Oransky 2014).
Not only is there bogus refereeing, but there are completely bogus journals (be suspicious of any journal with a title like ‘the international global journal of recent advances, current trends, in…’). I am not joking: they exist. Some have been funded by drug companies as a channel for getting dodgy papers published — a few of these have been rumbled, but I am sure there are others in circulation yet to be discovered. In the grey area between these two are journals that accept payment to publish papers; there has been a massive expansion in the number of these kinds of journals recently, and they cover the whole spectrum from careless and dubious through to completely bogus.
Time and numbers
The basic problem is simple and undeniable. Academics have multiple demands on their time, over and above their normal work, involving teaching, mentoring, sitting on committees, weeks spent working up grant applications, doing their own research, and probably near the bottom of the list, doing refereeing for journals (unpaid and largely unrewarded).
On the other hand, the number of journals has proliferated exponentially over the last decade or two. Increased demand, reduced supply; the result, inevitably, is decreasing standards. It is perfectly obvious that even the best journals are going to struggle to find competent referees, which is precisely why many now ask authors to suggest referees for their own papers. That obviously opens yet another door for favoritism and cheating, a door through which it is quite clear many people are marching without a backward glance.
Beyond ghost-writing: ghost-managed medicine
Back in 2009, the Institute of Medicine recommended the prohibition of ghost-writing; editors have not instituted any systematic assessment of ghost-writing since then (Lacasse and Leo 2010). I am sure that if they tried to, they would be sacked.
As yet another ex-editor (Barbour 2010) stated: it ‘threatens the credibility of medical knowledge and medical journals’. And, to parody Saki: The editor was a good editor, as editors go; and as editors go, she went. Ginny Barbour was sacked — in fact now good editors do not get appointed in the first place. Most of them seem little more than puppets or figureheads.
See also:
· Ghost marketing: pharmaceutical companies and ghost-written journal articles (Moffatt and Elliott 2007)
· Legal remedies for medical ghost-writing: imposing fraud liability on guest authors of ghost-written articles (Stern and Lemmens 2011)
· Systematic review on the primary and secondary reporting of the prevalence of ghost-writing in the medical literature (Stretton 2014)
· Ghost-writing revisited: new perspectives but few solutions in sight (PLoS Editors 2011)
The huge amount of behind-the-scenes manipulation of supposed knowledge continues unabated (ghost-managed medicine); agnoiology (the study of ignorance) and agnotology (culturally induced ignorance or doubt — not yet in the OED) are thriving following their seeding, and fertilizing with bundles of cash, by ‘big tobacco’ decades ago (Michaels 2010). As Sismondo’s just-published book details, ‘contract research organisations’ and ‘publication planners’ essentially populate and orchestrate most of the medical knowledge space (Sismondo 2018); see also other refs (Lacasse and Leo 2010; Stern and Lemmens 2011; Stretton 2014; Sismondo 2007; Sismondo and Doucet 2010; Barbour 2010).
In short: the whole medical knowledge space is macro-managed by those with the money, largely of course for their own benefit — a direct analogy to plutocratic politics.
Salami-publish, or perish: self-imposed burden
Yet another way in which academics have tripped over their own trousers and shot themselves in the foot is by creating the self-imposed system whereby there are greater rewards for publishing a number of small papers than for one substantive work — it is ridiculous. A system has been created that rewards quantity rather than quality and, at the same time, costs us all a fortune by catalysing the creation of yet more silly journals that have to be paid for. Academics may protest that the peer-review system stops that happening, but the information herein clearly demonstrates that this is a dangerous delusion on a grand scale. It merely fuels the production of yet more un-needed and poor-quality papers covering different varieties of big toe surgery.
It is painfully clear that academics are wasting an immense amount of time and resources producing third rate publications which nobody is ever going to read, and which certainly are not going to have any significant impact on the world of science: and all because they create a rod for their own backs through the current ‘publish or perish’ mentality. It is stupid, vain, and pathetic, and it is high time it was stopped. It simply fuels the fire and provides profits for publishers who add little significant value to the process.
Post-archiving assessment
A post-publication assessment of the value of work has already emerged automatically from the cross-correlation of information that is now compiled. It just needs to be formalized, augmented, fine-tuned and organized. Just as Google knows what you have looked at, and for how long, and in what part of a physique you prefer the curves or bulges in the objects of your adoration to be, so it is possible automatically to register the expertise and viewing predilections of any reader, and compute a metric to judge work accordingly. The citation index is merely the first crude and clumsy step in this direction: for instance, it does not factor in whether the paper was cited for good reasons or bad. Neither does it take account of whether the citation came from a substantial paper by a reputable researcher in the field, or just from a letter to a journal. And so on.
Post-archiving assessment already exists and works well: in mathematics, physics, and computer science, researchers post pre- and post-review versions of their work on servers such as arXiv, at a cost of about $10 per article! A system of community post-archiving review can be added on top of whatever computer algorithms are used. One major benefit of this approach is that it focusses on the article, not the journal (Van Noorden 2013).
It is not rocket science: if I, as an expert on ST, spend an hour reading a paper relevant to ST (and Google knows I have an H-index of 26), then the system will weigh that information as being of greater value than if a first-year pharmacy student spends an hour reading the same paper (a toy sketch of such a weighting follows below). It would be simple to build in more sophisticated ratings of the quality of material, which might be weighted using agreed algorithms to make the metric more discriminating. Universities etc. do, or could, hold repositories of work that they considered worthy of consideration for indexing (many already do). These would be examined by ‘bots’ like those used by Google. It does not take much thought or imagination to see how such a system would rapidly make standard journals redundant. All the other types of content that are put in journals of various sorts could be absorbed into such a system without undue difficulty. Apart from universities, grant agencies and funding charities could hold archives of whatever work they wish.
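As a purely hypothetical sketch of the kind of weighting described above (the weights, the field-match factor, and the reader data are all my own assumptions, not a description of any existing system), an expertise-weighted ‘reading’ metric might look something like this:

```python
from dataclasses import dataclass

@dataclass
class ReadingEvent:
    reader_h_index: int   # crude proxy for reader expertise (assumption)
    field_match: float    # 0..1, how close the reader's field is to the paper's (assumption)
    minutes_read: float

def engagement_score(events):
    """Toy post-archiving metric: weight each reading event by the reader's
    expertise and field relevance, then sum. All weights are illustrative."""
    total = 0.0
    for e in events:
        expertise_weight = 1.0 + e.reader_h_index / 10.0
        total += e.minutes_read * expertise_weight * e.field_match
    return total

# An hour's reading by a senior specialist counts for far more than an hour
# by a first-year student outside the field.
expert = ReadingEvent(reader_h_index=26, field_match=1.0, minutes_read=60)
student = ReadingEvent(reader_h_index=0, field_match=0.3, minutes_read=60)
print(engagement_score([expert]), engagement_score([student]))  # 216.0 vs 18.0
```

A real system would obviously need far more care (anti-gaming measures, unique author identifiers, and so on), but the arithmetic itself is trivial.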
Such a system would have the advantage that any kind of fraudulent work would be revealed and would disappear from the citation literature like a flower wilting in a drought; contrariwise, important work would ascend to its appropriate ranking efficiently. Bad work would soon acquire the equivalent of the plague cross on the door, and few would see it or read it. All those participating would have a unique identity code, both as authors and commentators. All the computer software and protocols needed to achieve this already exist, to a degree that would make fraud and gaming the system extremely difficult.
The hundreds of millions of dollars thus saved could be directed to libraries, librarians, and a few appropriate regulatory and overseeing bodies which could run the system in a democratic and transparent manner. It is difficult to see why this would not be infinitely superior to the current system. Cheaper, better, quicker, more transparent, hard to cheat or ‘game’: what’s not to like?
Eisen, a co-founder of the Public Library of Science (PLoS), has said: ‘but my frustration lies primarily with leaders of the science community for not recognizing that open access is a perfectly viable way to do publishing’. Academics themselves could institute such a system; it would be hugely empowering throughout the whole educational world, and it would make all research freely available to everyone.
The big publishers have not refrained from self-interested bullying tactics: a group of them (Reed-Elsevier, Wiley-Blackwell, Springer, Taylor & Francis, the American Chemical Society, and Sage Publishing) formed the Coalition for Responsible Sharing (CRS) to pressure the scientist social-networking site ResearchGate into taking down 7 million ‘unauthorized’ copies of their papers.
That prompted scientists to consider not publishing in their journals, and this backlash caused some reversal of position. Indeed, referees should decline to assist bullying publishers like Elsevier, who use their power to protect their profits and inhibit access to scientific knowledge.
It certainly does seem that academics themselves are the only significant stumbling block to the institution of such systems. I once described the therapeutic timidity of my colleagues as pusillanimous, and that is a word which seems relevant in this context. What will give them the courage and motivation to do something? Perhaps some of the big charitable funding agencies, like the Wellcome Trust, have sufficient to gain from this to become a driving force?
Convinced?
Tempora mutantur et nos mutamur in illis (times change, and we change with them)
It is time for a change. High time.
If the above does not convince you that journals have become a train wreck, then how alike are you to the virgin in the beauty parlour at the beginning of my commentary? You just have not realized that you have been ‘ravaged’ yet.
We might hope that a degree of redemption comes from the fact that good researchers at good institutions possess some esoteric and arcane knowledge concerning which few remaining journals are actually good — but the cynic in me cries out ‘that is a triumph of hope over experience’. Even if there is a grain of truth in that, what inevitably remains the case is that most of the research that they rely on and cite is from journals that have unacceptably low standards in respect of all the points discussed above. And there is a steadily weakening relationship between journal impact factors and individual papers’ citation rates. This is a result of more people searching via ‘Google’ (or similar), rather than looking at actual journals. Indeed, most people would not now recognize the cover of the journal from which the paper they are citing actually comes (Lozano, Larivière and Gingras 2012).
The ’infection-rate’ of journals is now well beyond the limits of ‘herd immunity’. In an epidemic everyone is at risk of infection. ‘Therefore, send not to know …’
And, to the extent that an arcane knowledge of which journals are ‘good’ exists, then that is effectively post-publication reviewing and secret metrics.
As the amount of background research and reading I have done to write this commentary has increased, I have found myself thinking more and more frequently of the parallels between heroin addicts and academics, the general behavior of publishers and drug pushers, and the need for gratification from addicts and academics getting their publication fix.
Conclusion
There is strong evidence that the whole publishing enterprise is plagued with faults, incompetence, bias, dishonesty, and inefficiency, whilst costing a fortune, most of which goes into the pockets of major publishing houses who make huge profits without benefiting medicine much at all. This is the poisonous legacy of the psychopathic fraudster Robert Maxwell.
Referees should decline to assist bullying publishers like Elsevier, who use their power to protect their profits and inhibit access to scientific knowledge.
It is suggested that the time is here when it will be more logical and efficient for papers to be archived by institutions, and their merit assigned after posting. This can be achieved reliably and transparently using various methods, including computer-generated algorithms such as those developed by companies like Google. That will free up for redeployment large amounts of money currently being paid to rich publishers who add little of value to the scientific endeavour. This money can be redirected to improve library services and to remunerate those academics who actually do the real work that adds, or should add, real value to the scholarly world.
References:
Aisen ML. Judging the judges: keeping objectivity in peer review. J Rehabil Res Dev 2002; 39: vii-viii.
www.ncbi.nlm.nih.gov/pubmed/11926332.
Altman DG. The scandal of poor medical research. BMJ 1994; 308: 283-4.
www.ncbi.nlm.nih.gov/pubmed/8124111.
Barbour V. How ghost-writing threatens the credibility of medical knowledge and medical journals. Haematologica 2010. 95: 1-2.
Buranyi S. Is the staggeringly profitable business of scientific publishing bad for science? The Guardian, 2017. www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science.
Burns P. Academic journal publishing is headed for a day of reckoning. theconversation.com/academic-journal-publishing-is-headed-for-a-day-of-reckoning-80869, 2017.
Doja A, Eady K, Horsley T, Bould MD, et al. The h-index in medical education: an analysis of medical education journal editorial boards. BMC Med Educ 2014; 14: 251.
www.ncbi.nlm.nih.gov/pubmed/25429724.
Ferguson C, Marcus A, Oransky I. Publishing: The peer-review scam. Nature 2014; 515: 480-2. www.ncbi.nlm.nih.gov/pubmed/25428481.
Garcia-Larrea L. Twenty years after: Interesting times for scientific editors. Eur J Pain 2016; 20: 3-4 onlinelibrary.wiley.com/doi/epdf/10.1002/ejp.831.
www.ncbi.nlm.nih.gov/pubmed/26711620.
Ioannidis JP. Why most published research findings are false. PLoS Med, 2005. 2(8): p. e124.
Juvenal (c. 55–140 CE). Sed quis custodiet ipsos custodes? Satire VI, line 347, c. 115 CE.
Jay A. A New Great Reform Act. CPS, 2009. www.cps.org.uk/files/reports/original/11102711530420090711PublicServicesANewG.
Lacasse JR, Leo J. Ghostwriting at elite academic medical centers in the United States. PLoS Medicine, 2010. 7(2): p. e1000230.
Lozano GA, Larivière V, Gingras Y. The weakening relationship between the impact factor and papers' citations in the digital age. Journal of the American Society for Information Science and Technology 2012; 63: 2140-5.
Lundh A, Barbateskovic M, Hrobjartsson A, Gotzsche PC. Conflicts of interest at medical journals: the influence of industry-supported randomised trials on journal impact factors and revenue - cohort study. PLoS Med, 2010. 7(10): p. e1000354.
Michaels D. Doubt is their product: how industry's assault on science threatens your health. 2010.
Moffatt B, Elliott C. Ghost marketing: pharmaceutical companies and ghostwritten journal articles. Perspect Biol Med 2007; 50: 18-31.
Patience GS, Patience CA, Blais B, Bertrand F. Citation analysis of scientific categories. Heliyon, 2017. 3(5): p. e00300.
www.ncbi.nlm.nih.gov/pubmed/28560354.
PLoS Medicine Editors. Ghostwriting revisited: new perspectives but few solutions in sight. PLoS Medicine, 2011. 8(8): p. e1001084.
Ray JG. Judging the judges: the role of journal editors. QJM 2002; 95: 769-74.
www.ncbi.nlm.nih.gov/pubmed/12454319.
Siler K, Lee K, Bero L. Measuring the effectiveness of scientific gatekeeping. Proc Natl Acad Sci USA, 2015; 112: 360-5.
www.ncbi.nlm.nih.gov/pubmed/25535380.
Sismondo S. Ghost-Managed Medicine: Big Pharma’s Invisible Hands. Mattering Press, 2018. www.matteringpress.org/wp-content/uploads/2018/07/Sismondo-Ghost-managed Medicine-2018-1.pdf.
Sismondo S. Ghost management: how much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med, 2007. 4(9): p. e286.
www.ncbi.nlm.nih.gov/pubmed/17896859.
Sismondo S, Doucet M. Publication ethics and the ghost management of medical publication. Bioethics 2010. 24: 273-83.
Smith R. The trouble with medical journals. J R Soc Med 2006; 99: 115-9.
www.ncbi.nlm.nih.gov/pubmed/16508048.
Stern S, Lemmens T. Legal remedies for medical ghostwriting: imposing fraud liability on guest authors of ghostwritten articles. PLoS medicine, 2011. 8(8): p. e1001070.
Stretton S. Systematic review on the primary and secondary reporting of the prevalence of ghostwriting in the medical literature. BMJ open, 2014. 4(7): p. e004777.
Tyrer P. A handmaiden to science: the role of the editor in psychiatric research. Acta Psychiatr Scand 2015. 132: 428. www.ncbi.nlm.nih.gov/pubmed/26366877.
Van Noorden R. Open access: The true cost of science publishing. Nature, 2013. 495(7442): p. 426-9. www.ncbi.nlm.nih.gov/pubmed/23538808.
*This document was originally published on Ken Gillman’s website Psycho Tropical Research (https://psychotropical.info/publications/), Commentaries 2018:11;1-19, and has been edited and formatted to conform to INHN posting standards.
March 21, 2019