Skepticism about science and medicine

In search of disinterested science


Has all academic publishing become predatory? Or just useless? Or just vanity publishing?

Posted by Henry Bauer on 2017/06/14

A pingback to my post “Predatory publishers and fake rankings of journals” led me to “Where to publish and not to publish in bioethics – the 2017 list”.

That essay brings home just how pervasive for-profit publishing of purportedly scholarly material has become. The sheer volume of this supposedly scholarly literature raises the question of who actually looks at any part of it.

One of the essay’s links leads to a listing by the Kennedy Institute of Ethics of 44 journals in the field of bioethics. Another link leads to a list of the “Top 100 Bioethics Journals in the World, 2015” by the author of the earlier “Top 50 Bioethics Journals and Top 250 Most Cited Bioethics Articles Published 2011–2015”.

What, I wonder, does any given bioethicist actually read? How many of these journals have even their Table of Contents scanned by most bioethicists?

Beyond that: Surely the potential value of scholarly work in bioethics is to improve the ethical practices of individuals and institutions in the real world. How does this spate of published material contribute to that potential value?

Those questions are purely rhetorical, of course. I suggest that the overwhelming mass of this stuff has no influence whatever on actual practices by doctors, researchers, clinics and other institutions.

This literature does, however, support the existence of a body of bioethicists whose careers are tied in some way to the publication of articles about bioethics.

The same sort of thing applies nowadays in every field of scholarship and science. The essay’s link to Key Journals in The Philosopher’s Index brings up a 79-page list, 10 items per page, of key [!] journals in philosophy.

This profusion of scholarly journals supports not only the communities of publishing scholars in each field but also an expanding community of meta-scholars whose publications deal with the profusion of publication itself. The earliest work in this genre was the Science Citation Index, which capitalized on information technology to compile indexes through which researchers could discover which of their published works had been cited, and where.

That was unquestionably useful, including by making it possible to discover people working in one’s own specialty. But misuse became abuse, as administrators and bureaucrats began simply to count how often an individual’s work had been cited and to equate that number with quality.

No matter how often it has been pointed out that this equation is too wrong to be rescued, the attraction of supposedly objective numbers, and the ease of obtaining them, have made citation-counting an apparently permanent feature of scholarly life.

Not only that: the practice has been extended to judging a journal’s influence by counting how often the articles in it are cited, yielding a “journal impact factor” that, again, is typically conflated with quality, no matter how often or how learnedly the meta-scholars point out the fallacies in that equation: for example, different citing practices in different fields, editorial practices that sometimes limit the number of permitted citations, and the frequent citation of work that had been thought important but turned out to be wrong.

The scholarly literature had become absurdly voluminous even before the advent of on-line publishing. Meta-scholars had already learned several decades ago that most published articles are never cited by anyone other than the original author(s): see for instance J. R. Cole & S. Cole, Social Stratification in Science (University of Chicago Press, 1973); Henry W. Menard, Science: Growth and Change (Harvard University Press, 1971); Derek de Solla Price, Little Science, Big Science … And Beyond (Columbia University Press, 1986).

Derek Price (Science Since Babylon, Yale University Press, 1975) had also pointed out that the exponential growth of science since the 17th century had to cease in the latter half of the 20th century, since science was by then consuming several percent of the GDP of developed countries. And growth in research funds has indeed ceased; but the advent of the internet has made it possible for publication to continue to grow exponentially.

Purely predatory publishing has added more useless material to what was already unmanageably voluminous, with only rare needles in these haystacks that could be of any actual practical use to the wider society.

Since almost all of this publication has to be paid for by the authors or their research grants or patrons, one could also characterize present-day scholarly and scientific publication as vanity publishing, serving the benefit only of the author(s). Except, that is, that this glut of publishing now supports yet another publishing community: the scholars of citation indexes and journal impact factors, who concern themselves, for example, with “Google h5 vs Thomson Impact Factor” or who offer advice to potential authors, evaluators, and administrators about “publishing or perishing”.

To my mind, the most damaging aspect of all this is not the waste of time and material resources on producing useless stuff; it is that judgment of quality by informed, thoughtful individuals is being steadily displaced by reliance on numbers generated by information technology through procedures that all thinking people understand to be invalid substitutes for informed, thoughtful human judgment.

 



How to interpret statistics; especially about drug efficacy

Posted by Henry Bauer on 2017/06/06

How (not) to measure the efficacy of drugs pointed out that the most meaningful data about a drug are the number of people who must be treated for one person to reap benefit (the number needed to treat, NNT) and the number who must be treated for one person to be harmed (the number needed to harm, NNH).

But this pertinent, useful information is rarely disseminated, and most particularly not by drug companies. Most commonly cited are statistics about drug performance relative to other drugs or relative to placebo. Just how misleading this can be is described in easily understood form in this discussion of the use of anti-psychotic drugs.
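To see why relative-risk figures can mislead while NNT and NNH stay informative, here is a minimal sketch using purely hypothetical numbers (not drawn from any real trial): NNT and NNH are simply the reciprocals of the absolute differences in event rates, so a headline-grabbing relative risk reduction can coexist with a very modest absolute benefit.

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    return 1 / arr

def nnh(treated_harm_rate, control_harm_rate):
    """Number needed to harm = 1 / absolute increase in harm."""
    ari = treated_harm_rate - control_harm_rate
    return 1 / ari

# Hypothetical trial: 4% of untreated patients have a bad outcome vs 3%
# of treated patients. The relative risk reduction sounds impressive
# (25%), but the absolute reduction is one percentage point, so 100
# people must be treated for one to benefit.
print(round(nnt(0.04, 0.03)))   # 100

# If treatment raises the rate of some harm from 2% to 5%, about 33
# people treated yields one person harmed.
print(round(nnh(0.05, 0.02)))   # 33
```

With figures like these side by side, a reader can weigh one person helped per hundred treated against one harmed per thirty-three, which is exactly the comparison that relative-risk reporting obscures.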

 

That article (“Psychiatry defends its antipsychotics: a case study of institutional corruption”, by Robert Whitaker) has many other points of interest. Most important, of course, is its potent demonstration that official psychiatric practice is not evidence-based; rather, its aim is to defend the profession’s current approach.

 

In these ways, psychiatry differs only in degree from the whole of modern medicine (see WHAT’S WRONG WITH PRESENT-DAY MEDICINE) and indeed from contemporary science on too many matters: see Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (Jefferson, NC: McFarland, 2012).
