Skepticism about science and medicine

In search of disinterested science

Archive for the ‘scientific literacy’ Category

What science says about global warming and climate change

Posted by Henry Bauer on 2017/07/06

There is strong evidence that global temperatures are not significantly dependent on the amount of carbon dioxide in the atmosphere (Climate-change facts: Temperature is not determined by carbon dioxide).

That’s what science — the evidence, the facts — says.

Nevertheless, the belief overwhelmingly widespread among the public and governments is the opposite: that carbon dioxide is the single most important determinant of global temperature and climate.

How could such a disparity between fact and public belief come about?

President Eisenhower foresaw the possibility half a century ago:
“in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite” (Farewell speech, 17 January 1961).

Such influence of a scientific-technological elite is possible because “science” has come to be believed in superstitiously: on authority, not because it offers sound evidence and logic (Superstitious belief in science). A number of popular misunderstandings about science conspire to maintain this state of affairs, notably a failure to appreciate how drastically scientific activity changed following World War II; science nowadays is not self-correcting, and it does not follow the so-called scientific method. A full discussion of those points is in my just-published Science Is Not What You Think — How it has changed, Why we can’t trust it, How it can be fixed.

The “fix” refers to the possible establishment of a Science Court to adjudicate expert differences over technical issues. That was first suggested more than half a century ago when the experts were at loggerheads and arguing publicly over whether power could be generated safely using nuclear reactors.
More recently, some legal scholars have pointed out that such an institution could help the legal system to cope with cases where technical issues play an important role.
Beyond that, I suggest that a Science Court is needed to force the prevailing “scientific consensus” to respond substantively to critiques like those made by the many critics of human-caused global warming and climate change.

Posted in consensus, global warming, politics and science, science is not truth, science policy, scientific literacy, scientism | Tagged: , | 1 Comment »

Superstitious belief in science

Posted by Henry Bauer on 2017/05/16

Most people have a mistaken, unrealistic view of “science”. A very damaging consequence is that scientific claims are given automatic respect even when that is unwarranted — as it always is with new claims, say about global warming. Dramatic changes in how science is done, especially since the mid-20th century, have made it less trustworthy than earlier.

In 1987, historian John Burnham published How Superstition Won and Science Lost, arguing that modern science had not vanquished popular superstition by inculcating scientific, evidence-based thinking; rather, science had itself become on worldly matters the accepted authority whose pronouncements are believed without question, in other words superstitiously, by society at large.

Burnham argued through detailed analysis of how science is popularized, and especially how that has changed over the decades. Some 30 years later, Burnham’s insight is perhaps even more important. Over those years, certain changes in scientific activity have also become evident that support Burnham’s conclusion from different directions: science has grown so much, and has become so specialized and bureaucratic and dependent on outside patronage, that it has lost any ability to self-correct. As with religion in medieval times, official pronouncements about science are usually accepted without further ado, and minority voices of dissent are dismissed and denigrated.

A full discussion with source references, far too long for a blog post, is available here.

Posted in conflicts of interest, consensus, denialism, politics and science, science is not truth, scientific culture, scientific literacy, scientism, scientists are human, unwarranted dogmatism in science | Tagged: | Leave a Comment »

What is scientific literacy good for?

Posted by Henry Bauer on 2016/01/03

The way scientific literacy is defined and measured makes no sense — see Scientific Literacy and Myth of the Scientific Method (1992/1994 and still in print, which surely says something about the validity of its arguments).
Scientific literacy is measured by what people know about things like atoms and about “the scientific method”, in effect by how well they could function within science; whereas scientific literacy should surely mean what non-scientists need to know about the role of science in society: when to believe the experts and when not to. By analogy, about medicine we don’t need to know how drugs work, say; we just need to know where to find data on how long a drug has been in use, what its side effects are, and whether there is already a lawsuit against a manufacturer that is still actively advertising it (quite a common circumstance; see the anticoagulants Pradaxa and Xarelto and the anti-diabetes drug Invokana at the moment, 2015-16).

It turns out that current measurements of scientific literacy yield results that should be highly embarrassing to the expert gurus on this topic.

For example, people who score high on “scientific literacy” do poorly on distinguishing pseudo-science from science — Chris Impey, Sanlyn Buxner, Jessie Antonellis, Elizabeth Johnson, & Courtney King, “A twenty-year survey of science literacy among college undergraduates”, Journal of College Science Teaching, 40 (#4, 2011) 31-7.

When it comes to human-caused climate change, perhaps the measures of “scientific literacy” are pretty meaningful after all, because the most scientifically literate according to these tests are least likely to believe that human generation of carbon dioxide is responsible for climate change:
“Climate skepticism not rooted in science illiteracy: Cultural values, not knowledge, shape global warming views, a study finds” (Janet Raloff, 29 May 2012)

“New study: Numerical and Science Literacy cause Climate Change Skepticism” (1 June 2012)

“Study: Climate skeptics and proponents score highest on climate science literacy…but are the most polarized” (Anthony Watts, 23 February 2015)

As I had pointed out in the first entry on this blog (A politically liberal global-warming skeptic?), most people’s views about human-caused climate change are determined by their political affiliation and not by their understanding of science or familiarity with the evidence.


Posted in global warming, media flaws, politics and science, scientific literacy | Tagged: | 6 Comments »

Meaningless research

Posted by Henry Bauer on 2014/07/17

Medicine and doctors are often symbolized by reference to Greek mythology:


Nowadays, though, much of so-called medical research would be better represented by p-values rampant over a field of nonsensical “associations”.

I had recently drawn attention (Statistics literacy) to a fine article about the lack of understanding of statistics that pervades the medical scene (Do doctors understand test results?). It explains how the risk of false-positive tests should be — but is not — understood by doctors and communicated to patients; and how relative risks are cited instead of absolute risks — a confusion that Big Pharma does much to promulgate because it helps to sell pills.
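To make the false-positive point concrete, here is a minimal sketch of the arithmetic behind it, with illustrative numbers of my own choosing, not taken from the article: even a fairly accurate screening test for a rare condition yields mostly false positives.

```python
# Why a "positive" test often does not mean the patient has the disease.
# All numbers below are invented for illustration.
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test that catches 90% of cases and falsely flags 9% of healthy people,
# for a condition affecting 1% of those screened:
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.90,
                                false_positive_rate=0.09)
print(f"Chance a positive result is a true case: {ppv:.1%}")  # about 9%
```

With these assumed numbers, more than nine of every ten positive results come from healthy people — exactly the sort of fact the cited article argues doctors fail to grasp and to communicate.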
Another set of misunderstandings underlies much of the “research” that is picked up by the media as significant news: data mining coupled with the virtually universal mistake of taking correlations as indicating causation. A recent case in point concerns Alzheimer’s Disease (AD):

“Sleep disorders may raise risk of Alzheimer’s, new research shows
Sleep disturbances such as apnea may increase the risk of Alzheimer’s disease, while moderate exercise in middle age and mentally stimulating games, such as crossword puzzles, may prevent the onset of the dementia-causing disease, according to new research to be presented Monday”.

Note that “may increase” and “may prevent” both imply causation: that sleep disorders may actually cause AD, while physical and mental exercise may bring about (cause) protection. But the evidence is only that there is some sort of correlation; and it’s vitally important to keep in mind that a correlation does not mean the two things are always found together, it means only that they are found together apparently more often than one might expect purely as a result of chance.

Note too that “may” is an often-used weasel word to insinuate something while guarding against being held accountable for actually asserting it; e.g., big campaign contributions may influence politicians; conflicts of interest may influence researchers; and so on.

Note as well “to be presented”, which illustrates publicity-seeking by researchers and the complicity of the media in that, as they ignore the fact that at this moment the matter is nothing more than hearsay: science is supposed to be published and evaluated before being taken at all seriously.

As to “purely as a result of chance”, everyone should understand, but few do, that the almost universally used method of calculating these probabilities as “p-values” is itself quite misleading; see “Statistics can lie, but Jack Good never did — a personal essay in tribute to I J Good, 1916-2009”.
The take-away lesson is that what researchers claim as “statistically significant” is often not at all significant, indeed it may be entirely meaningless; yet it will still be picked up by the media and ballyhooed as the latest breakthrough.

Here’s how researchers in medically related fields (and elsewhere too, of course) can generate publications effortlessly and prolifically while disseminating misleading notions:
Select a topic — AD, say.
Collect a data set of people who have that condition, the data including every conceivable characteristic: age (in several categories such as young, early middle age, middle age, late middle age, young-old, fairly old, quite old); exercise habits (light, moderate, heavy); alcohol consumption (light, moderate, heavy); diet (many variables — fat, meat, dairy, vegan, gluten-free, etc.); other medical conditions and history (many indeed); race and ethnicity; any others you can think of (urban or rural, say; employment history and type of employment — veteran; blue or white collar; innumerable possibilities); tests by MRI, complete blood analysis, etc.
Feed all the data into a computer, and set it to find correlations.

Purely by chance, at least 1 of every 20 possible pairings will produce a “p ≤ 0.05, statistically significant” result. Since p ≤ 0.05, this is publishable, especially since the written report fails to emphasize that the result emerged from a random sweep through 20 times as many pairings.
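That one-in-twenty arithmetic can be checked directly. The sketch below (Python, standard library only; the sample size, the number of pairings, and the normal approximation to the t distribution are my choices for illustration) correlates pairs of purely random variables and counts how many clear the conventional p < 0.05 bar.

```python
# Correlate many pairs of PURELY RANDOM variables and count how many
# come out "statistically significant" at the usual p < 0.05 threshold.
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def two_sided_p(r, n):
    """Approximate two-sided p-value for r (normal approximation to t)."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return math.erfc(abs(t) / math.sqrt(2))

n, trials = 50, 1000
hits = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]  # completely unrelated to xs
    if two_sided_p(pearson_r(xs, ys), n) < 0.05:
        hits += 1

print(f"'Significant' pairings from pure noise: {hits}/{trials}")
```

Roughly 5% of the pairings come out “significant” even though every variable is pure noise, which is precisely the dredging procedure sketched above.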

Naturally such results are quite likely to make little sense, since they are random by-chance associations. For example:
“[M]oderate physical exercise in middle age could decrease the risk that their cognitive deficits progress to dementia. . . .
Oddly, however, the association did not hold for people who engaged in light or vigorous exercise in middle age or for any level of physical activity later in life.
On a similarly counterintuitive note, another study suggested that high blood pressure among people at least 90 years old — “the oldest old” — may protect against cognitive impairment. . . . although hypertension is believed to increase the risk of Alzheimer’s and dementia for middle-aged people, the risk may shift with time” [emphases added].

The reason these results seem so incongruous and counterintuitive is, of course, that they were never genuine results at all, just “associations” that occurred when looking for correlations among a whole host of possibilities.

The notion that moderate exercise but not light or heavy exercise might actually be a significant cause of something like Alzheimer’s is not entirely beyond the realm of possibility, I suppose. Still, it seems sufficiently farfetched that I would hesitate — or be ashamed — even to mention the possibility until it had been reproduced in quite a few studies.
On the other hand I’m perfectly willing to see an association between high blood pressure and good cognition in the elderly, since good cognition depends on plentiful oxygen which depends on a good blood flow; and since arteries become less flexible with age, more pressure is needed to achieve that.
On a further hand, though, the notion that high blood pressure increases the risk of dementia in middle age strikes me as sufficiently absurd as to be dismissed pending the strongest most direct possible evidence; it “is believed” by whom?

Common sense cries out to be applied whenever a p ≤ 0.05 association is touted as meaning something. Try it out on the suggestion that “A daily high dose of Vitamin E may slow early Alzheimer’s disease”. Think about the caveats in that piece, and the trivial magnitude of the reported possible effect.
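A related trap is mistaking a small p-value for a large effect. The arithmetic below (invented numbers, not those of the vitamin E study) shows how a trivially small difference between groups becomes “highly significant” once the sample is big enough.

```python
# Illustrative arithmetic (numbers invented): a tiny effect becomes
# "highly significant" once the sample is large enough.
import math

effect = 0.05          # difference between group means, in standard deviations
n_per_group = 20_000   # participants per group

# Two-sample z-test, assuming unit variance in each group:
z = effect / math.sqrt(2 / n_per_group)
p = math.erfc(z / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.1f}, p = {p:.1e}")  # z = 5.0, p far below 0.05
```

The p-value here is minuscule, yet the groups differ by only a twentieth of a standard deviation, a difference of no practical consequence.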

That respected mass media feature such garbage may well be quite harmful. I would expect some number of people to start taking vitamin E supplements immediately, whether or not there is any indication that they are not already getting enough of it. Not to speak of all the >90-year-olds desperately trying to raise their blood pressure and puzzled about how to decide how much exercise at their age is “moderate” but not “light” or “heavy”; and all the 70-to-90-year-olds wondering at what age high blood pressure stops causing Alzheimer’s and starts protecting against it.


Posted in media flaws, medical practices, science is not truth, scientific literacy | Tagged: , , , | Leave a Comment »

Statistics literacy

Posted by Henry Bauer on 2014/07/13

Doctors are on average ignorant about statistics that are directly relevant to their practice, their advising of patients, and their ability to understand the tricks played by drug companies and their representatives. A comment to my HIV/AIDS blog mentioned an excellent article, “Do doctors understand test results?”, that everyone should read and re-read and learn from, because it is not only doctors who are woefully ignorant about statistics.

Everyone would benefit from understanding the difference between relative risk and absolute risk, and between survival rate and mortality,  for example.
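A small worked example of why the distinction matters (with invented numbers, not drawn from the article): the same trial can be advertised as halving risk while the absolute benefit is tiny.

```python
# Relative vs. absolute risk: the same trial result, two very different
# sounding descriptions. All numbers are invented for illustration.
events_treated, n_treated = 10, 10_000   # 0.10% had the event on the drug
events_control, n_control = 20, 10_000   # 0.20% had it without the drug

risk_treated = events_treated / n_treated
risk_control = events_control / n_control

relative_reduction = 1 - risk_treated / risk_control  # "cuts risk by 50%!"
absolute_reduction = risk_control - risk_treated      # 0.1 percentage points
nnt = 1 / absolute_reduction                          # treated per event avoided

print(f"Relative risk reduction: {relative_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_reduction:.2%}")   # 0.10%
print(f"Number needed to treat:  {nnt:.0f}")                  # 1000
```

“Cuts risk by 50%” and “helps one patient in a thousand” describe exactly the same data; it is easy to see which framing sells more pills.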

Posted in medical practices, scientific literacy | Tagged: , , | 1 Comment »

TED and TEDx reinvent the wheel — and get it all wrong (or, Ignorant punditry about science and pseudo-science)

Posted by Henry Bauer on 2014/05/30

For something like a century, scientists, philosophers of science, and many other scholars have grappled with this question: What criteria, principles, rules, or behavior characterize science by contrast to all other things? What exactly is “not-science”, in other words? What exactly is “pseudo-science”?

The upshot of these many decades of suggestions and discussions and argumentation among the most well-informed specialists is that no such criteria have been found: there is no satisfactory, general way to demarcate science from not-science or pseudo-science.

The classic summation of failed attempts is Larry Laudan’s “The demise of the demarcation problem” [1]. Even those who don’t agree that the issue is at a dead end [2] attempt to find a practical distinction by means of “family resemblances” or “fuzzy logic”, thereby acknowledging that the distinction can only be approximate, probabilistic, never a definitive one: no hard-and-fast, unequivocally valid set of criteria is able to identify an instance of “pseudo-science” without delving into the particularities of methods, evidence, and inference specific to that instance. Of course you can declare something wrong if you can show the methods to be inappropriate or incompetent, or that the claimed evidence is fudged or faulty or incomplete, or inferences are drawn against logic. But you don’t need a general, universal definition of “pseudo-science” to do that.

By hindsight, it even seems obvious that no universal definition of “science” could be found. It would have to be based on what everyone agrees constitutes science: biology, chemistry, geology, physics, etc. — not to speak of the behavioral and social sciences. Inferring from those real-world enterprises the “essence” of science means educing or inducing universal characteristics from empirical observations. But philosophy has long understood that induction from empirical observation or experience can never be guaranteed to yield universally applicable generalizations. (The classic illustration is that empirical observation yielded the principle that all swans are white, which was confounded upon the discovery of black swans in Western Australia.)

Moreover, a universally applicable definition of science would not change over time, whereas the activities that everyone calls “science” have changed drastically over time [3]. Most pertinent: some matters once accepted as proper science later became generally regarded as not-science or even pseudo-science, and some matters once pooh-poohed as pseudo-science later became accepted as quite proper mainstream science, for example, electromagnetic phenomena in biology [4].

The term “pseudo-science” can only mean something that pretends to be science but isn’t; and since there is no valid definition of “science”, there is equally no valid definition of “pseudo-science” by which it could be recognized.

Nevertheless, it remains quite common in public discourse that practicing scientists as well as professional and amateur pundits use the epithet “pseudo-science” to malign specific claims (say, the existence of Loch Ness Monsters or of Bigfeet) or even whole fields of activity (parapsychology, “cold fusion”, cryptozoology, ufology, etc.).
The basis for such maligning and pooh-poohing is that the topic has been found wanting by the prevailing consensus in mainstream science. But that basis is fatally flawed: the history of science tells of one after another mainstream consensus being itself found wanting and replaced, often by something that the mainstream had earlier resisted vigorously or ignored studiously [5-8].

The state of the intellectual art about this has been quite plain for decades. But this intellectual art is the domain of history of science, philosophy of science, sociology of science, and the comparatively young interdisciplinary umbrella of STS (Science & Technology Studies), of which most scientists, journalists, and pundits are woefully ignorant. That ignorance extends perforce to the public media generally, and to Internet punditry, very much including Wikipedia and its ilk, to an extent that would be highly embarrassing if those people and groups knew even a smidgen of what they ought to before blathering about “pseudo-science” or “science”.

There is so much of this ignorant blathering that I usually ignore it, but that blissful state was interrupted when I became aware of a recent instance from the prominent and prestigious TED  and its franchised TEDx ventures, which bill themselves as promoters of high-quality seminars — “Ideas worth spreading . . . the power of ideas to change attitudes, lives and, ultimately, the world”.

What TED and TEDx spread about science and pseudo-science is ignorant rubbish (A letter to the TEDx community on TEDx and bad science). As with other charlatans, they know how to cover their tracks: They acknowledge reality correctly in sweeping general statements and then try unobtrusively to get around it:
“What is bad science/pseudoscience? There is no bright and shining line between pseudoscience and real science”.
RIGHT. But that valid statement is followed immediately with tiptoeing away from validity:

“Needless to say, this makes it all terribly hard to detect and define”.
NO: it makes it IMPOSSIBLE to detect as a genre or class or supposed exemplar of a genre or class. The only way to evaluate any counter-mainstream claim is to dig into the specific particularities, and then to concede that any contemporaneous judgment of plausibility or potential validity can only be probabilistic. That’s the clear lesson of centuries of history of science and a century or so of scholarly preoccupation with this issue [4].

The TED ignoramuses then proceed to offer “guidelines” for what constitutes “good science”. All of those “guidelines” are plainly misguided, reflecting a childishly naïve, uninformed view of science:

“It makes claims that can be tested and verified”
Every scholarly source since Popper’s proposal of “falsifiability” has been clear about the impossibility of verification — there can never be a guarantee against the future appearance of a “black swan”.

“It has been published in a peer reviewed journal (but beware… there are some dodgy journals out there that seem credible, but aren’t.)”
As Ziman pointed out [9], something like 90% of the primary research literature is wrong to some degree (in physics, but that’s the epitome of science and it may well be worse in other fields).

“It is based on theories that are discussed and argued for by many experts in the field”
History teaches that all the experts can be wrong — and, in the long run, usually are.

“It is backed up by experiments that have generated enough data to convince other experts of its legitimacy”
That the experts agree is no reason to believe them, in part because in the long run they’re usually wrong [5-8]. Here’s a nice way of putting it [10]:
“Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had. . . .
Consensus is invoked only in situations where the science is not solid enough.
Nobody says the consensus of scientists agrees that E=mc2. Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way”.

“Its proponents are secure enough to accept areas of doubt and need for further investigation”
Few mainstream scientists exhibit that quality, as anyone familiar with actual scientists or with the history of science or the sociology of science knows.

“It does not fly in the face of the broad existing body of scientific knowledge”
The most significant advances are those that do contradict contemporary views; they spark scientific revolutions and become praised only by hindsight [5-8]

“The proposed speaker works for a university and/or has a PhD or other bona fide high level scientific qualification”
Any number of incompetents and kooks have such qualifications, as even a brief participation in a research community makes evident.

It is an endless source of astonishment to me that totally uninformed, ignorant people feel so free to hold forth with arrogant assurance, as TED does on the issue of science and pseudo-science. Don’t the TEDdies and their ilk ever stop to wonder where their knowledge comes from? “Knowledge” that is actually abysmal ignorance?


[1] Pp.111-27 in Physics, Philosophy and Psychoanalysis, ed. R. S. Cohen & L. Laudan, Dordrecht: D. Reidel, 1983
[2] For example, Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, ed. M. Pigliucci & M. Boudry, University of Chicago Press, 2013
[3] Henry H. Bauer, Three Stages of Modern Science, Journal of Scientific Exploration, 27 [2013] 505-13; From dawn to decadence: The three ages of modern science
[4] Henry H. Bauer, Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other Heterodoxies, University of Illinois Press, 2001
[5] Bernard Barber, Resistance by scientists to scientific discovery,  Science, 134 (1961) 596-602
[6] Ernest B. Hook (ed)., Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002
[7] Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970
[8] Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, 84-93
[9] John Ziman, p. 40 in Reliable Knowledge: An Exploration of the Grounds for Belief in Science, Cambridge University Press, 1978
[10] Michael Crichton, “Aliens Cause Global Warming”, Caltech Michelin Lecture, 17 January 2003

Posted in media flaws, science is not truth, scientific literacy, scientism | Tagged: , , , | 3 Comments »

60 MINUTES on aging — correlations or causes?

Posted by Henry Bauer on 2014/05/06

The TV program 60 Minutes on 4 May 2014 reported on studies of people older than 90, for clues to what allowed them to live so long. It was clear that neither the reporters nor the doctors understood the difference between correlations and causes.

Dementia was a prime concern, Alzheimer’s in particular. The mainstream dogma was taken for granted, that Alzheimer’s is defined and caused by amyloid plaques and tangles in the brain.
It turns out, however, that behavioral dementia does not correlate with either of those: people showing no behavioral signs of Alzheimer’s might have quite a lot of tangles and plaques, while people with behavioral dementia might have little or none of either or both.
One doctor did at one point say that one possibility might be that the tangles were not causes of dementia, but no one questioned the assumption that amyloid defines Alzheimer’s, despite the rather clear evidence that plaques and dementia are not correlated.
Lack of correlation is a good sign of lack of causation. If plaques and tangles cause Alzheimer’s dementia, then they should be present in all cases of AD. They are not. Yet the belief persists despite disproof.

At the same time, the misguided view that correlation indicates causation pervaded the program. It was said that maintaining weight or even gaining a bit made for — is a cause of, in other words — longevity; that vitamin supplements do not; that moderate alcohol and coffee intakes make for longevity, as do exercise and social activity.
BUT: Assume just for the sake of argument that healthy longevity is determined solely by genetics. Then exactly the same correlations would be observed. People with good genes would live longer, socialize more, exercise more, eat and drink with fewer restrictions . . . .
So those correlations in themselves say absolutely nothing about what might have actually caused the long healthy lives of the studied people.
Correlations never prove causation.
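The genetics thought-experiment is easy to simulate. In the toy model below (entirely my own assumption, for illustration), a single “good genes” factor drives both exercise and lifespan, while exercise itself has no effect on lifespan at all; the two nevertheless correlate strongly.

```python
# Toy confounder model: "genes" cause BOTH more exercise AND longer life;
# exercise has no causal effect on lifespan whatsoever.
import math
import random

random.seed(7)

people = []
for _ in range(5000):
    genes = random.gauss(0, 1)
    exercise = genes + random.gauss(0, 1)            # genes -> more exercise
    lifespan = 80 + 5 * genes + random.gauss(0, 3)   # genes -> longer life
    people.append((exercise, lifespan))

def pearson_r(pairs):
    """Pearson correlation of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

# Strong correlation appears even though neither variable causes the other:
print(f"exercise-lifespan correlation: {pearson_r(people):.2f}")
```

A study that recorded only exercise and lifespan would report this correlation as if exercise “made for” longevity, which is exactly the fallacy at work in the program.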

Some autopsies revealed that dementia and death were sometimes associated with signs of numerous mini-strokes that might have been so slight as to go unnoticed. The prevalence of such strokes was lower in people with higher blood pressure, the opposite of expectation based on current medical dogma.
The lack of correlation between high blood pressure and stroke indicates, of course, that high blood pressure may not be a significant cause of stroke, but this conclusion was not drawn.

The chief doctor in the program was quick to add that high blood pressure is still a risk factor for younger people, showing ignorance of actual data and brainwashing by contemporary medical dogma and shibboleths. It’s been known for a century that blood pressure increases with age — normally, naturally. That people over 90 have “high” pressure should not have been a surprise, nor that they had fewer strokes: IF one lives to a significantly old age, it means that one has not had too many strokes, and living that long means that one’s blood pressure will naturally, normally, be what is nowadays called, out of ignorance, “high”.

Posted in medical practices, scientific literacy | Tagged: , , , | Leave a Comment »

Peer review and consensus (Scientific literacy, lesson 2)

Posted by Henry Bauer on 2013/01/04

The conventional wisdom persistently points to peer review in science and medicine as the safeguard of quality and reliability. But peer review reflects the characteristics of the peer reviewers, and if the latter are biased or lazy or incompetent, then so will be the peer review, and neither quality nor reliability are safeguarded (Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992).

Consider how trustworthy — or not — peer review would be in other human activities, say in politics. Imagine it were decided that the remuneration of Congressional representatives would depend on how well they had furthered the best interests of the nation, and that the judgment were made via peer review awarding grades of A through E. Not uncommonly, Republican peer reviewers would grade Democrats rather lower than Democrat peer reviewers would, and vice versa, wouldn’t they? Not in every individual case, of course, but statistically, on average.

Quite generally, when peer reviewers are competitors or colleagues of those being reviewed, one should not expect purely disinterested evidence-based judgments to be made.

Scientists are human and subject to all the psychological and social weaknesses to which humans are prone (Scientific literacy in one easy lesson). It displays scientific illiteracy to imagine that peer review in science or medicine guarantees the integrity, objectivity, and quality of funding and hiring and publication decisions.

The reliability or otherwise of science itself runs rather in parallel with the reliability or otherwise of peer review. In the first era of modern science, conflicts of interest were largely personal (From Dawn to Decadence: The Three Ages of Modern Science), and the peer review exerted by the scientific community as a whole could average out biases and individual animosities and friendships — all the more so because the peer review was quite informal and amounted largely to testing and discussing claims already made public.

In the second era, careerism could affect peer review and the progress of science, but the stakes were not very high, and again the overall judgment of the scientific community was relatively disinterested, averaging out personal biases. It was still relatively informal.

In the present era, however, judgments made by peer review emerge from a highly formalized and bureaucratic system and not from the sum of individual opinions from essentially all the pertinent members of the scientific community, which had been the case in earlier times when review was largely post-publication, whereas now it is pre-publication. Furthermore, today’s competition is cutthroat and the stakes can be very high. For these reasons, contemporary peer review is not a sound way of judging what is the best science or the best scientists. Just as fraud in science has become a major preoccupation in the last few decades, so have issues about the reliability of peer review, and for the same reasons.

One dramatic episode saw the effective demise of the journal Medical Hypotheses. The overt reason given for emasculating the journal was that it had functioned under editorial rather than peer review, but the actual reason was that the journal had published well documented articles debunking a mainstream dogma (Chapter 3, “A Public Act of Censorship: Elsevier and Medical Hypotheses”, in Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth).

Journal editors are in prime position to evaluate the pros and cons of peer review. It takes little time as editor to learn that for any given submitted manuscript, one can obtain favorable reviews or unfavorable reviews by appropriate choice of peer reviewers. The integrity of science depends on the integrity of editors and others who are forced by the prevailing system to resort to peer review to justify what they do. But there is nothing automatically sound about peer review:

“Classical peer review: an empty gun”, says Richard Smith, former editor of the British Medical Journal and chief executive of the BMJ Publishing Group, now on the board of the Public Library of Science and editor of Cases Journal, which uses only a minimal peer-review system. Smith cites Drummond Rennie, deputy editor of the Journal of the American Medical Association (JAMA) and “intellectual father of the international congresses of peer review that have been held every four years since 1989”:

“If peer review was a drug
it would never be allowed onto the market.
Peer review would not get onto the market
because we have no convincing evidence of its benefits
but a lot of evidence of its flaws. . . .
[P]eer review . . . is . . .
an ineffective, slow, expensive, biased, inefficient,
anti-innovatory, and easily abused lottery:
the important is just as likely to be filtered out
as the unimportant”
— Breast Cancer Research, 12 [suppl. 4, 2010] S13

Richard Horton, editor of The Lancet, has pointed out that

Peer review … is simply a way
 to collect opinions from experts in the field.
Peer review tells us about the acceptability,
 not the credibility, of a new finding
— Health Wars: On the Global Front Lines of Modern Medicine,
New York Review Books, 2003, p. 306

Peer review cannot even be relied on to detect purely technical errors, as Richard Smith (above) found by deliberately inserting errors into manuscripts before they were reviewed. The sorry incompetence of statistical analysis in the medical scientific literature has been exposed innumerable times over the course of two or three decades with no sign of improvement; see for example D. G. Altman, “The scandal of poor medical research”, British Medical Journal, 308 (1994) 283, and nearly a decade later, “Poor-quality medical research: what can journals do?”, JAMA, 287 (2002) 2765-2767; also John P. A. Ioannidis, “Why most published research findings are false”, PLoS Medicine, 2 (2005) e124, and John P. A. Ioannidis & Orestis A. Panagiotou, “Comparison of effect sizes associated with biomarkers reported in highly cited individual articles and in subsequent meta-analyses”, JAMA, 305 (2011) 2200-10.
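The core of Ioannidis’s argument is simple arithmetic about false positives, which can be sketched as follows (my illustration, not taken from any of the cited papers; the prior, power, and alpha values are hypothetical). When most tested hypotheses in a field are false and studies are underpowered, the majority of “statistically significant” findings are themselves false:

```python
def positive_predictive_value(prior: float, power: float, alpha: float) -> float:
    """Probability that a statistically significant finding is actually true.

    prior: fraction of tested hypotheses in the field that are true
    power: probability a study detects a true effect
    alpha: significance threshold (false-positive rate for null effects)
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A speculative field: 1 in 10 tested hypotheses true, weak studies.
ppv = positive_predictive_value(prior=0.1, power=0.2, alpha=0.05)
print(round(ppv, 2))  # ≈ 0.31: most published positive findings are false
```

Even with these generous assumptions, fewer than a third of the significant findings would be true, which is the quantitative heart of “Why most published research findings are false”.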

Some of the flaws of peer reviewing were specifically criticized by stem-cell researchers (Jef Akst, “Peer review trickery?”, The Scientist, 2 February 2010). [I suggest that it is likely no coincidence that stem-cell research is presently a particularly “with-it”, faddish, competitive field.]
The editor of Nature, Philip Campbell, disagreed, outlining how carefully he and his editorial team try to guard against bias (“Peering into review”, Nature Medicine, 16 [#3, 2010] 239). But Campbell’s response comes from a thoroughgoing Establishment source, prevented by cognitive dissonance from recognizing its own fallibility; the editorial is worth reading carefully as an exemplar of self-serving generalities and rhetorical obfuscation. Nature’s inability to see itself as others see it has been illustrated amply on other occasions: for example, it insisted in 1997 that there was no reason to require authors to reveal financial conflicts of interest, and subsequently changed that policy only reluctantly, in 2001, revealing by its commentary that Nature still did not understand that conflicts of interest inevitably tend to induce bias (p. 159, Dogmatism in Science and Medicine). An inevitable tendency produces statistically significant effects, no matter that the tendency might have no effect in any given instance. That Nature fails to grasp that straightforward fact is surely the consequence of cognitive dissonance, not stupidity. I suggest that Nature’s universally acknowledged role as one of the two most authoritative publications in all of science (with Science) renders it incapable of appreciating the fundamental flaws that characterize the modern age of science (p. 67 in Dogmatism in Science and Medicine; From Dawn to Decadence: The Three Ages of Modern Science).
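The statistical point about inevitable tendencies can be illustrated with a small simulation (my sketch, not from the post; the bias and noise magnitudes are arbitrary). Give every individual judgment a tilt far too small to notice in any single case, and the aggregate of many such judgments still shows an unmistakably significant effect:

```python
import math
import random
import statistics

random.seed(1)

BIAS = 0.05   # hypothetical systematic tilt per judgment
NOISE = 1.0   # individual variation, twenty times larger than the tilt

# Each judgment looks like pure noise; no single one betrays the bias.
judgments = [random.gauss(BIAS, NOISE) for _ in range(100_000)]

mean = statistics.fmean(judgments)
sem = statistics.stdev(judgments) / math.sqrt(len(judgments))
z = mean / sem  # standard score of the aggregate tilt

print(f"mean tilt = {mean:.3f}, z = {z:.1f}")  # z far above 1.96
```

With 100,000 judgments the standard error shrinks to about 0.003, so a tilt of 0.05 stands roughly fifteen standard errors from zero: statistically overwhelming in aggregate even though it is invisible in any given instance.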

Innumerable studies have recognized that peer review discriminates against originality and novelty. There are at least three quite classic sources:
Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596-602.
Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, 84-93.
Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002.

Despite all this, peer reviewing remains the routine practice almost everywhere. One reason may be that no acceptable alternative seems in sight. Another is that the present system is satisfactory to established institutions, for it safeguards their established status. Furthermore, peer review enables decision-makers to avoid taking personal responsibility for their decisions, as when editors make their choices of reviewers. A less obvious reason is that those who maintain, use, nurture, and defend the system are largely scientifically illiterate: typically they come from within the ranks of practicing scientists and have not become acquainted with the insights acquired over decades by scholars of STS (Science and Technology Studies).

*                    *                    *                    *                    *                    *                    *                    *

“Consensus” is commonly cited as certifying the reliability of pronouncements about science, just as peer review is. That is unsound for the same reasons. The mainstream view, the “scientific consensus”, is established through processes governed by peer review; when peer review is flawed, so is the consensus. One can hardly speak more truly than Michael Crichton on this:

I regard consensus science
 as an extremely pernicious development
that ought to be stopped cold in its tracks.
Historically, the claim of consensus has been
 the first refuge of scoundrels;
 it is a way to avoid debate
 by claiming that the matter is already settled.
Whenever you hear the consensus of scientists
 agrees on something or other,
 reach for your wallet, because you’re being had….
 Consensus is invoked only in situations
 where the science is not solid enough.
Nobody says the consensus of scientists agrees that E = mc².
Nobody says the consensus is that the sun is 93 million miles away.
It would never occur to anyone to speak that way
— “Aliens cause global warming”,
  Caltech Michelin Lecture, 17 January 2003.

Posted in consensus, peer review, scientific literacy, scientists are human | Tagged: | Leave a Comment »

Scientific literacy in one easy lesson

Posted by Henry Bauer on 2012/11/30

Scientific literacy is usually thought of as knowing what atoms and molecules are; that the Earth is spherical and orbits the Sun; that we stick to the Earth, the Moon sticks to the Earth, and the Earth sticks to the Sun because of the force of gravity; that the Earth came into being about 4.5 billion years ago and the whole Universe about 13.8 billion years ago; that all living things are related to one another and descended from common ancestors; and so on and so forth. Scientific literacy is equated with knowing scientific facts and theories, in other words.
All those things can be very interesting, but they are largely irrelevant to the role that science plays in society. That role, however, is something that every citizen ought to understand, and most particularly those citizens who are in policy-influencing or policy-making positions. Scientific literacy ought to mean understanding what role science can properly play in the wider society.
Knowing scientific facts and theories need not be the same as believing them to be unquestionably true. In fact it shouldn’t be the same, yet all too often it seems to be. Being scientifically literate ought to mean having the tools to make rational decisions about the degree to which any given scientific fact or theory warrants belief — belief so strong as to warrant actions based on it. To that end, one needs to know something not so much about what science says but about how science is done. Here are some fundamental axioms of that sort of scientific literacy:

—>>  Science is produced by scientists. Therefore it is influenced by how scientists behave. Scientists are human beings: fallible, subject to conflicts of interest, and influenced or even constrained by their social and political environment. Science can therefore be reliable or unreliable, depending on circumstances.
Textbook examples of unreliable science are Lysenkoist biology in the Soviet Union and Deutsche Physik (Aryan, non-Jewish physics) in Nazi Germany. But conflicts of interest, dishonesty, systemic corruption (for example) can make science — what scientists produce — unreliable even in open, free, democratic societies.
—>> Science isn’t done by the so-called scientific method.
Many schools and many college-level courses in social science teach that science proceeds by posing hypotheses, testing them, and then rejecting them or keeping them as established theory. That “scientific method” seems like an entirely impersonal formula, capable of producing objective results; but even on its own terms, it takes little thought to realize that judgment must be exercised as to whether any given results support the hypothesis being tested. Preconceptions, conflicts of interest, and other quite personal matters enter into the forming of such judgments.
The chief trouble, though, is that almost no science is done that way. Very few scientists ever do anything like that; for an extended illustration and examples, see my Scientific Literacy and the Myth of the Scientific Method (University of Illinois Press 1992). In a few areas of physics, or in planning protocols for experiments requiring statistical analysis, something like that “scientific method” is used, but not in most of science. If science in general were done that way, then every budding scientist would be taught that “scientific method”. They are not. I had acquired a chemistry Ph.D. before I had even heard of that scientific method — which I did from a political scientist.
—>>   However, most people do believe that science is done by the scientific method. Since that procedure sounds like such an objective, impersonal, and reliable way of attaining knowledge, most people assume “science” to be trustworthy, not significantly different from true. Scientists enjoy high prestige, and their pronouncements are accorded the trustworthiness that “science” enjoys.
—>>   Science became regarded as trustworthy because of its perceived successes. It superseded religious authority with the triumph of Darwinism over Biblical creation: Darwin’s disciple, T. H. Huxley, preached sermons explicitly on behalf of The Church Scientific. By the end of the 19th century, science had become the touchstone of authentic knowledge (David Knight, The Age of Science: The Scientific World-View in the Nineteenth Century, Basil Blackwell, 1986).
To get a sense of just how powerfully persuasive scientific conclusions are nowadays, contemplate the different rhetorical impacts of “tests have shown” and “scientific tests have shown”; you might harbor doubts as to the first but surely not about the second.
—>>    Science does not and cannot deal in truth.
Science can describe how things are and how they behave. Why they do so is not observable, so any explanations are inferences, not facts. The misguided view that the scientific method delivers objective, impersonal, reliable knowledge obscures this.
The fact that any given theory can yield calculations that fit with what actually happens does not mean that the theory is true or that the entities it invokes really exist. We can calculate planetary motions with exquisite accuracy using the theory of gravity, even though we nowadays believe that gravity is not really a force: curvature of space-time produces the appearance of gravity. We can calculate many things very accurately about electrons and atoms and molecules with the “wave-function” equations of quantum mechanics, but one can hardly believe that wave functions are real things, or that the “wavicles” we call photons, electrons, protons, etc., are actual things that have the properties sometimes of waves and sometimes of particles.
No matter how useful a theory has been in the past, it cannot be guaranteed useful in the future. The data we accumulate by observation cannot guarantee that future data will not contradict them; the fact that all swans observed by Europeans had been white did not mean that all swans are white, as explorers of Western Australia discovered.
The misguided belief that science represents truth, which amounts to a religious-like faith in science, is called scientism.
—>>   Science is popularly thought to progress steadily. In reality, the history of science is one of trials and errors, periodic advances but also periodic discarding of earlier notions found later to be wanting.
Many people have heard of Thomas Kuhn’s description of scientific progress through scientific revolutions, which is readily interpreted as perpetual advance in step-wise fashion, revolutions as milestones of progress. What is not widely understood is that the revolutions — overturning of earlier views — are thereby also gravestones of a previous mainstream consensus.
Nor is any given scientific revolution necessarily permanent. Belief that light consists of particles or that it consists of waves alternated over the centuries, up to the latest view that it is something else, in a sense both or neither. That latest view may not be the last word.
—>>   Science as methodical and objective became readily confused with the even more misguided view that scientists behave methodically and objectively. Surveys have indicated that scientists are widely regarded as smarter, more intellectual, more capable of cold objectivity than non-scientists. First-hand acquaintance with scientists effectively disabuses one of such an opinion: religious views, conflicts of interest, cognitive dissonance (inability to recognize contradictory evidence) affect scientists as much as anyone else.
—>>   Science and thereby scientists are commonly thought to be always on the lookout for new discoveries, the more striking the better. The history of science, however, reveals that the most remarkable advances have almost always been strongly resisted when they were first claimed. The classic descriptions of this phenomenon are Bernard Barber, “Resistance by scientists to scientific discovery” (Science, 134 [1961] 596-602) and Gunther Stent, “Prematurity and uniqueness in scientific discovery” (Scientific American, December 1972, 84-93); more recent commentaries are in Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect (University of California Press, 2002).
Extensions of mainstream views, discoveries that do not require any modification of established theories, are surely welcomed; but the breakthroughs, the revolutionary claims, are almost always resisted or ignored, and become appreciated only by hindsight.

None of those points are controversial among those whose special interest is the nature of science, its history, and its role in society. But those specialists are a small and somewhat obscure breed, even within academe, and their specialty, STS, is still not widely known — see the ABOUT page of this blog.
A salient problem with contemporary national and international science policies is that the policy makers, the media, and the general public have not yet learned that it is scholars of STS who should be consulted regarding science policy and controversies about scientific issues. Current practice is to take the advice of the experts in the technical disciplines: the scientists, the physicians, the engineers. But this amounts to believing the contemporary mainstream consensus to be unquestionably true, and history has shown that this is not warranted; the consensus might even be fatally flawed.
The understanding that STS offers should mediate between the experts and the policy makers. As war is too important to be left to the decisions of the generals, so nowadays science and medicine are far too important to be left to the decisions of the scientists and the doctors.

Posted in resistance to discovery, science is not truth, science policy, scientific literacy, scientists are human, the scientific method | Tagged: , , , | 6 Comments »