Skepticism about science and medicine

In search of disinterested science

Archive for the ‘peer review’ Category

How to interpret statistics; especially about drug efficacy

Posted by Henry Bauer on 2017/06/06

How (not) to measure the efficacy of drugs pointed out that the most meaningful data about a drug are the number of people who need to be treated for one person to reap benefit (NNT) and the number who need to be treated for one person to be harmed (NNH).

But this pertinent, useful information is rarely disseminated, and most particularly not by drug companies. Most commonly cited are statistics about drug performance relative to other drugs or relative to placebo. Just how misleading this can be is described in easily understood form in this discussion of the use of anti-psychotic drugs.
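The NNT arithmetic is simple enough to sketch in a few lines of code (the numbers below are invented for illustration): a drug can honestly be advertised as halving risk relative to placebo while still needing to be given to a hundred people for one to benefit.

```python
# Hypothetical illustration: number needed to treat (NNT) is the
# reciprocal of the absolute risk reduction, not of the relative one.

def nnt(control_event_rate, treated_event_rate):
    """People who must be treated for one person to benefit."""
    absolute_risk_reduction = control_event_rate - treated_event_rate
    return 1.0 / absolute_risk_reduction

# Suppose 2% of untreated patients suffer the bad outcome versus 1% of
# treated patients.  Relative to placebo the risk is "halved" -- yet:
print(nnt(0.02, 0.01))  # -> 100.0: treat 100 people to help 1
```

The same trial can thus be reported as "50% risk reduction" or "NNT of 100"; only the second number tells a patient what the treatment is likely to do for them.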


That article (“Psychiatry defends its antipsychotics: a case study of institutional corruption” by Robert Whitaker) has many other points of interest. Most important, of course, is its potent demonstration that official psychiatric practice is not evidence-based; rather, its aim is to defend the profession’s current approach.


In these ways, psychiatry differs only in degree from the whole of modern medicine — see WHAT’S WRONG WITH PRESENT-DAY MEDICINE — and indeed from contemporary science on too many matters: Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland, 2012.

Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, scientific culture, unwarranted dogmatism in science | Tagged: , | Leave a Comment »

Climate-change orthodoxy: alternative facts, uncertainty equals certainty, projections are not predictions, and other absurdities of the “scientific consensus”

Posted by Henry Bauer on 2017/05/10

G. K. Chesterton once suggested that the best argument for accepting the Christian faith lies in the reasons offered by atheists and skeptics against doing so. That interesting slant sprang to mind as I was trying to summarize the reasons for not believing the “scientific consensus” that blames carbon dioxide for climate change.

Of course the very best reason for not believing that CO2 causes climate change is the data, as summarized in an earlier post:

–> Global temperatures have often been high while CO2 levels were low, and vice versa.

–> CO2 levels rise or fall after temperatures have risen or fallen.

–> Temperatures decreased between the 1940s and 1970s, and since about 1998 there has been a pause in warming, perhaps even cooling, while CO2 levels have risen steadily.

But disbelieving the official propaganda becomes much easier when one recognizes the sheer absurdities and illogicalities and self-contradictions committed unceasingly by defenders of the mainstream view.

1940s-1970s cooling
Mainstream official climate science is centered on models: computer programs that strive to simulate real-world phenomena. Any reasonably detailed description of such models soon reveals that there are far too many variables and interactions to make that feasible; and moreover that a host of assumptions are incorporated in all the models (1). In any case, the official models do not simulate the cooling trend of these three decades.
“Dr. James Hansen suspects the relatively sudden, massive output of aerosols from industries and power plants contributed to the global cooling trend from 1940-1970” (2).
But the models do not take aerosols into account; they are so flawed that they are unable to simulate a thirty-year period in which carbon emissions were increasing and temperatures decreasing. An obvious conclusion is that no forecast based on those models deserves to be given any credence.

One of the innumerable science-groupie web-sites expands on the aerosol speculation:
“40’s to 70’s cooling, CO2 rising?
This is a fascinating denialist argument. If CO2 is rising, as it was in the 40’s through the 70’s, why would there be cooling?
It’s important to understand that the climate has warmed and cooled naturally without human influence in the past. Natural cycle, or natural variability need to be understood if you wish to understand what modern climate forcing means. In other words modern or current forcing is caused by human industrial output to the atmosphere. This human-induced forcing is both positive (greenhouse gases) and negative (sulfates and aerosols).”

Fair enough; but the models fail to take account of natural cycles.

Rewriting history
The Soviet Union had an official encyclopedia that was revised as needed, for example by rewriting history to delete or insert people and events to correspond with a given day’s political correctness. Some climate-change enthusiasts also try to rewrite history: “There was no scientific consensus in the 1970s that the Earth was headed into an imminent ice age. Indeed, the possibility of anthropogenic warming dominated the peer-reviewed literature even then” (3). Compare that with a host of reproductions and citations of headlines from those cold times when media alarms were set off by what the “scientific consensus” indeed then was (4). And the cooling itself was, of course, real, as is universally acknowledged nowadays.

The media faithfully report what officialdom disseminates. Routinely, any “extreme” weather event is ascribed to climate change — anything worth featuring as “breaking news”, say tsunamis, hurricanes, bushfires in Australia and elsewhere. But the actual data reveal no increase in extreme events in recent decades: not Atlantic storms, nor Australian cyclones, nor US tornadoes, nor “global tropical cyclone accumulated energy”, nor extremely dry periods in the USA, in the last 150 years during which atmospheric carbon dioxide increased by 40% (pp. 46-51 in (1)). Nor have sea levels been rising in any unusual manner (Chapter 6 in (1)).

Defenders of climate-change dogma tie themselves in knots about whether carbon dioxide has already affected climate, whether its influence is to be seen in short-term changes or only over the long term. For instance, the attempt to explain 1940s-70s cooling presupposes that CO2 is only to be indicted for changes over much longer time-scales than mere decades. Perhaps the ultimate demonstration of wanting to have it both ways — only long-term, but also short-term — is illustrated by a pamphlet issued jointly by the Royal Society of London and the National Academy of Science of the USA (5, 6).

No warming since about 1998
Some official sources deny that there has been any cessation of warming in the new century or millennium. Others admit it indirectly by attempting to explain it away or dismiss it as irrelevant, for instance “slowdowns and accelerations in warming lasting a decade or more will continue to occur. However, long-term climate change over many decades will depend mainly on the total amount of CO2 and other greenhouse gases emitted as a result of human activities” (p. 2 in (5)); “shorter-term variations are mostly due to natural causes, and do not contradict our fundamental understanding that the long-term warming trend is primarily due to human-induced changes in the atmospheric levels of CO2 and other greenhouse gases” (p. 11 in (5)).

Obfuscating and misdirecting
The Met Office, the UK’s National Meteorological Service, is very deceptive about the recent lack of warming:

“Should climate models have predicted the pause?
Media coverage … of the launch of the 5th Assessment Report of the IPCC has again said that global warming is ‘unequivocal’ and that the pause in warming over the past 15 years is too short to reflect long-term trends.

[No one disputes the reality of long-term global warming — the issue is whether natural forces are responsible as opposed to human-generated carbon dioxide]

… some commentators have criticised climate models for not predicting the pause. …
We should not confuse climate prediction with climate change projection. Climate prediction is about saying what the state of the climate will be in the next few years, and it depends absolutely on knowing what the state of the climate is today. And that requires a vast number of high quality observations, of the atmosphere and especially of the ocean.
On the other hand, climate change projections are concerned with the long view; the impact of the large and powerful influences on our climate, such as greenhouse gases.

[Implying sneakily and without warrant that natural forces are not “large and powerful”. That is quite wrong and it is misdirection, the technique used by magicians to divert attention from what is really going on. By far the most powerful force affecting climate is the energy coming from the sun.]

Projections capture the role of these overwhelming influences on climate and its variability, rather than predict the current state of the variability itself.
The IPCC model simulations are projections and not predictions; in other words the models do not start from the state of the climate system today or even 10 years ago. There is no mileage in a story about models being ‘flawed’ because they did not predict the pause; it’s merely a misunderstanding of the science and the difference between a prediction and a projection.
[Misdirection again. The IPCC models failed to project or predict the lack of warming since 1998, and also the cooling of three decades after 1940. The point is that the models are inadequate, so neither predictions nor projections should be believed.]

… the deep ocean is likely a key player in the current pause, effectively ‘hiding’ heat from the surface. Climate model projections simulate such pauses, a few every hundred years lasting a decade or more; and they replicate the influence of the modes of natural climate variability, like the Pacific Decadal Oscillation (PDO) that we think is at the centre of the current pause.
[Here is perhaps the worst instance of misleading. The “Climate model projections” that are claimed to “simulate such pauses, a few every hundred years lasting a decade or more” are not made with the models that project alarming human-caused global warming, they are ad hoc models that explore the possible effects of variables not taken into account in the overall climate models.]”

The projections — which the media (as well as people familiar with the English language) fail to distinguish from predictions — that indict carbon dioxide as cause of climate change are based on models that do not incorporate possible effects of deep-ocean “hidden heat” or such natural cycles as the Pacific Decadal Oscillation. Those and other such factors as aerosols are considered only in trying to explain why the climate models are wrong, which is the crux of the matter. The climate models are wrong.

Asserting that uncertainty equals certainty
The popular media faithfully and uncritically disseminated, from the most recent official report, the claim that “Scientists are 95% certain that humans are responsible for the ‘unprecedented’ warming experienced by the Earth over the last few decades”.

Leave aside that the warming cannot be known to be “unprecedented” — global temperatures have been much higher in the past, and historical data are not fine-grained enough to compare rates of warming over such short time-spans as mere decades or centuries.

There is no such thing as “95% certainty”.
Certainty means 100%; anything else is a probability, not a certainty.
A probability of 95% may seem very impressive — until it is translated into its corollary: 5% probability of being wrong; and 5% is 1 in 20. I wouldn’t bet on anything that’s really important to me if there’s 1 chance in 20 of losing the bet.
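The corollary arithmetic is worth making explicit (a trivial sketch, using only the percentages already quoted in this post):

```python
# Converting a stated "certainty" into the chance of being wrong.

def one_chance_in(probability_right):
    """Express the residual error probability as '1 chance in N'."""
    return round(1.0 / (1.0 - probability_right))

print(one_chance_in(0.95))  # -> 20: "95% certain" = 1 chance in 20 of being wrong
print(one_chance_in(0.97))  # -> 33: even a "97% consensus" leaves 1 chance in 33
```

Framed as odds of being wrong rather than as a reassuring percentage, "95% certainty" sounds rather less like settled science.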
So too with the frequent mantra that 97% or 98% of scientists, or some other superficially impressive percentage, support the “consensus” that global warming is owing to carbon dioxide (7):


“Depending on exactly how you measure the expert consensus, it’s somewhere between 90% and 100% that agree humans are responsible for climate change, with most of our studies finding 97% consensus among publishing climate scientists.”

In other words, 3% (“on average”) of “publishing climate scientists” disagree. And the history of science teaches unequivocally that even a 100% scientific consensus has in the past been wrong, most notably on the most consequential matters, those that advanced science spectacularly in what are often called “scientific revolutions” (8).
Furthermore, “publishing climate scientists” tips the scales a great deal, because peer review ensures that dissenting evidence and claims do not easily get published. In any case, those percentages are based on surveys incorporating inevitable flaws (sampling bias, as with peer review, for instance). The central question is, “How convinced are you that most recent and near future climate change is, or will be, the result of anthropogenic causes?” On that, the “consensus” was only between 33% and 39%, showing that “the science is NOT settled” (9; emphasis in original).

Science groupies — unquestioning accepters of “the consensus”
The media and countless individuals treat the climate-change consensus dogma as Gospel Truth, leading to such extraordinary proposals as that by Professor of Law, Philippe Sands, QC, that “False claims from climate sceptics that humans are not responsible for global warming and that sea level is not rising should be scotched by an international court ruling”.

I would love to see any court take up the issue, which would allow us to make defenders of the orthodox view attempt to explain away all the data which demonstrate that global warming and climate change are not driven primarily by carbon dioxide.

The central point

Official alarms and established scientific institutions rely not on empirical data, established facts about temperature and CO2, but on computer models that are demonstrably wrong.

Those of us who believe that science should be empirical, that it should follow the data and change theories accordingly, become speechless in the face of climate-change dogma defended in the manner described above. It would be screamingly funny, if only those who do it were not our own “experts” and official representatives (10). Even the Gods are helpless in the face of such determined ignoring of reality (11).


(1)    For example, chapter 10 in Howard Thomas Brady, Mirrors and Mazes, 2016; ISBN 978-1522814689. For a more general argument that models are incapable of accurately simulating complex natural processes, see O. H. Pilkey & L. Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future, Columbia University Press, 2007
(2)    “40’s to 70’s cooling, CO2 rising?”
(3)    Thomas C. Peterson, William M. Connolley & John Fleck, “The myth of the 1970s global cooling scientific consensus”, Bulletin of the American Meteorological Society, September 2008, 1325-37
(4)    “History rewritten, Global Cooling from 1940 – 1970, an 83% consensus, 285 papers being ‘erased’”; 1970s Global Cooling Scare; 1970s Global Cooling Alarmism
(5)    Climate Change: Evidence & Causes — An Overview from the Royal Society and the U.S. National Academy of Sciences, National Academies Press; ISBN 978-0-309-30199-2
(6)    Relevant bits of (5) are cited in a review: Henry H. Bauer, “Climate-change science or climate-change propaganda?”, Journal of Scientific Exploration, 29 (2015) 621-36
(7)    The 97% consensus on global warming
(8)    Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970; Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596–602; Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84-93; Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002
(9)    Dennis Bray, “The scientific consensus of climate change revisited”, Environmental Science & Policy, 13 (2010) 340 –50; see also “The myth of the Climate Change ‘97%’”, Wall Street Journal, 27 May 2014, p. A.13, by Joseph Bast & Roy Spencer
(10) My mother’s frequent repetitions engraved in my mind the German folk-saying, “Wenn der Narr nicht mein wär’, lacht’ ich mit”. Google found it in the Deutsches sprichwörter-lexikon edited by Karl Friedrich Wilhelm Wander (#997, p. 922)
(11)  “Mit der Dummheit kämpfen Götter selbst vergebens”; Friedrich Schiller, Die Jungfrau von Orleans.


Posted in consensus, denialism, global warming, media flaws, peer review, resistance to discovery, science is not truth, science policy, scientism, unwarranted dogmatism in science | Tagged: , , | 6 Comments »

The banality of evil — Psychiatry and ADHD

Posted by Henry Bauer on 2017/04/25

“The banality of evil” is a phrase coined by Hannah Arendt when writing about the trial of Adolf Eichmann who had supervised much of the Holocaust. The phrase has been much misinterpreted and misunderstood. Arendt was pointing to the banality of Eichmann, who “had no motives at all” other than “an extraordinary diligence in looking out for his personal advancement”; he “never realized what he was doing … sheer thoughtlessness … [which] can wreak more havoc than all the evil instincts” (1). There was nothing interesting about Eichmann. Applying Wolfgang Pauli’s phrase, Eichmann was “not even wrong”: one can learn nothing from him other than that evil can result from banality, from thoughtlessness. As Edmund Burke put it, “The only thing necessary for the triumph of evil is for good men to do nothing” — and not thinking is a way of doing nothing.

That train of thought becomes quite uncomfortable with the realization that sheer thoughtlessness nowadays pervades so much of the everyday practices of science, medicine, psychiatry. Research simply — thoughtlessly — accepts contemporary theory as true, and pundits, practitioners, teachers, policy makers all accept the results of research without stopping to think about fundamental issues, about whether the pertinent contemporary theories or paradigms make sense.

Psychiatrists, for example, prescribe Ritalin and other stimulants as treatment for ADHD — Attention-Deficit/Hyperactivity Disorder — without stopping to think about whether ADHD is even “a thing” that can be defined and diagnosed unambiguously (or even at all).

The official manual, which one presumes psychiatrists and psychologists consult when assigning diagnoses, is the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, now (since 2013) in its 5th edition (DSM-5). DSM-5 has been quite widely criticized, including by such prominent psychiatrists as Allen Frances who led the task force for the previous, fourth, edition (2).

Even casual acquaintance with the contents of this supposedly authoritative DSM-5 makes it obvious that criticism is more than called for. In DSM-5, the Diagnostic Criteria for ADHD are set down in five sections, A-E.

A: “A persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development, as characterized by (1) and/or (2):
     1.   Inattention: Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
           Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.     Often fails to give close attention to details or makes careless mistakes in schoolwork, at work, or during other activities (e.g., overlooks or misses details, work is inaccurate)
b.     Often has difficulty sustaining attention in tasks or play activities (e.g., has difficulty remaining focused during lectures, conversations, or lengthy reading).”
and so on through c-i, for a total of nine asserted characteristics of inattention.

Paying even cursory attention to these “criteria” makes plain that they are anything but definitive. Why, for example, are six symptoms required up to age 16 when five are sufficient at 17 years and older? There is nothing clear-cut about “inconsistent with developmental level”, which depends on personal judgment about both the consistency and the level of development. Different people, even different psychiatrists no matter how trained, are likely to judge inconsistently in any given case whether the attention paid (point “a”) is “close” or not. So too with “careless”, “often”, “difficulty”; and so on.

It is, if anything, even worse with Criteria A(2):

“2.    Hyperactivity and Impulsivity:
Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
       Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.    Often fidgets with or taps hands or feet or squirms in seat.”
and so on through b-i, for again a total of nine supposed characteristics, this time of hyperactivity and impulsivity. There is no need to cite any of those since “a” amply reveals the absurdity of designating as the symptom of a mental disorder a type of behavior that is perfectly normal for the majority of young boys. This “criterion” makes self-explanatory the reported finding that boys are three times more likely than girls to be diagnosed with ADHD, though experts make heavier weather of it by suggesting that sex hormones may be among the unknown causes of ADHD (3).

A(1) and (2) are followed by
“B. Several inattentive or hyperactivity-impulsivity symptoms were present prior to age 12 years.
C. Several inattentive or hyperactivity-impulsivity symptoms are present in two or more settings (e.g., at home, school, or work; with friends or relatives; in other activities).
D. There is clear evidence that the symptoms interfere with, or reduce the quality of, social, academic, or occupational functioning.
E. The symptoms do not occur exclusively during the course of schizophrenia or another psychotic disorder and are not better explained by another mental disorder (e.g., mood disorder, anxiety disorder, dissociative disorder, personality disorder, substance intoxication or withdrawal).”

It should be plain enough that this set of so-called criteria is not based on any definitive empirical data, as a simple thought experiment shows: What clinical (or any other sort of) trial could establish by observation that six symptoms are diagnostic up to age 16 whereas five can be decisive from age 17 on? What if the decisive symptoms were apparent for only five months rather than six; or for five-and-three-quarters months? How remarkable, too, that “inattention” and “hyperactivity and impulsivity” are each characterized by exactly nine possible symptoms.

Leaving aside the deplorable thoughtlessness of the substantive content of DSM-5, it is also saddening that something published by an authoritative medical society should reflect such carelessness or thoughtlessness in presentation. Competent copy-editing would have helped, for example by eliminating the many instances of “and/or”: “this ungraceful phrase … has no right to intrude in ordinary prose” (4) since just “or” would do nicely; if, for instance, I tell you that I’ll be happy with A or with B, obviously I’ll be perfectly happy also if I get both.
Good writing and proper syntax are not mere niceties; their absence indicates a lack of clear substantive thought in what is being written about, as Richard Mitchell (“The Underground Grammarian”) liked to illustrate by quoting Ben Jonson: “Neither can his Mind be thought to be in Tune, whose words do jarre; nor his reason in frame, whose sentence is preposterous”.

At any rate, ADHD is obviously an invented condition that has no clearly measurable characteristics. Assigning that diagnosis to any given individual is an entirely subjective, personal judgment. That this has been done for some large number of individuals strikes me as an illustration of the banality of evil. Countless parents have been told that their children have a mental illness when they are behaving just as children naturally do. Countless children have been fed mind-altering drugs as a consequence of such a diagnosis. Some number have been sent to special schools like Eagle Hill, where annual tuition and fees can add up to $80,000 or more.

Websites claim to give information that is patently unfounded or wrong, for example:

“Researchers still don’t know the exact cause, but they do know that genes, differences in brain development and some outside factors like prenatal exposure to smoking might play a role. … Researchers looking into the role of genetics in ADHD say it can run in families. If your biological child has ADHD, there’s a one in four chance you have ADHD too, whether it’s been diagnosed or not. … Some external factors affecting brain development have also been linked to ADHD. Prenatal exposure to smoke may increase your child’s risk of developing ADHD. Exposure to high levels of lead as a toddler and preschooler is another possible contributor. … . It’s a brain-based biological condition”.

Those who establish such websites simply follow thoughtlessly, banally, what the professional literature says; and some number of academics strive assiduously to ensure the persistence of this misguided parent-scaring and children-harming. For example, by claiming that certain portions of the brains of ADHD individuals are characteristically smaller:

“Subcortical brain volume differences in participants with attention deficit hyperactivity disorder in children and adults: a cross-sectional mega-analysis” by Martine Hoogman et al., published in Lancet Psychiatry (2017, vol. 4, pp. 310–19). The “et al.” stands for 81 co-authors, 11 of whom declared conflicts of interest with pharmaceutical companies. The conclusions are stated dogmatically: “The data from our highly powered analysis confirm that patients with ADHD do have altered brains and therefore that ADHD is a disorder of the brain. This message is clear for clinicians to convey to parents and patients, which can help to reduce the stigma that ADHD is just a label for difficult children and caused by incompetent parenting. We hope this work will contribute to a better understanding of ADHD in the general public”.

An extensive, detailed critique of this article has been submitted to the journal as a basis for retracting it: “Lancet Psychiatry Needs to Retract the ADHD-Enigma Study” by Michael Corrigan & Robert Whitaker. The critique points to a large number of failings in methodology, including that the data were accumulated from a variety of other studies with no evidence that diagnoses of ADHD were consistent or that controls were properly chosen or available — which ought in itself to have been sufficient reason to refuse publication.

Perhaps worst of all: Nowhere in the article is IQ mentioned; yet the Supplementary Material contains a table revealing that the “ADHD” subjects had on average higher IQ scores than the “normal” controls. “Now the usual assumption is that ADHD children, suffering from a ‘brain disorder,’ are less able to concentrate and focus in school, and thus are cognitively impaired in some way. …. But if the mean IQ score of the ADHD cohort is higher than the mean score for the controls, doesn’t this basic assumption need to be reassessed? If the participants with ADHD have smaller brains that are riddled with ‘altered structures,’ then how come they are just as smart as, or even smarter than, the participants in the control group?”

[The Hoogman et al. article in many places refers to “(appendix)” for details, but the article — which costs $31.50 — does not include an appendix; one must get it separately from the author or the journal.]

As usual, the popular media simply parroted the study’s claims, as illustrated by headlines cited in the critique.

And so the thoughtless acceptance by the media of anything published in an established, peer-reviewed journal contributes to making this particular evil a banality. The public, including parents of children, are further confirmed in the misguided, unproven, notion that something is wrong with the brains of children who have been designated with a diagnosis that is no more than a highly subjective opinion.

The deficiencies of this article also illustrate why those of us who have published in peer-reviewed journals know how absurd it is to regard “peer review” as any sort of guarantee of quality, or even of minimal standards of competence and honesty. As Richard Horton, himself editor of The Lancet, has noted, “Peer review . . . is simply a way to collect opinions from experts in the field. Peer review tells us about the acceptability, not the credibility, of a new finding” (5).

The critique of the Hoogman article is just one of the valuable pieces at the Mad in America website. I also recommend highly Robert Whitaker’s books, Anatomy of an Epidemic and Mad in America.

(1)    Hannah Arendt, Eichmann in Jerusalem — A Report on the Banality of Evil, Viking Press, 1964 (rev. & enlarged ed.); quotes are at p. 134 of the PDF available at
(2)    Henry H. Bauer, “The Troubles With Psychiatry — essay review of Saving Normal by Allen Frances and The Book Of Woe by Gary Greenberg”, Journal of Scientific Exploration, 29 (2015) 124-30
(3)   Donald W. Pfaff, Man and Woman: An Inside Story, Oxford University Press, 2010: p. 147
(4)    Wilson Follett, Modern American Usage (edited & completed by Jacques Barzun et al.), Hill & Wang, 1966
(5)    Richard Horton, Health Wars: On the Global Front Lines of Modern Medicine, New York Review Books, 2003, p. 306


Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, science is not truth, unwarranted dogmatism in science | Tagged: , , , , | Leave a Comment »

We are being routinely misled about health and diet

Posted by Henry Bauer on 2017/03/24

Most of what the media make a fuss about over health or diet should not be believed.

It should not be believed even when it cites peer-reviewed articles or official guidelines. All too often the claims made rest on a misuse of statistics and an abuse of common sense.

That little rant was set off by a piece in the august New York Times: “Pollution leads to greater risk of dementia among older women, study says”.

Alarms were triggered:
“Older women”: Only among older and not younger? Women but not men?

The original article did not improve my mood:
The pollution actually studied was “fine particulate matter, PM2.5, 2.5 micrometers or smaller in diameter”: What about 2.5 to 3, say? Or 3 to 4? And so on.
“Women with the genetic variant APOE4, which increases the risk of Alzheimer’s disease, were more likely to be affected by high levels of air pollution”:
Is this asserting that there’s synergy? That the combined effect is not just the added effects of the two factors? That pollution is not just an independent risk factor but somehow is more effective with APOE4 carriers? So what about APOE3 or APOE2 carriers?

The New York Times piece mentioned some other studies as well:
“[P]renatal exposure to air pollution could result in children with greater anxiety, depression and attention-span disorders”.
“[A]ir pollution caused more than 5.5 million premature deaths in 2013”.

With those sorts of assertions, my mind asks, “How on earth could that be known?”
What sort of study could possibly show that? What sort of data, and how much of it, would be required to justify those claims?

So, with the older women and dementia, how were the observational or experimental subjects (those exposed to the pollution) distinguished from the necessary controls that were not exposed to pollution? Controls need to be just like the experimental subjects (in age, state of health, economic circumstances, etc.) with the sole exception that the latter were exposed to pollution and the controls were not.
For the controls not to be exposed to the pollution, obviously the two groups must be geographically separate. Then what other possibly pertinent factors differed between those geographic regions? How was each of those factors controlled for?

In other words, what’s involved is not some “simple” comparison of polluted and not polluted; there is a whole set of possibly influential factors that need somehow to be controlled for.

The more factors, the larger the needed number of experimental subjects and controls; and the required number of data points increases much more than linearly with the number of variables. Even just that realization should stimulate much skepticism about many of the media-hyped stories about diet or health. Still more skepticism is called for when the claim has to do with lifestyle, since the data then depend on how the subjects recall and describe how they have behaved.
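To make that combinatorial point concrete, here is a minimal sketch; the factor counts and the 30-subjects-per-cell floor are purely illustrative assumptions, not figures from any study. Even coarsely binned confounders multiply the number of subgroup “cells” that each need adequate representation:

```python
# Hypothetical illustration: every added factor multiplies the number of
# subgroup cells, and each cell needs some minimum number of subjects.

def subjects_needed(levels_per_factor, per_cell=30):
    """Rough lower bound on study size: every combination of factor levels
    needs at least `per_cell` subjects for a stable comparison."""
    cells = 1
    for levels in levels_per_factor:
        cells *= levels
    return cells, cells * per_cell

# age band, sex, region, income, smoking status: 4 x 2 x 4 x 3 x 2 = 192 cells
cells, n = subjects_needed([4, 2, 4, 3, 2])
print(cells, n)  # 192 cells -> at least 5760 subjects
```

Five modestly binned factors already demand thousands of subjects; add lifestyle variables and the requirement balloons further.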

The dementia article was published in Translational Psychiatry, an open-access journal from the Nature publishing group. The study had enrolled 3647 women aged between 65 and 79. That is clearly too small a number for all possibly relevant factors to have been controlled for. Many details make that more than a suspicion, for example, “Women in the highest PM2.5 quartile (14.34–22.55 μg m⁻³) were older (aged ≥75 years); more likely to reside in the South/Midwest and use hormonal treatment; but engage less in physical activities and consume less alcohol, relative to counterparts (all P-values <0.05. . . )” — in other words, the highest exposure to pollution was experienced by subjects who differed from controls and from other subjects in several ways besides pollution exposure.

At about the same time as the media were hyping the dementia study, there was also “breaking news” about how eating enough fruit and vegetables protects against death and disease, based on the peer-reviewed article “Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality — a systematic review and dose-response meta-analysis of prospective studies”.

Meta-analysis means combining different studies, the assumption being that the larger amount of primary data can make conclusions stronger and firmer. However, that requires that each of the individual studies being drawn on is sound and that the subjects and circumstances are reasonably comparable in all the different studies. In this case, 95 studies reported in 142 publications were analyzed. Innumerable factors need to be considered — the specific fruit or vegetable (one cannot presume that apples and pears have the same effect, nor cauliflower and carrots); and the effects of different amounts of what is eaten must somehow be taken into account. There are innumerable variables, in other words, permitting considerable skepticism about the claims that “An estimated 5.6 and 7.8 million premature deaths worldwide in 2013 may be attributable to a fruit and vegetable intake below 500 and 800 g/day, respectively, if the observed associations are causal” and that “Fruit and vegetable intakes were associated with reduced risk of cardiovascular disease, cancer and all-cause mortality. These results support public health recommendations to increase fruit and vegetable intake for the prevention of cardiovascular disease, cancer, and premature mortality.” Skepticism is yet more called for since health and mortality are influenced to a great extent by genetics and geography, which were not controlled for.
The authors deserve credit, though, for the clause, “if the observed associations are causal”. What everyone should know about statistics is that correlations, associations, never prove causation. That law is almost universally ignored as the media disseminate press releases and other spin from researchers and their institutions, implying that associations are meaningful about what causes what.

It is easy enough to understand why considerable skepticism should be exercised with claims like those about mortality and diet or about dementia and pollution, simply because studies to test these claims properly would need to include much larger numbers of subjects. But an even greater reason to doubt such claims, as well as claims about newly approved drugs and treatments, is that the statistical analyses commonly used are inherently flawed, most particularly by a quite inadequate criterion for statistical significance.

Almost universally in social science and in medical science, statistical significance is defined as p≤0.05: the probability that results at least as striking would turn up through random chance alone, if there were no real effect, is less than 5%, in other words less than 1 in 20.

Several things are wrong with that. Among the most serious are:

  1. That something is not a coincidence, not owing to random chance, does not tell us what it is owing to, what the cause is. It is not necessarily the experimenter’s hypothesis, yet that is the assumption made universally with this type of statistical analysis.
  2. 1 in 20 is a very weak criterion. It means that, when there is no real effect, 1 in every 20 tests will nonetheless appear “statistically significant” purely by chance. Do 20 such studies, and on average one of them will be “statistically significant” even though it is wrong.
  3. That something is statistically significant does not mean that the effect is meaningful.
    For example, after I had a TIA (transient ischemic attack, minor stroke), the neurologist automatically prescribed the “blood thinner” Plavix, clopidogrel, as lessening the risk of further strokes. I am wary of all drugs since they all have “side” effects, so later I searched the literature and found that Plavix is statistically significantly better at decreasing risk than is aspirin, p = 0.043, better than p≤0.05. However, the event rates found were just 5.83% (aspirin) compared to 5.32% (clopidogrel); to my mind, not at all a significant difference, and not enough to compensate for the greater risk of “side” effects from clopidogrel than from aspirin, which has been in use for far longer by far more people without discovery of seriously dangerous “side” effects. (Chemicals don’t have two types of effect, main and side, those we want and those we don’t want. “Side” effects are just as real as the intended effects.)
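Those clopidogrel-versus-aspirin figures translate directly into the NNT measure discussed earlier. A minimal sketch of the arithmetic, using the event rates as quoted in the text and assuming they are comparable absolute rates:

```python
# NNT arithmetic for the Plavix example: the reciprocal of the absolute
# risk reduction gives the number of people who must be treated for one
# person to benefit.

def number_needed_to_treat(event_rate_control, event_rate_treated):
    arr = event_rate_control - event_rate_treated  # absolute risk reduction
    return 1.0 / arr

nnt = number_needed_to_treat(0.0583, 0.0532)  # aspirin vs clopidogrel
print(round(nnt))  # ~196
```

In other words, roughly 196 people would have to take clopidogrel instead of aspirin for one additional person to avoid an event — a far more sobering picture than “statistically significantly better, p = 0.043”.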

Many statisticians have pointed out for many years what is wrong with the p-value approach to statistics and its use in social science and in medical science. More than two decades ago, an editorial in the British Medical Journal pointed to “The scandal of poor medical research” [i] with incompetent statistical analysis one of the prime culprits. Matthews [ii] has explained clearly point 1 above. Colquhoun [iii] explains that p ≤ 0.05 makes for wrong conclusions even more often than 1 in 20 times: “If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time”. Gigerenzer [iv] has set out in clear detail the troubles with the commonly used p-value analysis.
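Colquhoun’s figure can be reproduced with a back-of-envelope false-discovery calculation. The sketch below uses illustrative assumptions (10% of tested hypotheses reflect a real effect, 80% statistical power); the exact percentage depends on those inputs, which is precisely the point:

```python
# Illustration of why p = 0.05 yields far more than 5% wrong "discoveries":
# the false discovery rate depends on how often a real effect is present.

def false_discovery_rate(prior_real=0.1, power=0.8, alpha=0.05):
    false_pos = alpha * (1 - prior_real)  # no real effect, yet p < alpha
    true_pos = power * prior_real         # real effect, correctly detected
    return false_pos / (false_pos + true_pos)

print(round(false_discovery_rate(), 2))  # 0.36
```

Under these assumptions about 36% of “statistically significant” findings are false, not 5% — in line with Colquhoun’s “wrong at least 30% of the time”.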
Nevertheless, this misleading approach continues to be routine, standard, because it is so simple that many researchers who have no real understanding of statistics can use it. Among the consequences is that most published research findings are false [v] and that newly approved drugs have had to be withdrawn sooner and sooner after their initial approval [vi].
Slowly the situation improves as systemic inertia is penetrated by a few initiatives. A newly appointed editor of the journal Basic and Applied Social Psychology (BASP) announced that p-value analyses would no longer be required [vii], and soon after that they were actually banned [viii].

In the meantime, however, tangible damage is being done by continued use of the p-value approach in the testing and approval of prescription drugs, which adds to a variety of deceptive practices routinely employed by the pharmaceutical industry in clinical trials, see for example Ben Goldacre, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (Faber & Faber, 2013); Peter C. Gøtzsche, Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare (Radcliffe, 2013); David Healy, Pharmageddon (University of California Press, 2012). Gøtzsche and Healy report that prescription drugs, even though “properly” used, are the 3rd or 4th leading cause of death in developed countries.


[i] D. G. Altman, “The scandal of poor medical research”, BMJ, 308 [1994] 283

[ii] Matthews, R. A. J. 1998. “Facts versus Factions: The use and abuse of subjectivity in scientific research.” European Science and Environment Forum Working Paper; pp. 247-82 in J. Morris (ed.), Rethinking Risk and the Precautionary Principle, Oxford: Butterworth (2000).

[iii] David Colquhoun, “An investigation of the false discovery rate and the misinterpretation of p-values”, Royal Society Open Science, 1 (2014) 140216

[iv] Gerd Gigerenzer, “Mindless statistics”, Journal of Socio-Economics, 33 [2004] 587-606

[v] John P. A. Ioannidis, “Why most published research findings are false”, PLoS Medicine, 2 [#8, 2005] 696-701; e124

[vi] Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012, Table 5 (p. 240) and text pp. 238-42

[vii] David Trafimow, Editorial, Basic and Applied Social Psychology, 36 (2014) 1-2

[viii] David Trafimow & Michael Marks, Editorial, BASP, 37 [2015] 1-2; see also comments by the Royal Statistical Society

Posted in media flaws, medical practices, peer review, prescription drugs, unwarranted dogmatism in science | Tagged: , , , , | Leave a Comment »

Money has corrupted science, including some individual scientists

Posted by Henry Bauer on 2017/03/11

Some years ago, I had blogged about “The business of for-profit ‘science’”, pointing out that “A number of trends, in society as a whole as well as in science and medicine, have led to the present dysfunctional state of affairs. It is not the result of conspiracies or overt evil-doing . . .”.

Systemic change means that just “doing what everyone does” results in bad things for the public as a whole. An obvious illustration at the moment is that politics has become so pervaded by “spin” that truth has essentially disappeared from what politicians and their spokespeople say, with consequences that everyone should fear.

But that “normal” behavior has become dysfunctional does not entail that there is not also deliberate additional mischief being done, and things that seem so out of order that they ought to be criminally prosecutable.

One aspect of present dysfunctionality in scientific activities is the proliferation of what has been aptly described as predatory publishing: on-line outlets that seem on their face to be scientific journals but whose entire raison d’être is to make money for the publishers from the fees paid by authors. The steadily updated list of apparently predatory publishers and journals inaugurated by Jeffrey Beall was no longer on-line as of some time between 12 and 18 January 2017, but the Wayback Machine makes an earlier version available.

Admittedly, every active, publishing researcher knows that peer review and editorial judgments are far from infallibly expert and impartial, but the predatory journals have no quality control at all, illustrated by the acceptance of entirely fake articles, for instance in Open Information Science published by Bentham Science (Jessica Shepherd, “Editor quits after journal accepts bogus science article”, 18 June 2009); the editor of another Bentham journal, Open Chemical Physics, resigned after an article she had never seen was published, a piece that alleged the presence of “nanothermite” particles in the dust from the Twin Towers terrorist attacks of 11 September 2001 (Thomas Hoffmann, “Chefredaktør skrider efter kontroversiel artikel om 9/11” [Editor-in-chief walks out after controversial article about 9/11], 28 April 2009; Denis G. Rancourt, “Editor in Chief resigned over the Harrit et al. nanothermite paper”, 11 November 2010).

Beall had listed more than 1100 publishers, some of which publish hundreds of “journals” where “article processing charges” run from a few hundred dollars upwards to more than $1000. Any honest researcher with results of any importance seeks publication in a long-established and respected journal, so all this “publication” by the predators is sheer waste, much of it money that had been awarded to scientists as research grants. Bentham Science, perhaps iconic of the more prominent predators, lists well over 100 journals. In 2013, Science published the report of a sting operation in which fake manuscripts with obvious flaws were sent to a number of open-access journals; more than half the fake articles were accepted for publication (John Bohannon, “Who’s Afraid of Peer Review? A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals”, Science 342 [2013] 60-5).

Of course not all mainstream print journals manage always to detect even obvious deficiencies, but predatory journals leave other clues, for example, that they continually solicit people for submissions and to serve as editors and on editorial boards (e.g. D. H. Kaye, “Flaky academic journals”, 21 December 2016; Gunther Eysenbach, “Black sheep among Open Access Journals and Publishers”).

Legitimate journals employ copyeditors, but the predators do not. Recently I benefited from e-mails that revealed yet further deceitful money grubbing. Bentham Science journals suggest that authors get (and pay for) copy-editing and language improvement services offered by Eureka Science — whose staff happens to be the same people who also run Bentham Science. The “two” companies also pretend to be separate entities in the arranging of conferences, for example the International Conference on Drug Discovery and Therapy (six since 2008).

Conferences can be real money-makers. For the 2017 International Conference on Drug Discovery and Therapy, registration fees range from about $500 for mere attendees to ~$1000 (academic) or ~$1600 (corporate) for speakers (approximate because fees vary a bit according to when they are paid). Invited speakers pay the same fees as non-invited, which strikes me as odd. When I’m invited to speak I’m offered expenses, even an honorarium; but then I haven’t been active in mainstream science research for quite some time. The Conference organizers do offer free travel and accommodation to a few eminent people, say Nobel Prize winners, since having those attend lends apparent legitimacy to the proceedings. These meetings can be lucrative indeed for the organizers: the 2015 International Conference on Drug Discovery and Therapy listed more than 360 registrants.

The identity of Bentham Science and Eureka Science was revealed to me by Fiona Hayden, self-described as a researcher in the field of corporate ethics with a special interest in the STM publishing industry. She discovered that
•  Bentham Science hides its identity and location.
•  It organizes conferences but tells potential audiences that it is just a media partner and that the organizer is a different company.
•  It asks authors to pay for grammar and English editing by its own company under the different name Eureka Science.
•  It does not allow its employees to disclose on their social media accounts that they work for Bentham Science.
•  It puts people who expose them on a black list.

The version of the black list Hayden sent me had about 30 names. The criterion for inclusion seems to be anyone who might be a whistleblower about improper happenings: one person on the list whom I had known reasonably well was an activist for integrity of academic ideals; another has been one of the most prominent advocates of respectable high-quality open-access publishing.

At one of the “Eureka” conferences, several of the staff had identified themselves as Bentham employees to Hayden and her colleagues, who also identified by name and e-mail address several individuals active in “both” companies, which are registered in Karachi as Information Technology Services (ITS). Among the registrants at the 2015 Conference on Drug Discovery and Therapy, about 15 were Bentham employees listed as ITS or Eureka.

ITS, Bentham Science, & Eureka Science are one and the same, owned by retired Professor Atta-ur-Rehman, who is always president or vice president of Eureka conferences (Fiona Hayden e-mail, 2 March 2017). While serving as Chairman of the Higher Education Commission of Pakistan, Atta-ur-Rehman had been warned about the publishing of fake journals in Pakistan (Q. Isa Daudpota [professor at Pakistan’s Air University], “Scourge of fake journals”, 30 November 2011).

I had posted recently about The Scourge of Wikipedia; Wiki’s unreliability is illustrated by its Google summary for Bentham Science, which makes it appear a perfectly respectable mainstream outfit rather than what it actually is.

Fiona Hayden also supplied links to articles by a range of authors deploring predatory publishing and other sad aspects of contemporary science.

*                     *                   *                   *                   *                   *                   *                   *

Predatory publishing exists because of how the whole enterprise of science has been corrupted by outside interests and the overweening pursuit of financial profit. I deplore what Bentham/Eureka/ITS does, though the conferences are evidently found useful, given that they attract so many attendees. Meeting fresh faces from distant places can be a rewarding experience, as I found at a couple of the Conferences on the Unity of the Sciences, even though they were organized by the Unification Church, many of whose other activities I deplore.

The degree to which “normal” mainstream science has succumbed to financial corruption may be illustrated by the Institute of Global Environment and Society, established by a professor at George Mason University. It has cashed in on the hysteria over climate change by garnering “82 federal grants and 3 contracts from 5 agencies totaling $26,222,420 from Fiscal Year 2008 to FY 2016” and spending most of it on salaries:

“IGES 2014 Income: $3,846,141 including $3,832,383 federal contributions; 2013 income $4,186,639 including $4,174,658 federal contributions; IGES spent $3,296,720 on salaries in 2014; $3,194,792 on salaries in 2013”. Principals of IGES moreover had the gall to urge criminal action against “global warming deniers” — Political correctness in science, 2017/03/06.

Not that long-established scientific publishers abstain from money grubbing: they too profit exorbitantly from open-access publishing designed to extract more money from authors and their patrons. Nature publishes more than 30 open-access on-line journals as well as 42 journals with “hybrid open access”, with per-article fees between $1350 and $5200 for different journals; Elsevier charges fees ranging between $500 and $5000, depending on the journal, for “open access” publishing.

It may be that predatory publishing will inevitably continue so long as science continues to be characterized by cutthroat competitiveness and judgments made by quantity of research grants and of publications.

There may be an analogy with drug trafficking or prostitution: so long as the demand exists, entrepreneurs will find profitable ways to satisfy the demand. So long as scientific careers call for long lists of publications, sleazy publishers will continue to exist.


Posted in fraud in science, funding research, peer review, science is not truth, scientific culture, scientists are human | Tagged: , , , , , , , , , , , , , , , | Leave a Comment »

The scourge of Wikipedia

Posted by Henry Bauer on 2017/03/07

Searching my files, I see that Wikipedia has featured quite often on my blogs; the article titles illustrate some of the stimuli:

Knowledge, understanding — but then there’s Wikipedia; The Wiles of WikiHealth, Wikipedia, and Common Sense; Facebook: As bad as Wikipedia, or worse?; Lowest common denominator — Wikipedia and its ilk; The unqualified (= without qualifications) gurus of Wikipedia; Another horror story about Wikipedia; The Fairy-Tale Cult of Wikipedia; Beware the Internet: “reviews”, Wikipedia, and other sources of misinformation.

Four decades ago, as the Internet was coming into general use, the anticipated benefits and drawbacks were being discussed quite assiduously, at least in academe. Enthusiasts pointed to the advantages of low-cost, rapid publication of research; skeptics wondered what would happen to peer review and quality control. But I am not aware of any voices that foresaw just how abominable things would become as the cost of blathering on-line is virtually zero and there is no control of quality, no fact-checking, no ethical standards, and pervasive anonymity. No one seems to have foreseen the spate of predatory publishing of purportedly scientific research.

It has always been almost impossible to undo the consequences of lies, as too many people believe that the presence of smoke always proves the presence of fire; now, in the Internet age, it has become totally impossible to eradicate the influence of lies because of the speed with which they spread. I have too many friends who pass along stuff that strikes me immediately as unlikely to be true, and that snopes reveals to be untrue, yet this stuff keeps coming no matter how often I ask my friends to check snopes first.

I don’t use Twitter, Snapchat, or any other social media, though I am formally listed on LinkedIn and Facebook because I didn’t want to offend friends who asked me to join. Having tried Facebook and found it nothing but time-wasting obsession with trivia, I tried to disconnect from it. It wasn’t straightforward, but eventually I seemed to have succeeded, as a screen assured me that I had closed my account. But the next statement undercut that: I was assured that any time I wanted back in, I could log on with my old password and would find all my material still there. When Facebook boasts of its huge membership, I wonder how many of those counted belong to my group: people who don’t use it at all and tried to get off.

At any rate, I recognize purely as an outsider how the damage done on the Internet is abetted and exacerbated by Twitter, with its encouragement of thought-bites that shorten attention spans even more, or by something like Snapchat, where evidence disappears as soon as the alternative fake news has been disseminated. The contemporary political hullabaloo about fake news and alternative facts brings home that a sadly significant portion of the population exercises no skepticism or critical thought when statements are emotionally congenial.

All this is whistling in the wind, so I was pleased to find a large-circulation British newspaper laying out the faults of Wikipedia in considerable detail: “The making of a Wiki-Lie: Chilling story of one twisted oddball and a handful of anonymous activists who appointed themselves as censors to promote their own warped agenda on a website that’s a byword for inaccuracy”.

Admittedly, the Daily Mail is no TIMES, and some of its content competes with tabloids and the ilk of National Enquirer; and its ire was aroused not by the intellectual damage done by Wikipedia but by a smear that labeled the Daily Mail as an unreliable source — shades of pots and kettles.

The Daily Mail story, credited to Guy Adams, deserves wide dissemination for its valuable analysis that includes detailed biographical information about someone who might well be iconic of trouble-making trolls on the Internet; and for its exposure of how Wikipedia is impervious to correction, is controlled by largely anonymous and often self-appointed “editors”, and is rather scandalously dishonest about its finances: the governing Foundation, which advertises itself as non-profit and solicits for donations on many Wiki pages, has about 280 staff with average salaries of ~$110,000, a former executive director having garnered ~$320,000.

The British Guardian did neither itself nor the public a service by covering the Wikipedia dissing of the Daily Mail by treating Wikipedia as though it were more factually reliable and more ethical than it is: Jasper Jackson, “Wikipedia bans Daily Mail as ‘unreliable’ source” (8 February 2017). People who have tried to get errors corrected on Wikipedia are unlikely to agree that “No matter how hard Wikipedia’s volunteers work, wrong and sometimes defamatory entries will inevitably appear, with editors engaged in a game of whack-a-mole to correct them” (Jasper Jackson, “‘We always look for reliability’: why Wikipedia’s editors cut out the Daily Mail”, 12 February 2017). Some of the editors work to preserve the defamatory stuff. See my blog posts cited above for illustrations.

Posted in media flaws, peer review | Tagged: , , | 2 Comments »

Psychological toll of climate-science belief

Posted by Henry Bauer on 2015/07/11

Mountainmere  just drew our attention to the devastating psychological impact of belief in human-caused climate change.

Esquire carried (7 July) a story by John Richardson, “When the End of Human Civilization Is Your Day Job: Among many climate scientists, gloom has set in. Things are worse than we think, but they can’t really talk about it” — they are afraid to talk about it because of “the relentless campaign against them” in which the poor folk are labeled “alarmist”. (The heartbreaking Richardson story was picked up in a number of places, for instance “Climate Scientists Are Dealing with Psychological Problems”  as well as the Judith Curry blog that mountainmere had cited, “Pre-traumatic stress syndrome: climate scientists speak out”.)
If climate “scientists” want to know what a relentless campaign really looks like, they should examine the treatment meted out to those “denialists” who draw attention to the lack of evidence to support the hypothesis of human-caused global warming.

Richardson’s featured climate-scientist victim, Jason Box, is a stereotypical ultra-environmentalist: an American who has worked for Greenpeace, demonstrated at the White House, claimed that sea levels would rise inevitably by 70 feet in the next few centuries, and “escaped America’s culture of climate-change denial” by moving from Ohio to Denmark. A report of methane seeping into Arctic sea-water so terrified Box that he immediately tweeted “If even a small fraction of Arctic sea floor carbon is released to the atmosphere, we’re f’d”, which naturally brought a flurry of headlines.
Box looks at the worst, and among the least likely, of the various scenarios generated by the computer models used by climate “scientists” — models that have been demonstrably wrong for the last 15-18 years or so during which there has been no warming while carbon dioxide levels have continued to rise; models that fail to account for the 1940s-to-1970s period when global temperatures were actually decreasing while carbon-dioxide levels were steadily rising.
Box thinks “most scientists must be burying overt recognition of the awful truths of climate change in a protective layer of denial (not the same kind of denial coming from conservatives, of course). I’m still amazed how few climatologists have taken an advocacy message to the streets, demonstrating for some policy action.”

Richardson’s story is full of errors, notably that “warming is tracking the rise of greenhouse gases exactly as their models predicted”. No. The models have not predicted the empirical fact that global temperatures have been stable rather than rising since about 2000; some reports even have it as a cooling rather than a slowing or halt in global average temperature.

Richardson describes the terrible stress that climate scientists are under for bringing their message of lack of hope: “targets of an unrelenting and well-organized attack that includes death threats, summonses from a hostile Congress, attempts to get them fired, legal harassment, and intrusive discovery demands so severe they had to start their own legal-defense fund, all amplified by a relentless propaganda campaign nakedly financed by the fossil-fuel companies”.
It’s just as well that they can continue to do their depressing work with the help of large grants, that any attempts to have them fired went nowhere, and that the “intrusive discovery demands” were no more than to ask for the raw data on which Michael Mann conjured his alarmist “hockey-stick” graph of unprecedented rate of warming — a graph that the Intergovernmental Panel on Climate Change dropped from its Reports because it was shown not to be a valid representation of the data. Professional scientific journals have increasingly been demanding that all data on which articles are based be made publicly available; it is not clear to me why climate “science” should be exempt. The only reason to keep data secret is to prevent others from showing that published analyses are flawed.
And those poor climate scientists suffered from having their e-mails hacked, revealing that they were deliberately fudging the evidence. (Google “Climategate” for details about that.)

So, anyway, those poor activist climate “scientists” are suffering gloom, sadness, fear, anger; “Dr. Lise Van Susteren, a practicing psychiatrist and graduate of Al Gore’s Inconvenient Truth slide-show training, calls this ‘pretraumatic’ stress.” Some are retreating off the grid to await the catastrophe. “No one has experienced that hostility more vividly than Michael Mann”, who barely manages to keep going as a well-paid tenured full professor at Penn State.

I urge you to read Richardson’s full story, especially the later parts that describe all the suffering that climate scientists endure.

For yet more insight, go to Judith Curry’s earlier blog post, “Pre-traumatic stress syndrome: Climate trauma survival tips”  which informs, among other things, about “the relatively new field of psychology of global warming”; followed by Curry’s sensible deconstruction of climate-change hysteria.

The unfortunate pre-traumatically stressed climate-“science” activists suffer quite unnecessarily. I recommend resort to the school of psychology, “rational-emotive therapy”, associated with the name of Albert Ellis; see his A New Guide to Rational Living, or Help Yourself to Happiness through rational self-counseling by Maxie C. Maultsby, an acolyte of Ellis.
The essence of this approach is to list in writing one’s depressing thoughts, and then the emotions they arouse. Merely writing these down tends to reveal how out of all proportion the emotions are. Then, the really important part, annotate those depressing thoughts with the actual evidence.
With climate “scientists”, this should bring immediate relief, since all their depression arises only from computer models, whereas reality demonstrates that global warming is the result of the Earth recovering from the last Ice Age and that carbon dioxide has no appreciable effect, as proven by the periods from the 1940s to the 1970s and again since 2000, when “carbon” was being emitted relentlessly but Earth warmed not at all or even cooled.


Posted in denialism, funding research, global warming, media flaws, peer review, science is not truth, science policy, scientific culture, scientism, scientists are human, unwarranted dogmatism in science | Tagged: , , , , , | 10 Comments »

Corrupt “science” publications and meetings

Posted by Henry Bauer on 2014/12/20

The “publish-or-perish” syndrome, together with the low cost of “publishing” on-line, has brought an endless spate of new “journals” put out by entrepreneurs ready to cash in; and quality control is not a consideration, even as some of the “publishers” pay lip-service to peer review.

A correspondent  to my HIV/AIDS blog  contributed a link to a story at Retraction Watch  that shows how the urge to make money by “publishing” is not restricted to new entrepreneurs; it is alive and well at corporate giants like Elsevier, whose prime interest in proliferating publications means that they do not exercise even ordinary care in overseeing how they accept articles: they had to retract a number of published articles that had been accepted after faked “peer review”, because the authors had been allowed to choose who the “peer reviewers” would be.

Elsevier, of course, also published advertisements for drug companies under the pretense that they were journals (Corruption in medical science: Ghostwriting), and emasculated the innovative Medical Hypotheses after unfounded initiatives by HIV/AIDS vigilantes (see Chapter 3 in Dogmatism in Science and Medicine).

A phenomenon related to fake and shoddy “journals” is the proliferation of “conferences” whose only purpose is self-promotion by individuals, institutions, or perhaps even countries, since China is a prominent venue for these occasions; again see Fake, deceptive, predatory Science Journals and Conferences. The invitations to pseudo-conferences are often so incompetently composed that they recall the emails from Nigeria announcing that one has won a huge prize in a lottery or inherited a huge amount from a previously unknown relative. Below is a just-received specimen. Note the signs that this is an unedited form letter (it refers to earlier invitations to which I never responded), and note the poor written expression and syntax; but above all, browse the list of “Keynote Speakers” and the “Part” listing of “renowned speakers”: a number of academics are quite happy to enjoy a grant-paid sightseeing vacation in China at an event organized primarily by Big Pharma and an entrepreneurial pseudo-conference-arranging outfit. Don’t neglect the link to the organizational home page, with its huckstering of sponsorships, exhibition space, and registration fees ranging from $1300 to $2000, as well as the list of eight other concurrent “conferences”.

Dear Henry H. Bauer,

How are you? I wish everything goes well with you!

This is an email to follow up my previous invitations. I have not heard from you for a couple of weeks since my first letter. Now we have received well responding from worldwide experts in planned sessions, in case you won’t miss it, we’ d like to extend our invitation again. I am writing to confirm whether you would like to attend this grand congress and present a speech. Would you please give me a tentative reply? Thank you very much.

I apologize for the inconvenience if the letter disturbed you more than once. On behalf of the Meeting Organizing Committee, it is my pleasure and privilege to invite you to be the Session speaker in the 7th Annual International Congress of Antibodies (ICA-2015).

The conference with the theme “Innovations from Defending Surface to Penetrating the Membrane” will be held during April 25-28, 2015 in Nanjing, China. If the suggested thematic session is not your current focused core, you may look through the whole sessions and transfer another one that fits your interest. We sincerely wish your participation.

Keynote Speakers:

Dr. Brian E. Harvey, Vice President, Pfizer Inc., USA
Dr. Liangzhi Xie, Founder & CEO, Sino Biological Inc., China
Dr. Andrew Wang, Chairman, Taiwan Antibody Association, Taiwan
Dr. Jonathan Milner, CEO, Abcam, UK
Dr. Chien-Hsing Ken Chang, Vice President, Research and Development, Immunomedics, Inc., USA
Dr. Michael Yu, Presidert, Innovent Biologics, Inc., China

We look forward to seeing you in Nanjing in 2015 for this influential event.

If you need any assistance about the conference, please do not hesitate to contact us at any time!

For more information, please visit:
Sincerely yours,

Organizing Commission of ICA-2015
East Area, F11, Building 1,
Dalian Ascendas IT Park,
1 Hui Xian Yuan,
Dalian Hi-tech Industrial Zone,
LN 116025, China
Tel: 0086-411-84575669-860

PS: Part of Renowned Speakers:
Mr. Homan Chan, Investigator, Novartis Institute of Biomedical Research, USA
Dr. Tao Wu, Principal Scientist, Boehringer Ingelheim, USA
Dr. Liming Liu, Merck Research Laboratories, USA
Dr. Joshua DiNapoli, Senior Scientist, Sanofi Pasteur, USA
Dr. Ostendorp Ralf, Vice President, MorphoSys AG, Germany
Dr. Abdul Wajid, Senior Director, XOMA, USA
Dr. Ernesto Oviedo-Orta, Clinical Sciences Expert, Novartis Vaccines Diagnostics Siena, Italy
Dr. Guohong Wang, VP, Immunalysis Corporation, USA
Dr. Rong-Rong Zhu, Senior Scientist, EMD Millipore, USA
Dr. David P. Humphreys, Senior Group Leader, UCB-New Medicines, UK
Dr. Jian Li, Principal Scientist, Pfizer Inc., USA
Dr. Bing Kuang, Principal Scientist, Pfizer, USA
Dr. William Haseltine, Founder, Chairman of the Board and CEO, Human Genome Sciences, USA
Dr. Martin Lemmerer, Principal Scientist, Novartis Institutes for BioMedical Research, Inc., USA
Dr. Jijie Gu, Senior Principal Research Scientist, AbbVie Pharmaceuticals, Inc., USA
Dr. Ronald C. Desrosiers, Professor, Harvard Medical School, USA
Dr. Eva Kimby, Professor, Karolinska University Hospital, Sweden
Dr. Joseph F. John, Professor and Chief, Medical University of South Carolina, USA
Dr. Dongfeng Tan, Professor, the University of Texas M. D. Anderson Cancer Center, USA
Dr. Paul Fisch, Group leader and Professor, University of Freiburg, Germany
Dr. Koshi Mimori, Professor & Director, Kyushu University Beppu Hospital, Japan
Dr. Peggy Hsieh, Professor, Florida State University, USA
Dr. Rudiger Schade, Professor, Charité-University Medicine of Berlin, Germany
Dr. Tae Young Jang, Professor, Inha University, Korea
Dr. Oddmund Bakke, Professor, University of Oslo, Norway
Dr. Rajat Sethi, Chair, California Health Sciences University, USA
Mr. Tim Bernard, CEO, Pivotal Scientific Limited, UK
Dr. Dan Zhang, Chairman and CEO, Fountain Medical Development Ltd., China
Dr. Kaia Agarwal, President, Regulatory Compass, LLC., USA
Ms. Sandra Frantzen, Shareholder, McAndrews, Held Malloy, Ltd., USA
Dr. Seth D. Ginsberg, President, Global Healthy Living Foundation, USA
Dr. James R Harris, CEO, Healthcare Economics LLC., USA
Dr. Martin Gleeson, CSO, Genalyte Inc., USA
Dr. Mingjiu Chen, President and CEO, biosynergics Inc., China
Dr. Jane Dancer, Chief Operating Officer, F-star, UK
Dr. Xiaodong Yang, President and CEO, Apexigen, USA
Dr. Wenzhi Tian, President and CEO, Huabo Biopharm Co Ltd, China
Dr. Ralph V. Boccia, Director, Center for Cancer and Blood Disorders, USA
Dr. Jun Bao, Senior Vice President, Shenogen Pharma Group, China
Dr. Francesc Mitjans, Chief Scientific Officer, Lykera Biomed, Spain
Dr. Fiona Greer, Director, SGS M-Scan, UK
Dr. Albrecht Gröner, Head Pathogen Safety, CSL Behring, Germany
Dr. Chung-Chou Lee, CEO of Medigen Vaccinology Corporation, Taiwan
Dr. Chengbin Wu, President of RD, Shanghai CP Guojian Pharmaceutical, China
Dr. Ni Jian, General Manager, National Engineering Research Center of Antibody Medicine, China
Dr. Ian Q. Li, Chief scientific Officer, ATGCell Inc., Canada
Dr. Terry Dyck, President, CEO, IGY Immune Technologies Life Sciences Inc., Canada
Dr. Vijay E-Bionary, CEO, E-Bionary Technologies, India
Dr. Allan Riting Liu, Vice President & Senior Advisor, Wanbang Biopharmaceutical Group, China


All I can say is, FOR SHAME, to everyone associated with such scams.

Posted in conflicts of interest, fraud in medicine, fraud in science, peer review, scientific culture | Tagged: , , | 1 Comment »

Magical statistics: Hearing loss causes dementia

Posted by Henry Bauer on 2014/07/27

Magical thinking sees a meaningful, causal link between two things that happen to occur together or to look alike in some way. On this view, there are no actual coincidences, links owing purely to random chance: what might appear to be coincidences are actually linked in some manner that we do not understand; Carl Jung described them as “synchronous” and meaningful, not coincidental.

The Skeptic’s Dictionary gives many examples, as does Psychology Today.

What needs to be said is that much, most, or perhaps all of medical statistics is pervaded by magical thinking, the confusion of correlation with causation. For example, the increasingly fashionable (or faddish?) emphasis on prevention is replete with references to “risk factors”, things that are “associated = correlated” with some condition. In short order, “risk factor” becomes confused with actual risk, and drug companies capitalize on this to sell drugs that claim to lower risks when actually they only affect risk factors: symptoms and not illnesses are being “treated”.

This deception inaugurated the era of “blockbuster” drugs, enormously profitable because they are taken lifelong: drugs to lower cholesterol, blood pressure, and blood sugar and to increase bone density. But data on morbidity and mortality fail to detect any actual benefit from “statins, antihypertensives, and bisphosphonates” *, and anti-diabetes pills continue to be marketed even as law firms carry on class-action suits over the toxicities of those drugs, which have highly unpleasant and sometimes deadly “side” effects including allergic reactions, bloating, diarrhea, flatulence, hypoglycemia, cardiovascular troubles, cholestatic jaundice, lactic acidosis, nausea, urinary tract infections, and weight gain.

Blockbuster drugs rely on the confusion
of symptoms (risk factors)
with actual risks (causes),
exemplifying magical thinking
whereby actual harm is actually caused
to those who take the drugs

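The gap between relative-risk marketing and patient-level benefit is easy to make concrete. Here is a minimal sketch, with invented illustrative event rates (not taken from any real trial), of how an impressive-sounding relative risk reduction translates into the number needed to treat (NNT):

```python
# NNT from absolute event rates; the 2% / 1% rates below are
# invented purely for illustration, not from any real trial.

def nnt(control_rate: float, treated_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (control_rate - treated_rate)

control, treated = 0.02, 0.01        # event rate without / with the drug
rrr = (control - treated) / control  # relative risk reduction
print(f"relative risk reduction: {rrr:.0%}")  # "50%" sounds dramatic...
print(f"NNT: {nnt(control, treated):.0f}")    # ...yet 100 people must be
                                              # treated for one to benefit
```

The “50% risk reduction” and “NNT = 100” describe the very same hypothetical trial; only the second tells a patient how likely the drug is to help, and an NNH for harms can be computed the same way from the rates of adverse events.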
Another instance of magical thinking is the increasingly prominent insinuation that hearing loss leads to (causes) dementia.

The charge seems to be led by Dr. Frank Lin, MD, PhD, at Johns Hopkins University:
Hearing Loss and Dementia Linked in Study
Release Date: February 14, 2011
“Seniors with hearing loss are significantly more likely to develop dementia over time than those who retain their hearing, a study by Johns Hopkins and National Institute on Aging researchers suggests. The findings, the researchers say, could lead to new ways to combat dementia, a condition that affects millions of people worldwide and carries heavy societal burdens. Although the reason for the link between the two conditions is unknown, the investigators suggest that a common pathology may underlie both or that the strain of decoding sounds over the years may overwhelm the brains of people with hearing loss, leaving them more vulnerable to dementia. They also speculate that hearing loss could lead to dementia by making individuals more socially isolated, a known risk factor for dementia and other cognitive disorders.
Whatever the cause, the scientists report, their finding may offer a starting point for interventions — even as simple as hearing aids — that could delay or prevent dementia by improving patients’ hearing.”

This press release from Johns Hopkins gives the clear impression that hearing loss is a cause of dementia. The last sentence also delivers the astonishingly nonsensical assertion that even if hearing loss is not the cause, treating it could have a beneficial effect on the risk of dementia!

Public media of course parrot this pseudo-scientific stuff. Most of the headlines as well as the texts of these pieces support the idea that hearing loss can lead to dementia:
A 2011 study found that hearing loss may increase your chances of developing dementia
Johns Hopkins: Hearing problems lead to dementia
Hearing loss linked to dementia — Can getting a hearing aid help prevent memory loss?
Hearing loss speeds up brain shrinkage and could lead to dementia, researchers claim
The link between hearing loss and dementia — A new discovery gives you a new reason to check your hearing now
Straining to hear and fend off dementia
Could hearing loss and dementia be connected?

Manufacturers of hearing aids jumped on the bandwagon:
Hearing loss is now linked to Alzheimer’s disease and dementia.
According to several major studies, older adults with hearing loss are more likely to develop Alzheimer’s disease and dementia, compared to those with normal hearing. Further, the risk escalates as a person’s hearing loss grows worse. Those with mild hearing impairment are nearly twice as likely to develop dementia compared to those with normal hearing. The risk increases three-fold for those with moderate hearing loss, and five-fold for those with severe impairment.
Specifically, the risk of dementia increases among those with a hearing loss greater than 25 decibels. For study participants over the age of 60, 36 percent of the risk for dementia was associated with hearing loss.
How are the conditions connected?
Although the reason for the link between hearing loss and dementia is not conclusive, study investigators suggest that a common pathology may underlie both”

Also on the bandwagon is a local Speech & Hearing Center.  From an “Ask the experts” page of SENIORS GUIDE magazine:
“Researchers have shown a strong correlation between un-treated hearing loss (i.e., having hearing loss and not wearing hearing aids) and dementia. A study completed by Dr. Lin and colleagues at Johns Hopkins and the National Institute for Communicative Disorders revealed that for every one year an individual with a mild hearing loss went without hearing aids, there was a seven year cognitive decline”.
That’s quite an extension and distortion of the published study.
That published scientific article is Lin et al., “Hearing Loss and Incident Dementia”, Archives of Neurology, 68 (2011) 214–20. Its stated conclusions are that “Hearing loss is independently associated with incident all-cause dementia. Whether hearing loss is a marker for early stage dementia or is actually a modifiable risk factor for dementia deserves further study.”
Unwary readers might take the first sentence as meaning that hearing loss does cause dementia. The second sentence makes the mistake of confusing risk factor with risk and adds to the impression of a causative link.

“[H]earing loss was independently associated with incident all-cause dementia after adjustment for sex, age, race, education, diabetes, smoking, and hypertension, and our findings were robust to multiple sensitivity analyses. The risk of all-cause dementia increased log-linearly with hearing loss severity, and for individuals >60 years in our cohort, over one-third of the risk of incident all-cause dementia was associated with hearing loss.”

Lay readers might again be inclined to take these comments as supporting a causative link. But “independently associated” means only that these particular variables were fed into a computer program looking for degrees of association. Considerable uncertainty remains because of possible effects of other variables not taken into account, notably history of health, diet, and exercise, all of which are likely to be very influential on the rate of age-related deterioration; and there are obvious uncertainties associated with the manner in which education, smoking, hypertension were coded.

But bear in mind the inescapable fact that the probabilities of every type of organ failure and physiological dysfunction increase with age. Age is indubitably independently associated with hearing loss, dementia, diabetes, hypertension, as well as cancer, kidney failure, lung disease, etc.
Hearing loss is independently associated with age.
Dementia is independently associated with age.

It would take more than this study to make a plausible let alone convincing case for hearing loss as a potential cause of dementia. The original article actually spells out quite well the uncertainties that ought to stop speculation about causation, but it steps back from those sound observations to speculate about possible causative mechanisms: “exhaustion of cognitive reserve, social isolation, environmental deafferentation [presumably meaning deficiency of environmental stimuli], or a combination of these”. None of those appears to be amenable to study in any potentially convincing manner.

By contrast, direct evidence from the people studied is waved aside: “self-reported hearing aid use was not associated with a significant reduction in dementia risk”.
The researchers measured the dementia risk in this prospective study; that was not a subjective assessment by the people in the study. They could surely, however, be regarded as largely reliable in their testimony as to use or non-use of hearing aids.
The conclusion is clear: hearing aids did not help to avoid dementia among the people studied.
However, this ugly fact might destroy the hypothesis and impede ongoing research, so reasons are offered for ignoring it: “data on other key variables (e.g. type of hearing aid used, hours worn per day, number of years used, characteristics of subjects choosing to use hearing aids, use of other communicative strategies, adequacy of rehabilitation, etc) that would affect the success of aural rehabilitation and affect any observed association were not gathered. Consequently, whether hearing advices [sic; should perhaps be devices?] and aural rehabilitative strategies could have an effect on cognitive decline and dementia remains unknown and will require further study”.

*  Järvinen et al., “The true cost of pharmacological disease prevention”, British Medical Journal, 342 (2011) doi: 10.1136/bmj.d2175


Posted in funding research, medical practices, peer review, science is not truth | Tagged: , , | 2 Comments »

Health, Wikipedia, and Common Sense

Posted by Henry Bauer on 2014/06/19

OMSJ™ (Office of Medical & Scientific Justice) once again alerted me to something well worth reading: a study in the Journal of the American Osteopathic Association  revealing how unreliable Wikipedia is about matters of health and medicine. An editorial  in the Journal comments on the same issue.

I had first learned about Wikipedia when a friend alerted me that there was an entry about me. It turned out to have been composed by someone furious about my “HIV/AIDS denialism”, namely, a graduate student and member of  who had also posted a nasty review (soon withdrawn by him) of my book, The Origin, Persistence and Failings of HIV/AIDS Theory.
Several of my friends had attempted to have the worst calumnies in the Wiki entry modified toward accuracy, but they were always defeated by the original miscreant, abetted by Wiki’s editors. And I learned that Wiki’s rules forbid one from correcting even factual errors in one’s own bio entry.

For some of what I’ve learned about Wiki’s flaws, see Beware the Internet: “reviews”, Wikipedia, and other sources of misinformation; The Fairy-Tale Cult of Wikipedia; Another horror story about Wikipedia; The unqualified (= without qualifications) gurus of Wikipedia; Lowest common denominator — Wikipedia and its ilk.

The obvious question is, why would anyone think that an “encyclopedia” could be at all reliable when it is written by whoever cares to do so? With “editors” “appointed” just because they want to be?
It could only be someone who is very simpleminded and naively ignorant about human beings.
Fifty years ago or so, that was exemplified by some science-fiction buffs: for instance, those who fell for Dianetics, a bowdlerized and over-simplistic take-off on psychology and psychoanalysis, and for Dianetics’ progeny, Scientology, which adds to the pseudo-psychology the pseudo-religious notions of Theosophy and its ilk. The intellectual basis for these cults was no secret: they originated with L. Ron Hubbard, a successful author of science fiction.

Nowadays the Hubbard role is played by computer buffs or computeroids (like Jimmy Wales, founder of Wikipedia) who appear to believe that software programs and robots can be made artificially intelligent, that things designed and made by human beings can transcend the fallibilities of humans, and that anyone clever enough to use a computer is thereby qualified by integrity, knowledge, and wisdom to participate in creating an “encyclopedia”.

Others don’t agree. A petition at reads:
“Wikipedia is widely used and trusted. Unfortunately, much of the information related to holistic approaches to healing is biased, misleading, out-of-date, or just plain wrong. For five years, repeated efforts to correct this misinformation have been blocked and the Wikipedia organization has not addressed these issues. As a result, people who are interested in the benefits of Energy Medicine, Energy Psychology, and specific approaches such as the Emotional Freedom Techniques, Thought Field Therapy and the Tapas Acupressure Technique, turn to your pages, trust what they read, and do not pursue getting help from these approaches which research has, in fact, proven to be of great benefit to many. This has serious implications, as people continue to suffer with physical and emotional problems that might well be alleviated by these approaches.
Larry Sanger, co-founder of Wikipedia, left the organization due to concerns about its integrity. He stated: ‘In some fields and some topics, there are groups who “squat” on articles and insist on making them reflect their own specific biases. There is no credible mechanism to approve versions of articles.’
This is exactly the case with the Wikipedia pages for Energy Psychology, Energy Medicine, acupuncture, and other forms of complementary/alternative medicine (CAM), which are currently skewed to a negative, unscientific view of these approaches despite numerous rigorous studies in recent years demonstrating their effectiveness. These pages are controlled by a few self-appointed ‘skeptics’ who serve as de facto censors for Wikipedia. They clothe their objections in the language of the narrowest possible understanding of science in order to inhibit open discussion of innovation in health care. As gatekeepers for the status quo, they refuse discourse with leading edge research scientists and clinicians or, for that matter, anyone with a different point of view. Fair-minded referees should be given the responsibility of monitoring these important areas.
I pledge not to donate to your fundraising efforts until these changes have been made.”

The response from Jimmy Wales was:
“No, you have to be kidding me. Every single person who signed this petition needs to go back to check their premises and think harder about what it means to be honest, factual, truthful.
Wikipedia’s policies around this kind of thing are exactly spot-on and correct. If you can get your work published in respectable scientific journals — that is to say, if you can produce evidence through replicable scientific experiments, then Wikipedia will cover it appropriately.
What we won’t do is pretend that the work of lunatic charlatans is the equivalent of ‘true scientific discourse’. It isn’t.”

So Wales reveals himself to be an acolyte of scientism (Scientism, the Religion of Science) and wrong as well about replication and peer review; and a typical computeroid who believes that all that matters is that policies should be “spot-on”, whereas anyone with experience of working with human beings knows that it isn’t the policies that matter but who administers them and how.
Wiki’s policies are indeed splendid, and they would work just fine if the people contributing to Wiki were impartial, unbiased, unprejudiced, and scrupulous in gathering all available information on any given topic and presenting it evenhandedly. Such people do not exist, however, and there’s no mechanism for impartial resolution of differences of opinion about Wiki entries. On any topic where there is a significant difference of opinion among sane and reasonably informed people, Wiki is at the mercy of the fanatical extremists who grab control of the pertinent entry.

Full disclosure on substantive matters:
Re “Energy Psychology, Energy Medicine, acupuncture, and other forms of complementary/alternative medicine (CAM)”:
I’m agnostic about acupuncture, knowing people who have been helped by it and others who have not, and having seen studies where fMRI and voltage measurements seem to show something significant about the classical acupuncture points.
However, I’m not a fan of “Energy Psychology, Energy Medicine” and their ilk and believe that any of their benefits reflect the placebo response.
Re Journal of the American Osteopathic Association:
Some decades ago I read Martin Gardner’s Fads & Fallacies In the Name of Science and did not question his classification of chiropractic and osteopathy as quackery. Since then I’ve learned, and not only at first hand, that chiropractic can be very helpful in some instances of back pain, and that osteopathy is nowadays quite different from its origins.
A former colleague in the Chemistry Department is now president of the Via College of Osteopathic Medicine in Blacksburg, and I learned that the curriculum of this College is the same as that of conventional Colleges of Medicine with the addition of 200 hours of instruction in manipulation: in other words, osteopathy nowadays is mainstream medicine plus chiropractic.


Posted in conflicts of interest, media flaws, medical practices, peer review, scientism, scientists are human, unwarranted dogmatism in science | Tagged: , , , , , | 8 Comments »