Skepticism about science and medicine

In search of disinterested science

Archive for the ‘prescription drugs’ Category

HPV vaccines: risks exceed benefits

Posted by Henry Bauer on 2017/07/09

“Vaccination” is publicly argued in black/white, yes/no fashion, as though one had to be either for or against ALL vaccinations. But the fact is that the benefits of some vaccinations far outweigh the dangers of occasional harmful “side” effects whereas that is not clear with other vaccines. Polio vaccine, for example, seems to have been wonderfully effective and is still so in many countries; on the other hand, in regions where polio is no longer endemic, the risk of contracting polio from oral vaccine exceeds the danger of contracting it when not vaccinated (see links near the end of What to believe? Science is a red herring and a wild-goose chase).

Immune systems are complex and not fully understood, and there are individual variations galore — as when one of my friends came down with shingles shortly after being vaccinated against shingles. (The doctor of course assured him that the outbreak would have been more painful had he not been vaccinated; an ex cathedra assertion without possibility of verification.)

I was reminded of the issue of HPV vaccination by a brouhaha in Europe between the European Medicines Agency (EMA) and medical practitioners and researchers who had come across a substantial number of cases of harm seemingly following HPV vaccination, harm specifically in the form of chronic autoimmune ailments. Since vaccination affects the immune system, such an undesired effect in some individuals seems perfectly plausible.

The Nordic Cochrane Center exists for the purpose of evaluating the evidence underlying medical practices. The Cochrane Center and others have been campaigning for many years to have the data from clinical trials made available to all researchers (1). Last year it lodged a complaint (2) against EMA for conflicts of interest with drug companies exacerbated by the secrecy of discussions that led to criticism of physicians’ reports about autoimmune symptoms appearing after vaccination against HPV. That secrecy is truly extraordinary, virtually an admission of conspiracy: “experts who are involved in the process are not named and are bound by lifelong secrecy about what was discussed” (3).

An EMA publication had severely criticized publications by Louise Brinth and others who had published reports of autoimmune symptoms following vaccination (4); Brinth has delivered a blistering response to the EMA insinuations (5).

The supposed benefit of vaccinating against HPV is to decrease the risk of certain cancers, primarily of the cervix. There are perhaps a hundred types of HPV, of which about 40 are sexually transmitted, and two to four of these seem to be statistically correlated with cancer:
“High-risk HPV strains include HPV 16 and 18 . . . . Other high-risk HPV viruses include 31, 33, 45, 52, 58, and a few others. Low-risk HPV strains, such as HPV 6 and 11, cause about 90% of genital warts, which rarely develop into cancer” (What is HPV?).

HPV infections are the most common sexually transmitted infection: “HPV is so common that nearly all sexually active men and women get the virus at some point in their lives” (Human Papillomavirus (HPV) Statistics). Thus most infections do not lead to cancer, which might induce thought about what “cause” could mean in this context. About 4% of American women are infected each year with a “high-risk” strain: about 6 million women (the USA population is about 320 million, so roughly 160 million women). There are only about 12,000 cases annually of cervical cancer: thus only about 1 in 500 of even “high-risk” infections is associated with this cancer. Thus vaccinating about 500 “high-risk” women might prevent 1 cervical cancer; NNT (number needed to be treated for 1 person to benefit) = 500.

On the other hand, there appears to be about 1 chance in 200 of an adverse effect from vaccination by Gardasil (Gardasil and the sad state of present-day medical practices); about 8% (~ 1 in 12) of adverse events are “serious”, so there’s about 1 chance in 2500 of a serious adverse event. NNH (number needed to be treated for one person to be seriously harmed) = 2500.

For any medical treatment to be desirable, it should be necessary to treat many more people to harm a single one than the number needed to be treated to benefit a single person; NNH should exceed NNT by a substantial amount.
The numbers just mentioned yield a ratio of only 5 — in other words, there’s something like a 1 in 5 chance, 20%, that HPV vaccination would harm rather than benefit. But those numbers apply if only those women infected with high-risk strains are vaccinated. However, the advocates of HPV vaccination, who include official agencies in the USA and some other countries, recommend HPV vaccination for all girls. That increases NNT by a factor of 25 and drastically reverses the benefit/cost ratio: it becomes 5 times more likely that an HPV vaccination will result in a serious adverse event than that the vaccination prevents a case of cervical cancer — even if HPV is the actual cause of cervical cancer, which remains to be proved beyond a mere weak statistical correlation.
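The NNT/NNH arithmetic in the last few paragraphs can be sketched as follows. All the numbers are the estimates given in the text above, not authoritative figures:

```python
# Sketch of the NNT/NNH arithmetic above. All numbers are the author's
# estimates from the text, not authoritative figures.

high_risk_infections_per_year = 6_000_000   # ~4% of ~160 million US women
cervical_cancers_per_year = 12_000

# NNT if only women with high-risk infections were vaccinated
nnt_high_risk = high_risk_infections_per_year / cervical_cancers_per_year
print(round(nnt_high_risk))        # 500

adverse_event_rate = 1 / 200       # chance of any adverse event (Gardasil)
serious_fraction = 0.08            # ~8% of adverse events are "serious"
nnh = 1 / (adverse_event_rate * serious_fraction)
print(round(nnh))                  # 2500

# Vaccinating all girls rather than only the ~4% with high-risk infections
# multiplies NNT by 25, so serious harm becomes more likely than benefit:
nnt_all_girls = nnt_high_risk / 0.04
print(round(nnt_all_girls / nnh))  # 5
```

The ratio of 5 at the end is the point of the paragraph above: under these estimates, universal vaccination makes a serious adverse event five times more likely than a prevented cancer.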

It is simply not known whether HPV causes cancer at all. Certainly it does not always cause cancer. An extended article on the invaluable website that debunks urban legends is judicious on this matter by pointing out that the claimed association of HPV vaccination with autoimmune symptoms is only speculative. On the other hand, it also concludes in an update of 12 June 2017:
“An earlier version of this story incorrectly stated that countries with high HPV vaccination rates show declines in cervical cancer diagnoses. Both Gardasil and Cervarix have demonstrated efficacy in preventing HPV infections that cause cervical cancer, and evidence suggests declines in precancerous lesions and other abnormal growths as a result of HPV vaccination. There is debate over evidence for declines in cervical cancer diagnoses — as well as over how much time it would take after the introduction of the vaccine to see any effect on cancer diagnoses” [italics added].

The vaccines against HPV are successful against HPV — but it has never been proved that HPV (or the four strains of it supposed to be associated with cervical cancer) actually causes cancer. Since the rate of HPV infections exceeds the rate of cervical cancer by a huge amount, any “causative” action of HPV must be very indirect, especially since only a small percentage of HPV strains shows even a statistical association with cancer.
Recall that the usual test of “statistical significance” in medicine is p ≤ 0.05, meaning that there is less than a 5% chance that the association is owing only to chance. If there are 100 possible associations, about 5 of them will seem significant even though they are not, being picked out purely by chance because of the (weak!) criterion for statistical significance (6). If there are 100 strains of HPV, then at p ≤ 0.05, purely by chance about 5 strains will seem to be correlated with cervical cancer — or with just about anything else.
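That multiple-comparisons point can be checked with a toy Monte Carlo simulation (my own illustration, not anything from the post): screen 100 strains that have NO real association with cancer at p ≤ 0.05 and count how many come out “significant” anyway.

```python
# Toy Monte Carlo (illustration only) of the multiple-comparisons point:
# test 100 independent null associations at p <= 0.05 and count how many
# come out "significant" purely by chance.
import random

random.seed(1)
n_strains = 100   # candidate strains, all with NO real association
alpha = 0.05      # the usual significance threshold
trials = 2000     # repeat the whole 100-strain screen many times

hits_per_trial = []
for _ in range(trials):
    # Under a true null hypothesis, a p-value is uniform on [0, 1],
    # so "p <= alpha" happens with probability alpha.
    hits = sum(1 for _ in range(n_strains) if random.random() <= alpha)
    hits_per_trial.append(hits)

mean_hits = sum(hits_per_trial) / trials
print(round(mean_hits, 2))   # close to alpha * n_strains = 5
```

On average about 5 of the 100 null strains pass the p ≤ 0.05 screen in every run, exactly as the paragraph above argues.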
Before accepting any role for HPV in cervical cancer, one should want a demonstration of the mechanism of the claimed causative effect.

(1) “Opening up data at the European Medicines Agency”, Peter Gøtzsche & Anders Jørgensen, British Medical Journal, 342 (28 May 2011) 1184-6; “EMA must improve the quality of its clinical trial reports”, Corrado Barbui, Cinzia Baschirotto & Andrea Cipriani, ibid., 1187-9
(2) Complaint to the European Medicines Agency (EMA) over maladministration at the EMA, 26 May 2016
(3) “Complaint filed over EMA’s handling of HPV Vaccine safety issues”, Zosia Chustecka, 5 July 2016
(4) “Suspected side effects to the quadrivalent human papilloma vaccine”, Louise Brinth, Ann Cathrine Theibel, Kirsten Pors & Jesper Mehlsen, Danish Medical Journal, 62 (#4, 2015) A5064
(5) “Responsum to Assessment Report on HPV-vaccines released by EMA November 26th 2015” by Louise Brinth, MD PhD, Syncope Unit, Bispebjerg and Frederiksberg Hospital, Copenhagen, December 15th 2015
(6) For a thorough discussion of the pitfalls of interpreting p values, see Gerd Gigerenzer, “Mindless Statistics”, Journal of Socio-Economics, 33 (2004) 587-606.

Posted in medical practices, prescription drugs, science policy, unwarranted dogmatism in science | 2 Comments »

How to interpret statistics; especially about drug efficacy

Posted by Henry Bauer on 2017/06/06

“How (not) to measure the efficacy of drugs” pointed out that the most meaningful data about a drug are the number of people needed to be treated for one person to reap benefit, NNT, and the number needed to be treated for one person to be harmed, NNH.

But this pertinent, useful information is rarely disseminated, and most particularly not by drug companies. Most commonly cited are statistics about drug performance relative to other drugs or relative to placebo. Just how misleading this can be is described in easily understood form in this discussion of the use of anti-psychotic drugs.


That article (“Psychiatry defends its antipsychotics: a case study of institutional corruption” by Robert Whitaker) has many other points of interest. Most important, of course, is its potent demonstration that official psychiatric practice is not evidence-based; rather, its aim is to defend the profession’s current approach.


In these ways, psychiatry differs only in degree from the whole of modern medicine — see WHAT’S WRONG WITH PRESENT-DAY MEDICINE  — and indeed from contemporary science on too many matters: Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland 2012.

Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, scientific culture, unwarranted dogmatism in science | Leave a Comment »

Vaccines: The good, the bad, and the ugly

Posted by Henry Bauer on 2017/05/21

Only in recent years have I begun to wonder whether there are reasons not to follow official recommendations about vaccination. In the 1930s, I had the then-usual vaccinations, including (in Austria, perhaps elsewhere in Europe) one against smallpox. I had a few others in later years, when I traveled quite a bit.

But the Andrew Wakefield affair *, and the introduction of Gardasil **, showed me that official sources had become as untrustworthy about vaccines as they have become about prescription drugs.

It seems that Big Pharma had just about run out of new diseases to invent against which to create drugs and had turned to snake-oil-marketing of vaccines. We are told, for example, that 1 in 3 people will experience shingles in their lifetime and should get vaccinated against it. Have one in three of your aged friends ever had shingles? Not among my family and friends. One of my buddies got himself vaccinated, and came down with shingles a couple of weeks later. His physician asserted that the attack would have been more severe if he hadn’t been vaccinated — no need for a control experiment, or any need to doubt official claims.

So it’s remarkable that the Swedish Government has resisted attempts to make vaccinations compulsory (“Sweden bans mandatory vaccinations over ‘serious health concerns’” by Baxter Dmitry, 12 May 2017).

That article includes extracts from an interview of Robert F. Kennedy, Jr., on the Tucker Carlson Show, which included such tidbits as the continued presence of thimerosal (organic mercury compound) in many vaccines including the seasonal flu vaccines that everyone is urged to get; and the huge increase in number of things against which vaccination is being recommended:

“I got three vaccines and I was fully compliant. I’m 63 years old. My children got 69 doses of 16 vaccines to be compliant. And a lot of these vaccines aren’t even for communicable diseases. Like Hepatitis B, which comes from unprotected sex, or using or sharing needles – why do we give that to a child on the first day of their life? And it was loaded with mercury.”



* See “Autism and Vaccines: Can there be a final unequivocal answer?” and “YES: Thimerosal CAN induce autism”

** See “Gardasil and Cervarix: Vaccination insanity” and many other posts recovered with SEARCH for “Gardasil” on my blogs

Posted in fraud in medicine, legal considerations, medical practices, politics and science, prescription drugs, science is not truth, science policy, unwarranted dogmatism in science | Leave a Comment »

The banality of evil — Psychiatry and ADHD

Posted by Henry Bauer on 2017/04/25

“The banality of evil” is a phrase coined by Hannah Arendt when writing about the trial of Adolf Eichmann who had supervised much of the Holocaust. The phrase has been much misinterpreted and misunderstood. Arendt was pointing to the banality of Eichmann, who “had no motives at all” other than “an extraordinary diligence in looking out for his personal advancement”; he “never realized what he was doing … sheer thoughtlessness … [which] can wreak more havoc than all the evil instincts” (1). There was nothing interesting about Eichmann. Applying Wolfgang Pauli’s phrase, Eichmann was “not even wrong”: one can learn nothing from him other than that evil can result from banality, from thoughtlessness. As Edmund Burke put it, “The only thing necessary for the triumph of evil is for good men to do nothing” — and not thinking is a way of doing nothing.

That train of thought becomes quite uncomfortable with the realization that sheer thoughtlessness nowadays pervades so much of the everyday practices of science, medicine, psychiatry. Research simply — thoughtlessly — accepts contemporary theory as true, and pundits, practitioners, teachers, policy makers all accept the results of research without stopping to think about fundamental issues, about whether the pertinent contemporary theories or paradigms make sense.

Psychiatrists, for example, prescribe Ritalin and other stimulants as treatment for ADHD — Attention-Deficit/Hyperactivity Disorder — without stopping to think about whether ADHD is even “a thing” that can be defined and diagnosed unambiguously (or even at all).

The official manual, which one presumes psychiatrists and psychologists consult when assigning diagnoses, is the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, now (since 2013) in its 5th edition (DSM-5). DSM-5 has been quite widely criticized, including by such prominent psychiatrists as Allen Frances who led the task force for the previous, fourth, edition (2).

Even casual acquaintance with the contents of this supposedly authoritative DSM-5 makes it obvious that criticism is more than called for. In DSM-5, the Diagnostic Criteria for ADHD are set down in five sections, A-E.

A: “A persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development, as characterized by (1) and/or (2):
     1.   Inattention: Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
           Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.     Often fails to give close attention to details or makes careless mistakes in schoolwork, at work, or during other activities (e.g., overlooks or misses details, work is inaccurate)
b.     Often has difficulty sustaining attention in tasks or play activities (e.g., has difficulty remaining focused during lectures, conversations, or lengthy reading).”
and so on through c-i, for a total of nine asserted characteristics of inattention.

Paying even cursory attention to these “criteria” makes plain that they are anything but definitive. Why, for example, are six symptoms required up to age 16 when five are sufficient at 17 years and older? There is nothing clear-cut about “inconsistent with developmental level”, which depends on personal judgment about both the consistency and the level of development. Different people, even different psychiatrists no matter how trained, are likely to judge inconsistently in any given case whether the attention paid (point “a”) is “close” or not. So too with “careless”, “often”, “difficulty”; and so on.

It is if anything even worse with Criteria A(2):

“2.    Hyperactivity and Impulsivity:
Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
       Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.    Often fidgets with or taps hands or feet or squirms in seat.”
and so on through b-i, for again a total of nine supposed characteristics, this time of hyperactivity and impulsivity. There is no need to cite any of those since “a” amply reveals the absurdity of designating as the symptom of a mental disorder a type of behavior that is perfectly normal for the majority of young boys. This “criterion” makes self-explanatory the reported finding that boys are three times more likely than girls to be diagnosed with ADHD, though experts make heavier weather of it by suggesting that sex hormones may be among the unknown causes of ADHD (3).

A(1) and (2) are followed by
“B. Several inattentive or hyperactivity-impulsivity symptoms were present prior to age 12 years.
C. Several inattentive or hyperactivity-impulsivity symptoms are present in two or more
settings  (e.g., at home, school, or work; with friends or relatives; in other activities).
D. There is clear evidence that the symptoms interfere with, or reduce the quality of, social,
academic, or occupational functioning.
E. The symptoms do not occur exclusively during the course of schizophrenia or another
psychotic disorder and are not better explained by another mental disorder (e.g., mood
disorder, anxiety disorder, dissociative disorder, personality disorder, substance
intoxication or withdrawal).”

It should be plain enough that this set of so-called criteria is not based on any definitive empirical data, as a simple thought experiment shows: What clinical (or any other sort of) trial could establish by observation that six symptoms are diagnostic up to age 17 whereas five can be decisive from that age on? What if the decisive symptoms were apparent for only five months rather than six; or five-and-three-quarters months? How remarkable, too, that “inattention” and “hyperactivity and impulsivity” are both characterized by exactly nine possible symptoms.

Leaving aside the deplorable thoughtlessness of the substantive content of DSM-5, it is also saddening that something published by an authoritative medical society should reflect such carelessness or thoughtlessness in presentation. Competent copy-editing would have helped, for example by eliminating the many instances of “and/or”: “this ungraceful phrase … has no right to intrude in ordinary prose” (4) since just “or” would do nicely; if, for instance, I tell you that I’ll be happy with A or with B, obviously I’ll be perfectly happy also if I get both.
Good writing and proper syntax are not mere niceties; their absence indicates a lack of clear substantive thought in what is being written about, as Richard Mitchell (“The Underground Grammarian”) liked to illustrate by quoting Ben Jonson: “Neither can his Mind be thought to be in Tune, whose words do jarre; nor his reason in frame, whose sentence is preposterous”.

At any rate, ADHD is obviously an invented condition that has no clearly measurable characteristics. Assigning that diagnosis to any given individual is an entirely subjective, personal judgment. That this has been done for some large number of individuals strikes me as an illustration of the banality of evil. Countless parents have been told that their children have a mental illness when they are behaving just as children naturally do. Countless children have been fed mind-altering drugs as a consequence of such a diagnosis. Some number have been sent to special schools like Eagle Hill, where annual tuition and fees can add up to $80,000 or more.

Websites claim to give information that is patently unfounded or wrong, for example:

“Researchers still don’t know the exact cause, but they do know that genes, differences in brain development and some outside factors like prenatal exposure to smoking might play a role. … Researchers looking into the role of genetics in ADHD say it can run in families. If your biological child has ADHD, there’s a one in four chance you have ADHD too, whether it’s been diagnosed or not. … Some external factors affecting brain development have also been linked to ADHD. Prenatal exposure to smoke may increase your child’s risk of developing ADHD. Exposure to high levels of lead as a toddler and preschooler is another possible contributor. … . It’s a brain-based biological condition”.

Those who establish such websites simply follow thoughtlessly, banally, what the professional literature says; and some number of academics strive assiduously to ensure the persistence of this misguided parent-scaring and children-harming. For example, by claiming that certain portions of the brains of ADHD individuals are characteristically smaller:

“Subcortical brain volume differences in participants with attention deficit hyperactivity disorder in children and adults: a cross-sectional mega-analysis” by Martine Hoogman et al., published in Lancet Psychiatry (2017, vol. 4, pp. 310–19). The “et al.” stands for 81 co-authors, 11 of whom declared conflicts of interest with pharmaceutical companies. The conclusions are stated dogmatically: “The data from our highly powered analysis confirm that patients with ADHD do have altered brains and therefore that ADHD is a disorder of the brain. This message is clear for clinicians to convey to parents and patients, which can help to reduce the stigma that ADHD is just a label for difficult children and caused by incompetent parenting. We hope this work will contribute to a better understanding of ADHD in the general public”.

An extensive detailed critique of this article has been submitted to the journal as a basis for retracting that article: “Lancet Psychiatry Needs to Retract the ADHD-Enigma Study” by Michael Corrigan & Robert Whitaker. The critique points to a large number of failings in methodology, including that the data were accumulated from a variety of other studies with no evidence that diagnoses of ADHD were consistent or that controls were properly chosen or available — which ought in itself to have been sufficient reason to refuse publication.

Perhaps worst of all: Nowhere in the article is IQ mentioned; yet the Supplementary Material contains a table revealing that the “ADHD” subjects had on average higher IQ scores than the “normal” controls. “Now the usual assumption is that ADHD children, suffering from a ‘brain disorder,’ are less able to concentrate and focus in school, and thus are cognitively impaired in some way. …. But if the mean IQ score of the ADHD cohort is higher than the mean score for the controls, doesn’t this basic assumption need to be reassessed? If the participants with ADHD have smaller brains that are riddled with ‘altered structures,’ then how come they are just as smart as, or even smarter than, the participants in the control group?”

[The Hoogman et al. article in many places refers to “(appendix)” for details, but the article — which costs $31.50 — does not include an appendix; one must get it separately from the author or the journal.]

As usual, the popular media simply parroted the study’s claims, as illustrated by the headlines cited in the critique.

And so the thoughtless acceptance by the media of anything published in an established, peer-reviewed journal contributes to making this particular evil a banality. The public, including parents of children, are further confirmed in the misguided, unproven, notion that something is wrong with the brains of children who have been designated with a diagnosis that is no more than a highly subjective opinion.

The deficiencies of this article also illustrate why those of us who have published in peer-reviewed journals know how absurd it is to regard “peer review” as any sort of guarantee of quality, or even of minimal standards of competence and honesty. As Richard Horton, himself editor of The Lancet, has noted, “Peer review . . . is simply a way to collect opinions from experts in the field. Peer review tells us about the acceptability, not the credibility, of a new finding” (5).

The critique of the Hoogman article is just one of the valuable pieces at the Mad in America website. I also recommend highly Robert Whitaker’s books, Anatomy of an Epidemic and Mad in America.

(1) Hannah Arendt, Eichmann in Jerusalem — A Report on the Banality of Evil, Viking Press, 1964 (rev. & enlarged ed.); quotes are at p. 134 of the PDF edition
(2) Henry H. Bauer, “The Troubles With Psychiatry — essay review of Saving Normal by Allen Frances and The Book of Woe by Gary Greenberg”, Journal of Scientific Exploration, 29 (2015) 124-30
(3) Donald W. Pfaff, Man and Woman: An Inside Story, Oxford University Press, 2010: p. 147
(4) Wilson Follett, Modern American Usage (edited & completed by Jacques Barzun et al.), Hill & Wang, 1966
(5) Richard Horton, Health Wars: On the Global Front Lines of Modern Medicine, New York Review Books, 2003, p. 306


Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, science is not truth, unwarranted dogmatism in science | Leave a Comment »

We are being routinely misled about health and diet

Posted by Henry Bauer on 2017/03/24

Most of what the media make a fuss about over health or diet should not be believed.

It should not be believed even when it cites peer-reviewed articles or official guidelines. All too often the claims made are based on misuse of statistics and are an abuse of common sense.

That little rant was set off by a piece in the august New York Times: “Pollution leads to greater risk of dementia among older women, study says”.

Alarms were triggered:
“Older women”: Only among older and not younger? Women but not men?

The original article did not improve my mood:
The pollution actually studied was “fine particulate matter, P.M. 2.5, 2.5 micrometers or smaller in diameter”: What about 2.5 to 3, say? Or 3 to 4? And so on.
“Women with the genetic variant APOE4, which increases the risk of Alzheimer’s disease, were more likely to be affected by high levels of air pollution”:
Is this asserting that there’s synergy? That the combined effect is not just the added effects of the two factors? That pollution is not just an independent risk factor but somehow is more effective with APOE4 carriers? So what about APOE3 or APOE2 carriers?

The New York Times piece mentioned some other studies as well:
“[P]renatal exposure to air pollution could result in children with greater anxiety, depression and attention-span disorders”.
“[A]ir pollution caused more than 5.5 million premature deaths in 2013”.

With that sort of assertion, my mind asks, “How on earth could that be known?”
What sort of study could possibly show that? What sort of data, and how much of it, would be required to justify those claims?

So, with the older women and dementia, how were the observational or experimental subjects (those exposed to the pollution) distinguished from the necessary controls that were not exposed to pollution? Controls need to be just like the experimental subjects (in age, state of health, economic circumstances, etc.) with the sole exception that the latter were exposed to pollution and the controls were not.
For the controls not to be exposed to the pollution, obviously the two groups must be geographically separate. Then what other possibly pertinent factors differed between those geographic regions? How was each of those factors controlled for?

In other words, what’s involved is not some “simple” comparison of polluted and not polluted; there is a whole set of possibly influential factors that need somehow to be controlled for.

The more factors, the larger the needed number of experimental subjects and controls; and the required number of data points increases much more than linearly with the number of variables. Even just that realization should stimulate much skepticism about many of the media-hyped stories about diet or health. Still more skepticism is called for when the claim has to do with lifestyle, since the data then depend on how the subjects recall and describe how they have behaved.
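The point about data requirements can be put in concrete terms with toy numbers (my own, purely for illustration): controlling for confounders by stratification requires subjects in every combination of factor levels, so the number of subjects needed grows multiplicatively with each added factor, not linearly.

```python
# Toy illustration (my own numbers, not from the article): controlling for
# confounders by stratification needs subjects in every combination of
# factor levels, so data requirements grow multiplicatively, not linearly.

levels_per_factor = 4     # e.g. four age bands, four income bands, ...
subjects_per_cell = 30    # assumed minimum for a stable estimate per cell

for n_factors in range(1, 6):
    cells = levels_per_factor ** n_factors
    subjects_needed = cells * subjects_per_cell
    print(n_factors, cells, subjects_needed)
# With 5 such factors: 1,024 cells and 30,720 subjects -- an order of
# magnitude more than many of the studies that make the headlines.
```

Under these toy assumptions, merely five confounding factors already demand tens of thousands of subjects; that is the sense in which the required number of data points increases much more than linearly.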

The dementia article was published in Translational Psychiatry, an open-access journal from the Nature publishing group. The study had enrolled 3647 women aged between 65 and 79. That is clearly too small a number for all possibly relevant factors to have been controlled for. Many details make that more than a suspicion, for example, “Women in the highest PM2.5 quartile (14.34–22.55 μg m −3) were older (aged ≥75 years); more likely to reside in the South/Midwest and use hormonal treatment; but engage less in physical activities and consume less alcohol, relative to counterparts (all P-values <0.05. . . )” — in other words, the highest exposure to pollution was experienced by subjects who differed from controls and from other subjects in several ways besides pollution exposure.

At about the same time as the media were hyping the dementia study, there was also “breaking news” about how eating enough fruit and vegetables protects against death and disease, based on the peer-reviewed article “Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality — a systematic review and dose-response meta-analysis of prospective studies”.

Meta-analysis means combining different studies, the assumption being that the larger amount of primary data can make conclusions stronger and firmer. However, that requires that each of the individual studies being drawn on is sound and that the subjects and circumstances are reasonably comparable in all the different studies. In this case, 95 studies reported in 142 publications were analyzed. Innumerable factors need to be considered — the specific fruit or vegetable (one cannot presume that apples and pears have the same effect, nor cauliflower and carrots); and the effects of different amounts of what is eaten must somehow be taken into account. There are innumerable variables, in other words, permitting considerable skepticism about the claims that “An estimated 5.6 and 7.8 million premature deaths worldwide in 2013 may be attributable to a fruit and vegetable intake below 500 and 800 g/day, respectively, if the observed associations are causal” and that “Fruit and vegetable intakes were associated with reduced risk of cardiovascular disease, cancer and all-cause mortality. These results support public health recommendations to increase fruit and vegetable intake for the prevention of cardiovascular disease, cancer, and premature mortality.” Skepticism is yet more called for since health and mortality are influenced to a great extent by genetics and geography, which were not controlled for.
The authors deserve credit, though, for the clause, “if the observed associations are causal”. What everyone should know about statistics is that correlations, associations, never prove causation. That law is almost universally ignored as the media disseminate press releases and other spin from researchers and their institutions, implying that associations are meaningful about what causes what.

It is easy enough to understand why considerable skepticism should be exercised with claims like those about mortality and diet or about dementia and pollution, simply because studies to test these claims properly would need to include much larger numbers of subjects. But an even greater reason to doubt such claims, as well as claims about newly approved drugs and treatments, is that the statistical analyses commonly used are inherently flawed, most particularly by a quite inadequate criterion for statistical significance.

Almost universally in social science and in medical science, statistical significance is defined as p≤0.05: the probability that the results are mere coincidence, owing just to random chance, is less than 5%, in other words less than 1 in 20.
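The meaning of that threshold can be shown with a toy simulation (the group sizes and the z-test here are my illustrative assumptions, not taken from any study cited in this post): when the null hypothesis is true by construction, roughly 1 comparison in 20 still comes out “significant”.

```python
import math
import random

random.seed(42)

def two_sample_p(n=100):
    """One simulated 'study': two groups of n drawn from the SAME normal
    distribution (so the null is true), compared by a z-test with sigma = 1."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

trials = 2000
significant = sum(two_sample_p() < 0.05 for _ in range(trials))
print(f"{significant / trials:.1%} of true-null studies were 'significant'")
```

The printed fraction hovers around 5%: that many “discoveries” appear even when there is, by construction, nothing to discover.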

Several things are wrong with that. Among the most serious are:

  1. That something is not a coincidence, not owing to random chance, does not tell us what it is owing to, what the cause is. It is not necessarily the experimenter’s hypothesis, yet that is the assumption made universally with this type of statistical analysis.
  2. 1 in 20 is a very weak criterion. A study of a non-existent effect still has a 1 in 20 chance of yielding a “statistically significant” result: do 20 such studies, and on average one of them will appear “statistically significant” even though it is wrong.
  3. That something is statistically significant does not mean that the effect is meaningful.
    For example, after I had a TIA (transient ischemic attack, a minor stroke), the neurologist automatically prescribed the “blood thinner” Plavix (clopidogrel) as lessening the risk of further strokes. I am wary of all drugs, since they all have “side” effects, so later I searched the literature and found that Plavix is statistically significantly better at decreasing risk than is aspirin: p = 0.043, better than p≤0.05. However, the event rates found were just 5.83% compared to 5.32% — to my mind not at all a significant difference, and not enough to compensate for the greater risk of “side” effects from clopidogrel than from aspirin, which has been in use far longer by far more people without discovery of seriously dangerous “side” effects. (Chemicals don’t have two types of effect, main and side, those we want and those we don’t want. “Side” effects are just as real as the intended effects.)
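Point 3 can be made concrete with a back-of-the-envelope number needed to treat (NNT), using the two event rates quoted above:

```python
def nnt(risk_standard, risk_new):
    """Number needed to treat: 1 / absolute risk reduction."""
    arr = risk_standard - risk_new
    if arr <= 0:
        raise ValueError("no absolute benefit over the comparator")
    return 1.0 / arr

# Event rates quoted above: aspirin 5.83%, clopidogrel (Plavix) 5.32%
aspirin, clopidogrel = 0.0583, 0.0532
print(f"absolute risk reduction: {aspirin - clopidogrel:.2%}")  # → 0.51%
print(f"NNT: {nnt(aspirin, clopidogrel):.0f}")                  # → 196
```

In other words, about 196 patients must take clopidogrel rather than aspirin for one of them to avoid one event — a magnitude about which the p-value by itself says nothing.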

Many statisticians have pointed out for many years what is wrong with the p-value approach to statistics and its use in social science and in medical science. More than two decades ago, an editorial in the British Medical Journal pointed to “The scandal of poor medical research” [i] with incompetent statistical analysis one of the prime culprits. Matthews [ii] has explained clearly point 1 above. Colquhoun [iii] explains that p ≤ 0.05 makes for wrong conclusions even more often than 1 in 20 times: “If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time”. Gigerenzer [iv] has set out in clear detail the troubles with the commonly used p-value analysis.
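Colquhoun’s point follows from simple arithmetic; the sketch below assumes, for illustration, that only 10% of the hypotheses being tested are real effects and that studies have 80% power (figures of the kind his paper works through):

```python
def false_discovery_rate(prior_true, alpha=0.05, power=0.8):
    """Of all results declared 'significant' at p < alpha, what fraction
    are false positives? It depends on how many tested hypotheses are real."""
    false_positives = alpha * (1 - prior_true)  # true nulls crossing the threshold
    true_positives = power * prior_true         # real effects correctly detected
    return false_positives / (false_positives + true_positives)

# If only 10% of the hypotheses researchers test are real effects:
print(f"{false_discovery_rate(0.10):.0%}")  # → 36%
```

So under these assumptions more than a third of “significant” findings are wrong — far worse than the 1-in-20 that the p≤0.05 label seems to promise.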
Nevertheless, this misleading approach continues to be routine, standard, because it is so simple that many researchers who have no real understanding of statistics can use it. Among the consequences are that most published research findings are false [v] and that newly approved drugs have had to be withdrawn sooner and sooner after their initial approval [vi].
Slowly the situation improves as systemic inertia is penetrated by a few initiatives. A newly appointed editor of the journal Basic and Applied Social Psychology (BASP) announced that p-value analyses would no longer be required [vii], and soon after that they were actually banned [viii].

In the meantime, however, tangible damage is being done by continued use of the p-value approach in the testing and approval of prescription drugs, which adds to a variety of deceptive practices routinely employed by the pharmaceutical industry in clinical trials, see for example Ben Goldacre, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (Faber & Faber, 2013); Peter C. Gøtzsche, Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare (Radcliffe, 2013); David Healy, Pharmageddon (University of California Press, 2012). Gøtzsche and Healy report that prescription drugs, even though “properly” used, are the 3rd or 4th leading cause of death in developed countries.


[i] D. G. Altman, “The scandal of poor medical research”, BMJ, 308 (1994) 283

[ii] R. A. J. Matthews, “Facts versus Factions: The use and abuse of subjectivity in scientific research”, European Science and Environment Forum Working Paper (1998); reprinted as pp. 247-82 in J. Morris (ed.), Rethinking Risk and the Precautionary Principle, Oxford: Butterworth (2000)

[iii] David Colquhoun, “An investigation of the false discovery rate and the misinterpretation of p-values”, Royal Society Open Science, 1 (2014) 140216

[iv] Gerd Gigerenzer, “Mindless statistics”, Journal of Socio-Economics, 33 (2004) 587-606

[v] John P. A. Ioannidis, “Why most published research findings are false”, PLoS Medicine, 2 (#8, 2005) 696-701 (e124)

[vi] Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012, Table 5 (p. 240) and text pp. 238-42

[vii] David Trafimow, Editorial, Basic and Applied Social Psychology, 36 (2014) 1-2

[viii] David Trafimow & Michael Marks, Editorial, Basic and Applied Social Psychology, 37 (2015) 1-2; see also comments by the Royal Statistical Society

Posted in media flaws, medical practices, peer review, prescription drugs, unwarranted dogmatism in science | Tagged: , , , , | Leave a Comment »

Speaking Truth to Big Pharma Power

Posted by Henry Bauer on 2017/03/18

Some time ago I recommended the newsletter of Mad in America, a diligent and reliable commentary on the flaws of modern psychiatric medicine.

A recent issue had links to a superb series of articles by David Healy, a psychiatrist who has spoken truth to Big Pharma and to the conventional (lack of) wisdom, at considerable personal cost. Healy also founded RxISK, a website with information about drug side effects:
Tweeting While Psychiatry Burns
Tweeting while Medicine Burns (Psychopharmacology Part 2)
Burn Baby Burn (Psychopharmacology Part 3)

Also useful in this newsletter: a link to a report of a meta-analysis confirming the Minimal Effectiveness and High Risk of SSRIs

Posted in conflicts of interest, medical practices, politics and science, prescription drugs, science is not truth, scientific culture, scientists are human | Tagged: , , | Leave a Comment »

Modern medicine: danger to public health and public purse?

Posted by Henry Bauer on 2017/02/23

Healthcare costs in the USA are now unmanageable, as illustrated by the frequent bankruptcies of people without good insurance and, just now, by the realization that the Republican promise to “repeal and replace Obamacare” is unworkable if many Americans are not to lose the insurance help they currently have.

One way to reduce costs that is not talked about, and that is unlikely to gain much traction until the crisis becomes catastrophic, would be to call a halt to medical treatments that do harm rather than good. It comes as a surprise to learn, for example, that prescription drugs, used as prescribed, are the 3rd or 4th leading cause of death in developed nations (Peter C. Gøtzsche, Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare, Oxford & New York: Radcliffe, 2013; David Healy, Pharmageddon, University of California Press, 2012).

A recent article at ProPublica and in The Atlantic has much information about unnecessary, often harmful practices that continue in routine medical practice:

When Evidence Says No, But Doctors Say Yes
Years after research contradicts common practices, patients continue to demand them and doctors continue to deliver. The result is an epidemic of unnecessary and unhelpful treatment.
David Epstein, ProPublica    February 22, 2017
This story was co-published with The Atlantic.

The interests vested in present ways of doing things are many and powerful — clinics, hospitals, professional guilds, but chiefly the pharmaceutical industry. So it will not be easy to change the system, despite the fact that dozens of books and articles over the last few decades have described in documented detail what’s wrong with modern medicine.

Posted in medical practices, prescription drugs | Tagged: | 2 Comments »

Anti-psychotic drugs: initial benefit, long-term harm

Posted by Henry Bauer on 2016/08/03

Recently (Trust medical science at your peril (2): What is the evidence, especially in psychiatry?) I recommended the newsletter of Mad in America for disseminating reliable information about psychiatric matters. A recent issue of the Newsletter has links to a very thorough examination of the evidence that anti-psychotic drugs make things worse if used long-term: “The case against antipsychotics — A review of their long-term effects”, by Robert Whitaker (July 2016).

There is considerable support for the hypothesis that psychotic episodes are associated with heightened sensitivity to dopamine. Anti-psychotics ameliorate such episodes by blocking dopamine receptors. These drugs appear to be beneficial immediately, and for perhaps as long as a couple of years. However, once exposed to the drugs, withdrawal almost always has severe bad effects.

It appears that the brain tries to overcome the blocking of the dopamine receptors by increasing the number of those receptors. That takes an appreciably long time, apparently many months if not years, so the consequences become significant only eventually. That explains why withdrawal brings even worse symptoms than the original ones, and why long-term treatment is more harmful than beneficial. The drugs must be used forever, and their cumulative “side” effects are very debilitating.

Non-drug treatment of schizophrenia and other psychoses, sometimes teamed with short-term drug use, has much better long-term outcomes than does continual medication; better outcomes in terms of better all-around functioning and fewer relapses.


Posted in medical practices, prescription drugs, resistance to discovery, science is not truth | Tagged: | 4 Comments »

What to believe? Science is a red herring and a wild-goose chase

Posted by Henry Bauer on 2016/07/24

To be certain about things is reassuring. It allows feelings of safety, security.

For knowledge, for understanding the world, humankind seems to have turned at first to what could be inferred from the spirits of things — the spirits associated with or inherent in everything: in mountains, in trees, in bodies of water. The spirits could be understood, at least partly, because they were similar to people in having emotions and desires.

Eventually — quite recently, only a few thousand years ago — the plurality and hierarchies of spirits and gods yielded to monotheistic religions in most parts of the world. Even more recently, and only in the most powerfully developed countries, religion yielded to science.

That is to say, traditional religion yielded to scientism, the religion of science. Even the monotheistic gods have emotions and desires, but science doesn’t. So knowledge became entirely impersonal, at least in principle.

Nowadays, then, for real certainty we look to science. “Scientific” stands for unquestionably true. Science is the gatekeeper of truth. “Science” and “scientific” are mediators of being certain, being sure about something.

Consequently, a great deal of arguing to-and-fro has to do with whether something is scientific:
Does it emerge from use of the scientific method?
Is it reproducible?
Is it falsifiable?

And if a claim doesn’t satisfy those criteria or equivalent ones then it’s dismissed as not scientific, or as pseudo-science, or as just plain not to be believed.

That’s an indirect way of judging believability, and arguments about whether something is scientific can be and have been highly abstract, complicated, and sophisticated as technical philosophical discourse tends to be.

Instead, why not go directly at the issues of certainty and truth and just ask, what does it take to be justifiably and reliably certain about something?

In any case, although we use science as mediator of certain truth, we’ve also learned that contemporary scientific knowledge and understanding really isn’t always reliably true. Even when an explanation has been based on tangible evidence, and withstood challenges and tests — if it’s properly scientific, in other words — we’ve learned that it may be misleading. Scientific progress with periodic scientific revolutions has continually revealed flaws, deficiencies, errors, in what were for a time the most widely and fully accepted scientific theories.

If something has always happened in the past, can we be certain that it always will happen in the future? We’ve learned that we cannot be quite certain.

When an explanation has always worked in the past, can we be certain that it always will work in the future? We’ve learned that we cannot be quite certain.

When tangible things are sub-divided into their ultimate components, those turned out to be nothing like objects accessible to direct human observation. They do not fit our concepts of particles or energy, although many of their reactions can be calculated using sometimes particle equations and sometimes wave equations. They behave sometimes as though they were locatable, delimited in space-time, and at other times appear to be “non-local”, not so delimited.

In other words, we’ve learned that we cannot get certain and humanly comprehensible understanding of everything about the whole of the natural world. It’s surely time to accept that, that human beings will never attain complete certainty.

That could be liberating. It would make more feasible pragmatic, non-ideological communication and cooperative action — if only we could be rid of the ideologues: the true believers in a religion, including the true believers in scientism, the religion of science. Anyone who claims complete certainty has insufficient warrant for that claim. The world and its behaviors can be known only within degrees of probability. Instead of arguing about whether something is scientific or whether it is true, we ought to be discussing plausibility, likelihood, utility, risk.

Instead of dismissing as pseudo-science the claims that Loch Ness Monsters are real animals, we should be content to say, “Feel free to believe that if the evidence seems to you sufficiently convincing. For my part, I’ll wait until someone shows me an actual specimen or an indubitable bit of one”. And similarly with yetis and other cryptids, and with UFOs, and with all other anomalous or Fortean reports or claims.

Instead of arguing over being for or against vaccination, we should ask for the statistical data of harm possibly caused by each specific vaccine. For instance, since in many countries the chance of becoming infected by polio is less than the risk of contracting polio from the oral vaccine, perhaps official sources might be less dogmatic about enforcing use of that particular vaccine (“Polio vaccines now the #1 cause of polio paralysis”; “Oral polio vaccine-associated paralysis in a child despite previous immunization with inactivated virus”; “Bill Gates’ polio vaccine program caused 47,500 cases of paralysis death“).

And so on. For every drug and every treatment, we should demand that the Food and Drug Administration require data on NNT and NNH — NNT: the number of patients who need to be treated in order for 1 patient to benefit, compared with NNH: the number of patients who must receive a drug in order for 1 patient to experience harm [How (not) to measure the efficacy of drugs]. That would go a long way toward decreasing the number of people nowadays being killed by prescription drugs, which are the 3rd or 4th leading cause of death in First-World countries (Peter C. Gøtzsche, Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare [Radcliffe, 2013]; David Healy, Pharmageddon [University of California Press, 2012]).
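The NNT/NNH comparison is simple arithmetic; here is a sketch with entirely hypothetical event rates (not figures for any actual drug):

```python
def number_needed(absolute_risk_difference):
    """NNT (benefit) or NNH (harm): reciprocal of the absolute risk difference."""
    return 1.0 / absolute_risk_difference

# Hypothetical drug: cuts the target-event rate from 4.0% to 3.0%,
# but raises the serious-side-effect rate from 1.0% to 2.5%.
nnt_val = number_needed(0.040 - 0.030)  # treat 100 for one patient to benefit
nnh_val = number_needed(0.025 - 0.010)  # treat ~67 for one patient to be harmed
print(round(nnt_val), round(nnh_val))  # → 100 67
```

In this made-up case NNH is smaller than NNT: the drug harms one patient sooner than it helps one, however “statistically significant” its trial results may have been.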

We need more data and less dogmatism.



Posted in medical practices, prescription drugs, science is not truth, unwarranted dogmatism in science | Tagged: , , , , , , | Leave a Comment »

Trust medical science at your peril (2): What is the evidence, especially in psychiatry?

Posted by Henry Bauer on 2016/07/15

All too often, the evidence turns out to be nothing more than statistical association: “Trust medical science at your peril: Correlations never prove causation”.

A particular example of confusing association with causation is the reliance on biomarkers:

“The Institute of Medicine Report, Evaluation of Biomarkers and Surrogate Endpoints in Chronic Disease (IOM 2010), finds that none of the commonly used biomarkers is a valid measure of the illness it supposedly tracks. As to subsequent treatment, Järvinen et al. have pointed out that ‘There are no valid data on the effectiveness . . . [of] statins, antihypertensives, and bisphosphonates’ (the last, e.g. Fosamax, are prescribed against osteoporosis) — British Medical Journal, 342 (2011) doi: 10.1136/bmj.d2175.
That last quote is surely an astonishing assertion, given that innumerable individuals are being fed statins and blood-pressure drugs and bisphosphonates not because they feel ill in any way but purely on the basis of levels of biomarkers (bone density in the case of bisphosphonates)” (Everyone is sick?)

Supporting evidence is sadly lacking for a wide range of accepted, standard medical practices. For at least a couple of decades, insiders and well-informed observers have described and documented the failings of modern medicine: “What’s wrong with present-day medicine”.

Bad as things are with the treatment of physical illnesses, they are much worse where psychiatry is involved. This blog post was stimulated by the informative article, “In Search of an Evidence-based Role for Psychiatry” by John Read, Olga Runciman, & Jacqui Dillon.

I had learned of it through the Newsletter of Mad in America, an excellent website dedicated to disseminating reliable information about psychiatric matters. One can sign up for the Newsletter at


Posted in conflicts of interest, consensus, medical practices, prescription drugs | Tagged: | Leave a Comment »