Skepticism about science and medicine

In search of disinterested science

Politics, science, and medicine

Posted by Henry Bauer on 2017/12/31

I recently posted a blog about President Trump firing members of the Presidential Advisory Council on HIV/AIDS, in which I concluded:
“Above all, the sad and bitter fact is that truth-seeking does not have a political constituency, be it about HIV, AIDS, or anything else”.

That sad state of affairs, the fragile foothold that demonstrable truth has in contemporary society, is owing to a number of factors, including that “Science is broken” and the effective hegemony of political correctness (Can truth prevail?).

A consequence is that public policies are misguided about at least two issues of significant social impact: HIV/AIDS (The Case against HIV), and human-caused global warming (A politically liberal global-warming skeptic?).

Science and medicine are nowadays characterized, on quite a number of matters, by dogmatic adherence to views that run counter to the undisputed evidence (Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012). To cite just one absurdity (on a matter that has no significant public impact): in cosmology, the prevailing Big-Bang theory of the universe requires that “dark matter” and “dark energy” make up most of the universe, the “dark” signifying that they have never been directly observed. There are no credible suggestions for how they might be observed directly, and nothing is known about them except that their postulated influences are needed to make Big-Bang theory comport with the facts of the real world. Moreover, a less obviously flawed theory has been available for decades: the “steady-state” theory, which envisages continual creation of new matter and for which observational evidence was collected and published by Halton Arp (Quasars, Redshifts and Controversies, Interstellar Media, 1987; Seeing Red: Redshifts, Cosmology and Academic Science, Apeiron, 1998).

Dozens of books have documented what is wrong with contemporary medicine, science, and academe:
Critiques of contemporary science and academe;
What’s wrong with present-day medicine.

The common feature of all the flaws is the failure to respect the purported protocols of “the scientific method”, namely, to test hypotheses against reality and to keep testing theories against reality as new evidence comes in.

Some political commentators have described our world as “post-truth”, and a variety of social commentators have held forth for decades about a “post-modern” world. But the circumstances are not so much “post-truth” or “post-modern” as pre-Enlightenment.

So far as we know and guess, humans accepted as truth the dogmatic pronouncements of elders, shamans, priests, kings, emperors and the like until, perhaps half a millennium ago, the recourse to observable evidence began to supersede acceptance of top-down dogmatic authority. Luther set in motion the process of taking seriously what the Scriptures actually say instead of accepting interpretations from on high. The religious (Christian only) Reformation was followed by the European Enlightenment; the whittling away of political power from traditional rulers; the French Revolution; the Scientific Revolution. By and large, it became accepted, gradually, that truth is to be found by empirical means, that explanations should deal with the observed natural world, that beliefs should be tested against tangible reality.

Science, in its post-17th-century manifestation as “modern science”, came to be equated with tested truth. Stunning advances in understanding confirmed science’s ability to learn accurately about the workings of nature. Phenomena of physics and of astronomy came to be understood; then chemistry; then sub-atomic structure, relativity, quantum mechanics, biochemistry … how could the power of science be disputed?

So it has been shocking, and is by no means fully digested, that “science” has become untrustworthy, as shown over the last few decades by, for instance, increasing episodes of dishonesty, fraud, and unreproducible claims.

Not yet widely realized is the sea change that has overtaken science since about the middle of the 20th century, the time of World War II. It’s not the scientific method that determines science, it’s the people who are doing the research and interpreting it and using it; and the human activity of doing science has changed out of sight since the early days of modern science. In a seriously oversimplified nutshell:

The circumstances of scientific activity have changed from roughly pre-WWII to nowadays. What was once a cottage industry of voluntarily cooperating, independent, largely disinterested ivory-tower intellectual entrepreneurs, in which science was free to do its own thing, namely the unfettered seeking of truth about the natural world, has become a bureaucratic corporate-industry-government behemoth in which science has been pervasively co-opted by outside interests and is not free to do its own thing because of pervasive conflicts of interest. Influences and interests outside science now control the choices of research projects and the decisions of what to publish and what not to make public.

What science is purported to say is determined by people; actions based on what science supposedly says are chosen by people; so nowadays it is political and social forces that determine beliefs about what science says. Thus politically left-leaning people and groups acknowledge no doubt that HIV causes AIDS and that human generation of carbon dioxide is the prime forcer of climate change; whereas politically right-leaning people and groups express doubts or refuse flatly to believe those things.

For more detailed discussion of how the circumstances of science have changed, see “Three stages of modern science”; “The science bubble”; and chapter 1 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017).

For how to make science a public good again, to make science truly reflect evidence rather than being determined by political or religious ideology, see chapter 12 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017).



Science is broken: Illustrations from Retraction Watch

Posted by Henry Bauer on 2017/12/21

I commented before about “Science is broken: Perverse incentives and the misuse of quantitative metrics have undermined the integrity of scientific research”. On 18 December, the magazine The Scientist published “Top 10 Retractions of 2017 — Making the list: a journal breaks a retraction record, Nobel laureates Do the Right Thing, and Seinfeld characters write a paper”, compiled by Retraction Watch. It should be widely read and digested for an understanding of the jungle of unreliable stuff nowadays put out under the rubric of “science”.

See also “Has all academic publishing become predatory? Or just useless? Or just vanity publishing?”



Blood pressure: Official guidelines make no sense

Posted by Henry Bauer on 2017/11/28

These guidelines make no sense because

  1. BP increases normally with age, as known for more than half a century; yet guidelines for what is said to be “normal” and what is called “hypertension” ignore the correlation with age.
  2. The guidelines are not based on pertinent data because the dependence on age is not properly taken into account.

It’s no wonder, then, that the guidelines were changed in one way in 2013 and in the opposite way just four years later.
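The arithmetic behind point 1 can be made concrete. The sketch below uses the old “one hundred plus your age” rule of thumb for typical systolic pressure; that formula is only an illustrative assumption, not a clinical standard, but it shows how a single fixed cutoff inevitably labels typical older people as hypertensive:

```python
# Sketch of point 1: a fixed cutoff ignores the normal rise of BP
# with age. The age trend here is the old "100 plus your age" rule
# of thumb for systolic pressure -- an assumption for illustration,
# not a clinical formula.

FIXED_CUTOFF = 140  # mmHg systolic, applied to every age

def typical_systolic(age):
    """Illustrative average systolic BP at a given age."""
    return 100 + age

for age in (30, 40, 50, 60, 70, 80):
    avg = typical_systolic(age)
    label = "called hypertensive" if avg > FIXED_CUTOFF else "within guideline"
    print(f"age {age}: typical systolic ~{avg} mmHg -> {label}")
```

Under any rising age trend, a cutoff chosen from the averages of 30-40-year-olds will sweep in ever larger fractions of each older cohort.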

At the end of 2013, the most authoritative recommendations for managing blood pressure stated that “There is strong evidence to support treating hypertensive persons aged 60 years or older to a BP goal of less than 150/90 mmHg and hypertensive persons 30 through 59 years of age to a diastolic goal of less than 90 mmHg; however, there is insufficient evidence in hypertensive persons younger than 60 years for a systolic goal, or in those younger than 30 years for a diastolic goal, so the panel recommends a BP of less than 140/90 mmHg for those groups based on expert opinion” [emphases added].

Note first that the criterion for describing someone as “hypertensive” is based on insufficient evidence, which has not prevented modern medicine from being quite dogmatic about calling people of any age hypertensive when their BP exceeds what is the common average in healthy 30-40-year-olds, namely about 140/90.

Then note that the goal of ≤150 systolic is not as low as what had been recommended dogmatically for the previous three decades or more.

And then contemplate how to value “expert opinion” that is based on insufficient evidence.

In “Don’t take a pill if you’re not ill”  I made a point I’ve not seen elsewhere: population-average numbers for blood sugar, cholesterol, and BP are taken as the desirable upper limits and medication is administered to lower everyone’s numbers to those levels; yet no consideration is given to raising the numbers if they are lower than the average, even as there is evidence that, for example, higher cholesterol is good for older people since it is associated with lower mortality (1, 2). If the population average is more desirable than higher numbers, why aren’t the averages regarded as better than lower numbers as well?

In “Everyone is sick?” I cited the Institute of Medicine finding that measures like (and including) BP are not symptoms of illness even as they are treated as such; discussed further re BP in “‘Hypertension’: An illness that isn’t illness”.

“60 MINUTES on aging — correlations or causes?” cited the finding that mini-strokes in older people were less frequent with higher blood pressure, the very opposite of the official dogma.

So now in 2017 the guidelines call for significantly lower BP than the 2013-14 set, namely “normal (<120/80 mmHg), elevated (120-129/<80 mmHg), stage 1 hypertension (130-139/80-89 mmHg), or stage 2 hypertension (≥140/90 mmHg)”; though it is conceded that this is merely a “strong recommendation” based on “moderate-quality evidence” (3).

Defining hypertension as ≥130 makes it likely that some of this “moderate-quality” evidence came from the SPRINT trial, which concluded (4) that “Among patients at high risk for cardiovascular events but without diabetes, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, resulted in lower rates of fatal and nonfatal major cardiovascular events and death from any cause, although significantly higher rates of some adverse events were observed in the intensive-treatment group” [emphasis added].

There is here a conundrum: how could there be lower rates of “fatal and nonfatal major cardiovascular events” when Table S5 in the Supplementary Appendix reports only 118 “Serious Adverse Events and Conditions of Interest Classified as Possibly or Definitely Related to the Intervention” under standard treatment (to ≤140), by contrast to 220 under the intensive treatment? The latter count certainly confirms that “significantly higher rates of some adverse events were observed in the intensive-treatment group”.
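The size of that discrepancy is easy to put in proportion using just the two counts from Table S5 (assuming, for this back-of-envelope comparison only, that the two arms were of roughly equal size, so that raw counts are directly comparable):

```python
# Back-of-envelope comparison of the two Table S5 counts quoted above.
# Assumption (for illustration only): the two SPRINT arms were of
# roughly equal size, so raw event counts can be compared directly.
standard_arm_events = 118   # standard treatment, target < 140 mmHg
intensive_arm_events = 220  # intensive treatment, target < 120 mmHg

ratio = intensive_arm_events / standard_arm_events
print(f"intensive/standard ratio of treatment-related serious "
      f"adverse events: {ratio:.2f} (~{(ratio - 1):.0%} more)")
```

On these two counts alone, the intensive arm shows nearly twice the number of treatment-attributed serious adverse events.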

At any rate, all these data are incapable of delivering a meaningful answer about possible risks posed by high BP. Since BP increases with age, the only way to detect its possible risk would be to monitor the health and mortality rates of cohorts of people of the same age, and the SPRINT trial did not do that.

There are plenty of other reasons to be wary of the SPRINT study. The Supplementary Appendix asserts that “All components of the SPRINT study protocol were designed and implemented by the investigators. The investigative team collected, analyzed, and interpreted the data. All aspects of manuscript writing and revision were carried out by the coauthors. The content is solely the responsibility of the authors”. But which ones exactly? There are 6 pages of names; there were 102 clinical sites; a trial coordinating center and centers for MRI reading and electrocardiography reading; an independent data and safety monitoring board; institutional review boards at each clinical site; and a steering committee (13 members) and a writing committee (members not detailed in the Appendix).

When everyone is responsible, then in practice no one is responsible.

Rhetorical questions:

  1. Who conceived the idea of testing more stringent criteria than formerly for controlling BP?

  2. What data stimulated that idea, given that the 2013 guidelines cited above revealed a lack of evidence for a systolic goal in persons younger than 60?

  3. Why are there no statements about conflicts of interest? Biomedical research requires funding. Research articles typically list potential conflicts of interest, and it is well known that most biomedical scientists have some sort of consulting or other relationship with drug companies. Here the only possible clue lies in the Acknowledgments: “The SPRINT investigators acknowledge the contribution of study medications (azilsartan and azilsartan combined with chlorthalidone) from Takeda Pharmaceuticals International, Inc.”

The official BP guidelines make no sense because

  1. BP increases normally with age, as known for more than half a century; yet guidelines for what is said to be “normal” ignore the correlation with age.
  2. The guidelines are not based on pertinent data but on admittedly “moderate-quality” evidence; that evidence is actually of much lower quality still, because it offers no age-specific information.


  1. Schatz et al., “Cholesterol and all-cause mortality in elderly people from the Honolulu Heart Program: a cohort study”, Lancet, 358 (2001) 351-5.
  2. Chapter 3 in Joel M. Kauffman, Malignant Medical Myths, Infinity Publishing, 2006; ISBN 0-7414-2909-8.
  3. Adam S. Cifu & Andrew M. Davis, “JAMA Clinical Guidelines Synopsis: Prevention, detection, evaluation, and management of high blood pressure in adults”, JAMA; published online 20 November 2017; Clinical Review & Education, E1-3.
  4. The SPRINT Research Group, “A randomized trial of intensive versus standard blood-pressure control”, New England Journal of Medicine, 373 (2015) 2103-16.



Fog Facts: Side effects and re-positioning of drugs

Posted by Henry Bauer on 2017/11/23

Fog Facts: things that are known and yet not known —
[not known to the conventional wisdom, the general public, the media
but known to those (few) who are genuinely informed about the subject]

For that delightful term, Fog Facts, I’m grateful to Larry Beinhart who introduced me to it in his novel “The Librarian”. There it’s used in connection with political matters, but it’s entirely appropriate for the disconnect between “what everyone knows” about blood pressure, cholesterol, prescription drugs, and things of that ilk, and what the actual facts are in the technical literature.

For example, the popular shibboleth is that drug companies spend hundreds of millions of dollars in the development of a new drug, and that’s why they need to make such large profits to plough back into research. The truth of the matter is that most new drugs originate in academic research, conducted to a great extent at public expense; and drug companies spend more on advertising and marketing than they do on research. All that is known to anyone who cares to read material other than what the drug-company ads say and what the news media disseminate; and yet it’s not known because too few people read the right things, even books by former editors of medical journals and academic researchers at leading universities and published by mainstream publishers; see “What’s wrong with modern medicine”.

When it comes to drug “development”, the facts are all hidden in plain view. There’s even a whole journal about it, Nature Reviews — Drug Discovery, that began publication in 2002. I came to learn about this because Josh Nicholson had alerted me to an article in that journal, “Drug repositioning: identifying and developing new uses for existing drugs” (by Ted T. Ashburn and Karl B. Thor, 3 [2004] 673-82). I had never heard of “drug repositioning”. What could it mean?

Well, it means finding new uses for old drugs. And the basic reason for doing so is that it’s much easier and more profitable than trying to design or discover a new drug, because old drugs have already been approved as safe, and it’s already known how to manufacture them.

What seems obvious, however — albeit only as a Fog Fact — is that the very success of repositioning drugs should be a red flag warning against the drug-based medicine or drug-first medicine or drug-besotted medicine that has become standard practice in the United States. The rationale for prescribing a drug is that it will fix what needs attending to without seriously and adversely affecting anything else, in other words that there are no serious “side” effects. But repositioning a drug shows that it has a comparably powerful effect on something other than its original target. In other words, “side” effects may be as powerful and significant as the originally intended effect. Ashburn and Thor give a number of examples:

Cymbalta was originally prescribed to treat depression, anxiety, diabetic peripheral neuropathy, and fibromyalgia (all at about the same dosage, which might cause one to wonder how many different mechanisms or systems are actually being affected besides the intended one). The listed side effects do not include anything about urination, yet the drug has been repositioned as Duloxetine SUI to treat “stress urinary incontinence (SUI), a condition characterized by episodic loss of urine associated with sharp increases in intra-abdominal pressure (for example, when a person laughs, coughs or sneezes)”; and “Lilly is currently anticipating worldwide sales of Duloxetine SUI to approach US $800 million within four years of launch”.

Dapoxetine was not a success for analgesia or against depression, but came into its own to treat premature ejaculation.

Thalidomide was originally marketed to treat morning sickness, but it produced limb defects in babies. Later it was found effective against “erythema nodosum leprosum (ENL), an agonizing inflammatory condition of leprosy”. Moreover, since the birth defects may have been associated with blocking development of blood vessels, thalidomide might work against cancer; and indeed “Celgene recorded 2002 sales of US $119 million for Thalomid, 92% of which came from off-label use of the drug in treating cancer, primarily multiple myeloma . . . . Sales reached US $224 million in 2003 . . . . The lesson from the thalidomide story is that no drug is ever understood completely, and repositioning, no matter how unlikely, often remains a possibility” [emphasis added: once the FDA has approved drug A to treat condition B, individual doctors are allowed to prescribe it for other conditions as well, although drug companies are not allowed to advertise it for those other uses. That legal restriction is far from always honored, as demonstrated by the dozens of settlements paid by drug companies for breaking the law.]

Perhaps the prize for repositioning (so far) goes to Pfizer, which turned sildenafil, an unsuccessful treatment for angina, into Viagra, a very successful treatment for “erectile dysfunction”: “By 2003, sildenafil had annual sales of US $1.88 billion and nearly 8 million men were taking sildenafil in the United States alone”.

At any rate, Ashburn and Thor could not be more clear: the whole principle behind repositioning is that it’s more profitable to see what existing drugs might do than to look for what might be, biologically speaking, the best treatment for a given ailment. So anti-depressants get approved and prescribed against smoking, premenstrual dysphoria, or obesity; a Parkinson’s drug and a hypertension drug are prescribed for ADHD; an anti-anxiety medication is prescribed for irritable bowel syndrome; Alzheimer’s, whose etiology is not understood, gets treated with Reminyl (generic galantamine), which as Nivalin is also supposed to treat polio and paralysis. Celebrex, a VIOXX-type anti-arthritic, can be prescribed against breast and colon cancer; treatment of enlarged prostate is by the same drug used to combat hair loss; the infamous “morning after” pill for pregnancy termination can treat “psychotic major depression”; Raloxifene, used to treat breast and prostate cancer, is magically able also to treat osteoporosis.

And so on and so forth. This whole business of drug repositioning exposes the fallacy of the concept that it is possible to find “a silver bullet”, a chemical substance that can be introduced into the human body to accomplish just one desired thing. That concept ought to be recognized as absurd a priori, since we know that human physiology is an interlocking network of signals, feedback, attempted homeostasis, defenses against intruders.

It is one thing to use, for brief periods of time, toxins that can help the body clear infections — sulfa drugs, antibiotics. It is quite another conceit and ill-founded hubris to administer powerful chemicals to decrease blood pressure, lower cholesterol, and the like, in other words, to attempt to alter interlocking self-regulating systems as though one single aspect of them could be altered without doing God-only-knows-what-else elsewhere.

The editorial in the first issue (January 2002) of Nature Reviews Drug Discovery was actually clear about this: “drugs need to work in whole, living systems”.

But that editorial also gave the reason for the present-day emphasis on medicine by drugs: “Even with vastly increased R & D spending, the top 20 pharmaceutical companies still churn out only around 20 drugs per year between them, far short of the 4-5 new drugs that analysts say they each need to produce to justify their discovery and development costs”.

And the editorial also mentions one of the deleterious “side” effects of the rush to introduce new drugs: “off-target effects . . . have led to the vastly increased number of costly late-stage failures seen in recent years (approximately half the withdrawals in the past 20 years have occurred since 1997)” — “off-target effects” being a synonym for “side” effects.

It’s not only that new drugs are being rushed to market. As a number of people have pointed out, drug companies also create their own markets by inventing diseases like attention-deficit disorder, erectile dysfunction, generalized anxiety disorder, and so on and on. Any deviation of behavior from what might naively be described as “normal” offers the opportunity to discover a new disease and to re-position a drug.

The ability of drug companies to sell drugs for new diseases is helped by the common misconception about “risk factors”. Medication against hypertension or high cholesterol, for example, is based on the presumption that both those raise the risk of heart attack, stroke, and other undesirable contingencies because both are “risk factors” for such contingencies. But “risk factor” describes only an observed association, a correlation, not an identified causation. Correlation never proves causation. “Treating” hypertension or high cholesterol makes sense only if those things are causes, and they have not been shown to be that. On the other hand, lifelong ingestion of drugs is certainly known to have potentially dangerous consequences.
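The point that correlation never proves causation can be shown with a toy simulation (everything below is invented purely for illustration): a hidden factor, call it “aging”, drives both a biomarker and a bad outcome, so the two correlate strongly even though the biomarker causes nothing, and “treating” the biomarker cannot change the outcome:

```python
import random

random.seed(0)

# Toy model, invented for illustration: a hidden factor ("aging")
# drives both a biomarker and a bad outcome; the biomarker itself
# causes nothing.
n = 10_000
aging = [random.random() for _ in range(n)]
biomarker = [a + 0.3 * random.random() for a in aging]
outcome = [a + 0.3 * random.random() for a in aging]  # driven by aging only

def pearson(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"biomarker-outcome correlation: {pearson(biomarker, outcome):.2f}")

# "Treating" the biomarker (halving it) cannot touch the outcome,
# because the outcome was generated from aging alone.
treated = [b / 2 for b in biomarker]
print(f"outcome mean after 'treating' the biomarker: "
      f"{sum(outcome) / n:.3f} (unchanged)")
```

The biomarker here is an excellent “risk factor” in the statistical sense, and a useless target for intervention, which is exactly the distinction the paragraph above draws.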

Modern drug-based, really drug-obsessed medical practice is as misguided as “Seeking Immortality”.


Science is broken

Posted by Henry Bauer on 2017/11/21

Science is broken: Perverse incentives and the misuse of quantitative metrics have undermined the integrity of scientific research is the full title of an article published in the on-line journal AEON. I learned of it through a friend who was interested in part because the authors are at the university from which I retired some 17 years ago.

The article focuses on the demands on researchers to get grants and publish, and that their achievements are assessed quantitatively rather than qualitatively, through computerized scoring of such things as Journal Impact Factor and numbers of citations of an individual’s work.
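The chief metric at issue, the Journal Impact Factor, has a simple published definition: citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the count of “citable” items from those two years. A sketch (with invented numbers) shows both the formula and one obvious way the denominator can be gamed:

```python
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Journal Impact Factor for year Y: citations received in Y
    to items from years Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# Invented example: 200 recent citable articles drew 1,000 citations.
print(f"JIF = {impact_factor(1000, 200):.1f}")

# Gaming the denominator: reclassify half the items as "non-citable"
# (editorials, letters) and the score doubles with no change in the
# underlying science.
print(f"JIF after reclassification = {impact_factor(1000, 100):.1f}")
```

A single number this easy to manipulate is a fragile proxy for research quality, which is much of what Edwards and Roy argue.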

I agree that those things are factors in what has gone wrong, but there are others as well.

The AEON piece is an abbreviated version of the full article in Environmental Engineering Science (34 [2017] 51-61; DOI: 10.1089/ees.2016.0223). I found it intriguing that the literature cited in it overlaps very little with the literature with which I’ve been familiar. That illustrates how over-specialized academe has become, and with that the intellectual life of society as a whole. There is no longer a “natural philosophy” that strives to integrate knowledge across the board, from all fields and specializations; and there are not the polymath public intellectuals who could guide society through the jungle of ultra-specialization. So it is possible, as in this case of “science is broken”, for different folk to reach essentially the same conclusion by extrapolating from quite different sets of sources and quite independently of one another.

I would add more factors, or perhaps context, to what Edwards and Roy emphasized:

The character of research activity has changed out of sight since the era of “modern science” began; for example, the number of wannabe “research universities” in the USA has tripled or quadrupled since WWII — see “Three stages of modern science”; “The science bubble”; chapter 1 in Science Is Not What You Think [McFarland 2017].

This historical context shows how the perverse incentives noted by Edwards and Roy came about. Honesty and integrity, dedication to truth-seeking above all, were notable aspects of scientific activity when research was something of an ivory-tower avocation; nowadays research is so integrated with government and industry that researchers face much the same difficulties as professionals who seek to practice honesty and integrity while working in the political realm or the financial realm: the system makes conflicts of interest, institutional as well as personal, inevitable. John Ziman (Prometheus Bound, Cambridge University Press) pointed out how the norms of scientific practice nowadays differ from those traditionally associated with science “in the good old days” (the “Mertonian” norms of communality, universality, disinterestedness, skepticism).

My special interest has long been in the role of unorthodoxies and minority views in the development of science. The mainstream, the scientific consensus, has always resisted drastic change (Barber, “Resistance by scientists to scientific discovery”, Science, 134 [1961] 596–602), but nowadays that resistance can amount to suppression (see “Science in the 21st century”; Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth [McFarland, 2012]). Radical dissent from mainstream views is nowadays expressed openly almost only by long-tenured full professors or by retired people.

I’m in sympathy with the suggestions at the end of the formal Edwards and Roy paper, but I doubt that even those could really fix things since the problem is so thoroughgoingly systemic. Many institutions and people are vested in the status quo. Thus PhD programs will not change in the desired direction so long as the mentoring faculty are under pressure to produce more publications and grants, which leads to treating graduate students as cheap hired hands pushing the mentor’s research program instead of designing PhD research as optimum for neophytes to learn to do independent research. The drive for institutional prestige and status and rankings seems the same among university leaders, and they seek those not by excelling in “higher education” but by winning at football and basketball and by getting and spending lots of grant money on “research”. How to change that obsession with numbers: dollars for research, games won in sports?

That attitude is not unique to science or to academe. In society as a whole there has been increasing pressure to find “objective” criteria to avoid the biases inherent inevitably in human judgments. Society judges academe by numbers — of students, of research expenditures, of patents, of magnitude of endowment, etc. — and we compare nations by GDP rather than level of satisfaction among the citizens. In schools we create “objective” and preferably quantifiable criteria like “standards of learning” (SOLs) that supersede the judgments of the teachers who are in actual contact with actual students.

Edwards and Roy cite Goodhart’s Law, which states that “when a measure becomes a target, it ceases to be a good measure”; it was new to me, and it encapsulates so nicely much of what has gone wrong. For instance, in less competitive times, the award of a research grant tended to attest the quality of the applicant’s work; but as everything increased in size, and the amount of grants brought in became the criterion of quality of applicant and of institution, the aim of research became to get more grants rather than to do the most advancing work, which, if successful, would bring real progress as well as more research funds. SOLs induced teachers to cheat by sharing answers with their students before giving the test. And so on and on. The cart before the horse. The letter of every law becomes the basis for action instead of the human judgment that could put into practice the spirit of the law.
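Goodhart’s Law can be put in miniature with a deliberately toy example; the numbers below are invented purely to show the mechanism, not drawn from any study. Once “papers published” is the target, splitting one solid study into minimal publishable units raises the score without adding any knowledge:

```python
# Goodhart's Law in miniature: the metric counts papers, so slicing
# one study into five raises the score while total insight stays flat.
# All numbers are invented for illustration.

def score_by_count(papers):
    """The metric: how many papers were published."""
    return len(papers)

one_solid_study = [{"insight": 10}]
salami_sliced = [{"insight": 2}] * 5  # same total insight, five papers

for label, papers in [("one solid study", one_solid_study),
                      ("salami-sliced", salami_sliced)]:
    total_insight = sum(p["insight"] for p in papers)
    print(f"{label}: metric score = {score_by_count(papers)}, "
          f"total insight = {total_insight}")
```

The measure and the thing it was meant to measure have come apart, which is Goodhart’s point.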


Can truth prevail?

Posted by Henry Bauer on 2017/10/08

Recently I joined the Heterodox Academy, whose mission is to promote viewpoint diversity:

We are a politically diverse group of social scientists, natural scientists, humanists, and other scholars who want to improve our academic disciplines and universities.
We share a concern about a growing problem: the loss or lack of “viewpoint diversity.” When nearly everyone in a field shares the same political orientation, certain ideas become orthodoxy, dissent is discouraged, and errors can go unchallenged.
To reverse this process, we have come together to advocate for a more intellectually diverse and heterodox academy.

My personal focus for quite some time has been the lack of viewpoint diversity on scientific issues — HIV/AIDS, global warming, and a host of less prominent topics (see Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth). But earlier I had been appalled — and still am — over political correctness, by which I mean the dogmatic assertion that certain sociopolitical views must not only prevail but must be enforced, including by government action.

I became aware of political correctness when it came to my university in the late 1980s (distinctly later than elsewhere) and led to the resignation of Alan Mandelstamm, a nationally renowned teacher of economics who had for more than a decade attracted more students to his classes than any other teacher of any subject, and whose classes a variety of faculty members from other fields sat in on purely for the learning experience. I’ve described the circumstances of Al’s resignation in a couple of articles (The trivialization of sexual harassment: Lessons from the Mandelstamm Case; Affirmative action at Virginia Tech: The tail that wagged the dog). Al passed away some years ago; his obituary has been funded to remain online permanently, and the contributors to it testify to what a marvelous instructor he was, to the benefit of untold numbers of individuals: four years after his death, former students and associates who learn of his passing continue to add their recollections. Al and I had both participated in the Virginia chapter of the National Association of Scholars (NAS), which stands for traditional academic ideals:
NAS is concerned with many issues, including academic content, cost, unfairness, academic integrity, campus culture, attitudes, governance, and long-term trends. We encourage commitment to high intellectual standards, individual merit, institutional integrity, good governance, and sound public policy.

What that involves in practice is illustrated in the newsletter I edited until my retirement.

Common to NAS, the Heterodox Academy, and dissent from dogmatism on HIV/AIDS, global warming, and many other issues is the belief that views and actions ought to be consonant with, and indeed formed by, the available evidence and logical inferences from it: by the truth, in other words, at least as close as humans can come to it at any given time.

Ideologies and worldviews can make it difficult for us even to acknowledge what the evidence is when it seems incompatible with our beliefs. Since my interest for many decades has been in unorthodoxies, I’ve looked into the evidence on a greater number of controversial issues, and in more detail and depth, than most people have had occasion to. The frustrating consequence is that nowadays many of the people with whom I share the preponderance of sociopolitical preferences are not with me regarding HIV/AIDS or global warming; I’m the rare example of “A politically liberal global-warming skeptic”; and I wish that those who seem to agree with me did not include people whose sociopolitical views and actions are abhorrent to me (say, Ted Cruz or Jeff Sessions).

At any rate, in science and in the humanities and in politics, in all aspects of human life, the thing to aim for is to find the best evidence and to be guided by it. Through the Heterodox Academy I learned recently of the Pro-Truth Pledge; see “How to address the epidemic of lies in politics: The ‘Pro-Truth Pledge,’ based on behavioral science research, could be part of the answer”.

The badge of that pledge is now on my personal website, and I encourage others to join this venture.


I don’t expect quick results, of course, but “The journey of a thousand miles begins with one step” (often misattributed to Chairman Mao, but traceable more than a millennium further back to Lao Tzu or Laozi, founder of Taoism).

Posted in conflicts of interest, consensus, global warming, politics and science, science is not truth, science policy, scientific culture, scientists are human, unwarranted dogmatism in science | Tagged: , , , , | Leave a Comment »

HPV vaccination: a thalidomide-type scandal

Posted by Henry Bauer on 2017/09/17

I’ve posted a number of times about the lack of proof that HPV causes cervical cancer, and about the fact that anti-HPV vaccines are being touted widely by officialdom as well as by manufacturers even though the vaccines have been associated with an unusually high number of adverse reactions, some of them very severe, literally disabling.

Long-time medical journalist and producer of award-winning documentaries, Joan Shenton, has just made available the first of a projected trilogy, Sacrificial Virgins, about the dangers of anti-HPV vaccines:

The website, WHAT DOCTORS WON’T TELL YOU, comments in this way: “HPV vaccine ‘a second thalidomide scandal’, says new YouTube documentary”


Posted in medical practices, peer review, prescription drugs | Tagged: , | 2 Comments »

American Medicine Needs Reform — or perhaps revolution

Posted by Henry Bauer on 2017/09/10

Dozens of books and myriad articles have been published over the last few decades about What’s Wrong With Present-Day Medicine.

A recent addition is An American Sickness: How Healthcare Became Big Business and How You Can Take It Back by Elisabeth Rosenthal, lauded in a lead review in the New York Times (4 & 9 April 2017) and with 250 customer reviews, >80% of them 5-starred.

The New York Times review is titled “Why an open market won’t repair American health care”, which indicates clearly enough why it may take a revolution, and perhaps a President Bernie Sanders, and certainly a squashing of the Republican Party’s free-market-above-all ideology, to bring American citizens the guaranteed and affordable health care that is enjoyed by the citizens of every other major country on Earth.

It is far from only the political left that recognizes this need. Angus Deaton, winner of the 2015 Nobel Prize in economic science, wrote: “I would add [to possible ways of reducing income inequality] the creation of a single-payer health system; not because I am in favor of socialized medicine but because the artificially inflated costs of health care are powering up inequality by producing large fortunes for a few while holding down wages; the pharmaceutical industry alone had 1,400 lobbyists in Washington in 2014. American health care does a poor job of delivering health, but is exquisitely designed as an inequality machine, commanding an ever-larger share of G.D.P. and funneling resources to the top of the income distribution” (review of The Crisis of the Middle-Class Constitution — Why Economic Inequality Threatens Our Republic by Ganesh Sitaraman, New York Times, 20 March 2017).



Posted in conflicts of interest, medical practices | Tagged: | 3 Comments »

Scientific consensus vs. the evidence: Big-Bang theory and fudge factors

Posted by Henry Bauer on 2017/09/02

The scientific consensus is that the universe began in a “Big Bang” around 13 billion years ago.

As with the scientific consensus on most matters, the media and society at large treat this consensus as unquestionable truth. Serious and competent dissenters are almost invisible, and much of the media depict people who don’t accept the consensus as Flat-Earthers, crackpots.

Again as with the scientific consensus on many matters, the actual evidence, the facts, do not support the consensus unequivocally. Sorely missing from society’s respect for “science” is an appreciation of the difference between facts and theories.

Concerning the Big Bang, the facts are the differences between the colors of light emitted by the chemical elements as observed on Earth and as observed from distant cosmic objects.

“Color” is the human sensation experienced when visible light of particular wavelengths (or frequencies, inversely proportional to wavelengths) hits the eye’s retina. A well-established physical phenomenon is the Doppler Effect: an observer moving away from a source of waves registers a longer wavelength than an observer at the source itself (and vice versa, an observer traveling towards a source of waves registers an apparently shorter wavelength). The example typically given in schools, long ago in the days of steam-engine trains, was that the whistle from the train’s engine sounded a higher note when the train was approaching the station and a lower note when moving away from the station; the Internet offers many illustrations of this, for example “Brass band on train demonstrates Doppler effect”.
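The train-whistle example can be made concrete with the textbook moving-source Doppler formula. In this sketch (mine, not taken from any source cited here) the 440 Hz whistle and the 30 m/s train speed are purely illustrative numbers:

```python
V_SOUND = 343.0  # speed of sound in air, m/s (at roughly 20 °C)

def observed_frequency(f_source_hz: float, v_source_m_s: float, approaching: bool) -> float:
    """Doppler shift for a moving source and a stationary observer:
    f_obs = f_source * v_sound / (v_sound - v_source) when approaching,
    f_obs = f_source * v_sound / (v_sound + v_source) when receding."""
    denom = V_SOUND - v_source_m_s if approaching else V_SOUND + v_source_m_s
    return f_source_hz * V_SOUND / denom

# A 440 Hz whistle on a train moving at 30 m/s:
print(observed_frequency(440, 30, approaching=True))   # higher pitch while approaching
print(observed_frequency(440, 30, approaching=False))  # lower pitch while receding
```

The listener on the platform hears roughly 482 Hz as the train approaches and roughly 405 Hz as it recedes — the drop in pitch as the train passes is exactly the effect the old schoolroom example described.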

All observations of distant cosmic objects show a “redshift”: the colors of light emitted by the chemical elements on the objects are shifted to longer wavelengths, to the red end of the spectrum of visible light. According to the Doppler Effect, that means the objects are moving away from Earth, in all directions; the universe is expanding, in other words.
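The redshift just described is quantified by comparing an observed wavelength with the wavelength the same element emits in a laboratory on Earth. Here is a minimal sketch; the hydrogen-alpha rest wavelength (656.3 nm) is standard, but the “observed” wavelength is an invented illustrative number, and the velocity conversion uses the simple non-relativistic approximation v ≈ c·z, valid only for small z:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def redshift(lambda_observed_nm: float, lambda_rest_nm: float) -> float:
    """z = (observed - rest) / rest; positive z is a shift toward the red."""
    return (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

def recession_velocity_km_s(z: float) -> float:
    """Non-relativistic Doppler approximation: v ≈ c·z (small z only)."""
    return C_KM_S * z

# Hydrogen-alpha is emitted at 656.3 nm; suppose it is observed at 662.9 nm:
z = redshift(662.9, 656.3)
print(f"z = {z:.4f}, implied recession velocity ≈ {recession_velocity_km_s(z):.0f} km/s")
```

A shift of only about 1% in wavelength already implies a recession speed of about 3,000 km/s on the Doppler reading — which is why the interpretation of redshifts matters so much to cosmology.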

However: Is the Doppler Effect the only possible reason for the cosmic redshifts?

No, according to observational evidence accumulated by astronomer Halton Arp, which suggests that the light emitted by quasars has a redshift that is only partly a Doppler Effect, the other part possibly characteristic of newly formed matter. Quasars are “quasi-stellar objects”, emitting much larger amounts of energy than would stars of apparently similar size, and they are key components in calculations of the distances and speeds of cosmic objects. If Arp was right, then Big-Bang cosmology might well be replaced by the Steady-State theory of the universe promoted by Fred Hoyle and others. Since quasars are far from fully understood (Frequently asked questions about Quasars), Arp may turn out to have been right.
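The conventional chain of reasoning that Arp’s evidence calls into question can be sketched in a few lines: read the entire redshift as a Doppler recession velocity, then convert that velocity to a distance via Hubble’s law, v = H0·d. This is an illustration of the standard low-redshift bookkeeping only, not the calculation astronomers actually perform for large redshifts (which requires a relativistic cosmological model); the Hubble-constant value used is a round illustrative figure near published estimates:

```python
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s per megaparsec (illustrative round value)

def hubble_distance_mpc(z: float) -> float:
    """Conventional distance estimate for small z: d = v / H0 with v ≈ c·z."""
    return (C_KM_S * z) / H0

# A quasar with redshift z = 0.1 would conventionally be placed at:
print(f"{hubble_distance_mpc(0.1):.0f} Mpc")
```

The point of the sketch is that the inferred distance is only as good as the first step: if part of a quasar’s redshift is intrinsic rather than Doppler, as Arp argued, the whole distance (and hence luminosity) assigned to it is wrong.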

At any rate, that the scientific consensus on Big-Bang cosmology is almost universally accepted, that the common conventional wisdom has no doubts about it, illustrates how a scientific consensus can become popular public dogma even when there are substantive reasons to doubt its validity.

There are actually many reasons to doubt the validity of the Big-Bang hypothesis, set out for instance by the late Tom Van Flandern (The Top 30 problems with the Big Bang Theory) or more recently and succinctly by “Tanya Techie” (Top Ten scientific flaws in the Big Bang Theory).

What has seemed to me the kiss of death for Big-Bang Theory is the need for the fudge factors of “dark matter” and “dark energy” to explain the calculated rate of universe expansion; fudge factors that seem utterly absurd given that they are supposed to represent amounts much larger than the known amounts of normal matter and energy (Rethinking “Star Soup”) yet have never actually been observed; they are postulated to exist solely to make Big-Bang Theory work.

An additional ground for doubt is that the calculations by which dark matter and dark energy are estimated appear to be seriously flawed: Donald G. Saari, “N-body solutions and computing galactic masses”, Astronomical Journal, 149 (2015) 174; “Mathematics and the ‘Dark Matter’ puzzle”, American Mathematical Monthly, 122 (2015) 407.

*                     *                   *                   *                   *                   *                   *                   *

Big-Bang Theory is far from alone as an almost universally accepted doctrine that in reality conforms only doubtfully with the actual evidence. Close examination of the actual facts on quite a number of other topics reveals that there are reasonable doubts about the validity of the scientific consensus on how to interpret the evidence about

- the extinction of the dinosaurs
- the mechanism of smell
- the efficacy of anti-depressants
- the cholesterol theory of cardiovascular disease
- the blood-pressure theory of strokes and heart attacks
- the cause of AIDS
- when and from where the first humans settled in the Americas
- the hazards of second-hand tobacco smoke
- whether nuclear fusion is feasible at ordinary temperatures (“cold fusion”)
- whether human-generated carbon dioxide is responsible for climate change
- whether continental drift (plate tectonics) adequately explains all the facts about earthquakes and other geological phenomena
- the cause(s) of Alzheimer’s disease
- the potential danger of mercury in vaccines and in dental amalgams

and more; see Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth

Posted in consensus, science is not truth, unwarranted dogmatism in science | Tagged: , | 4 Comments »

Slowing of global warming officially confirmed — by reading between the lines

Posted by Henry Bauer on 2017/08/12

Climate-change skeptics, deniers, denialists, and also unbiased observers have pointed out that measured global temperatures seem to have risen at a slower rate, or perhaps ceased rising at all, since about 1998.
But the media have by and large reported the continuing official alarmist claims that each year has been the hottest on record, that a tipping point is nearly upon us, that catastrophe is just around the corner — exemplified perhaps by Al Gore’s recent new film, An Inconvenient Sequel, with interviews of Gore in many media outlets.

However, that the pause in global warming is quite real is shown decisively by the way in which the mainstream consensus has tried to discount the facts, attempting to explain them away.
For example, a pamphlet published jointly by the Royal Society of London and the National Academy of Sciences of the USA asserts that human-caused release of greenhouse gases is producing long-term warming and climate change, albeit there might be periods on the order of decades with little or no warming, as with the period since about 2000 when such natural causes as “lower solar activity and volcanic eruptions” have “masked” the rise in temperature (“Climate-Change Science or Climate-Change Propaganda?”).

That assertion misses the essential point: All the alarmist projections are based on computer models. The models failed to foresee the pause in temperature rise since 1998, demonstrating that the models are inadequate and therefore their projections are wrong. The models also fail to accommodate the period of global cooling rather than warming from the 1940s to the 1970s.

The crux is that the models do not incorporate important natural forces that affect the carbon cycle and the energy interactions. Instead, when the models are patently wrong, as from 1940s to 1970s and since 1998, the modelers and other researchers vested in the theory of human-caused climate change speculate about how one or other natural phenomenon somehow “masks” the asserted underlying temperature rise.

Above all, of course, the theorists neglect to mention that the Earth is still rebounding from the last Ice Age and will, if the last million years are any guide, continue to warm up for many tens of thousands of years (Climate-change facts: Temperature is not determined by carbon dioxide).
The various attempts to explain away the present pause in temperature rise were listed a few years ago at THE HOCKEY SCHTICK: “Updated list of 66 excuses for the 18-26 year ‘pause’ in global warming — ‘If you can’t explain the ‘pause’, you can’t explain the cause’”.
Here are a few of the dozens of excuses for the failure of global temperature to keep up with projections of the climate models:

1. Lower activity of the sun
That ought to raise eyebrows about this whole business. Essentially all the energy Earth receives from out there comes from the Sun. Apparently the computer models do not start by taking that into account?
(Peter Stauning, “Reduced solar activity disguises global temperature rise”, Atmospheric and Climate Sciences, 4 #1, January 2014: “Without the reduction in the solar activity-related contributions the global temperatures would have increased steadily from 1980 to present”)
And of course if the Sun stopped shining altogether…
Anyway, the models are wrong.

2. The heat is being hidden in the ocean depths (Cheng et al., “Improved estimates of ocean heat content from 1960 to 2015”, Science Advances, 10 March 2017, 3 #3, e1601545, DOI: 10.1126/sciadv.1601545)
In other words, the models are wrong about the distribution of supposedly trapped heat.

3. Increased emission of aerosols especially in Asia (Kühn et al., “Climate impacts of changing aerosol emissions since 1996”, Geophysical Research Letters, 41 [14 July 2014] 4711–18, doi:10.1002/2014GL060349)
The climate models are wrong because they do not properly take aerosol emissions into account.

3a. “Volcanic aerosols, not pollutants, tamped down recent Earth warming, says CU study”
In other words, the models are wrong because they cannot take into account the complexities of natural events that affect climate.

4. Reduced emission of greenhouse gases, following the Montreal Protocol eliminating ozone-depleting substances (Estrada et al., “Statistically derived contributions of diverse human influences to twentieth-century temperature changes”, Nature Geoscience, 6 (2013) 1050-55, doi:10.1038/ngeo1999)
The climate models are wrong because they do not take into account all greenhouse-gas emissions.

5. “Contributions of stratospheric water vapor to decadal changes in the rate of global warming” (Solomon et al., Science, 327 [2010] 1219-23; DOI: 10.1126/science.1182488)
In other words, the models are wrong because they do not take account of variations in water vapor in the stratosphere.

6. Strengthened trade winds in the Pacific
Again, the models are wrong because they cannot take account of the innumerable natural phenomena that determine climate.

6a. An amusing corollary is that “Seven years ago, we were told the opposite of what the new Matthew England paper says: slower (not faster) trade winds caused ‘the pause’”

And so on through another 50 or 60 different speculations. Although they are all different, there is a single commonality: the computer models used to represent Earth’s climate are woefully unable to do so. That might well be thought obvious a priori, in view of the astronomical number of variables and interactions that determine climate. Moreover, a little less obviously perhaps, “global” climate is a human concept. The reality is that short- and long-term changes in climate by no means always occur in parallel in different regions.

Take-away points:

Mainstream climate science has demonstrated that
all the climate models are inadequate
and their projections have been wrong

Since the late 1990s, global temperatures have not risen
to the degree anticipated by climate models and climate alarmists
but that is not officially admitted
even as it is obvious from the excuses offered
for the failure of the models

Posted in consensus, denialism, global warming, media flaws, science is not truth, science policy, unwarranted dogmatism in science | Tagged: | 3 Comments »