Skepticism about science and medicine

In search of disinterested science

Archive for the ‘politics and science’ Category

The Loch Ness “Monster”: Its real and important significance

Posted by Henry Bauer on 2021/01/29

Because of my writings about Nessie, the Loch Ness Monster [1], I am periodically approached by various media. Last year I had published [2] the suggestion that the Loch Ness creatures are more plausibly related to sea turtles than to the commonly popular notion of plesiosaurs.

A Scottish journalist came across that article, and for one day something about it and me was featured in every yellow-press newspaper in Britain, and several broadcast media asked for interviews.

The episode reminded me of some of the things that are so wrong with modern mass media.

Their overriding concern is simply to attract an audience. There is no intention of offering that audience any genuinely insightful analysis or context or background information. Media attention span approximates that of Twittering. One television network asked for an instant interview, wanted the best phone-contact number, even offered me compensation — and then never followed up.

I did talk to one Russian and one Spanish station or network, and I tried to point out the real significance of the Loch Ness animals: their existence has been denied by official scientific sources for not much less than a century, demonstrating that official science can be wrong, quite wrong. While that matters little if at all in the case of Loch Ness, I said, it matters greatly when official science is wrong about such matters of public importance as HIV/AIDS or climate change, about which official science does in fact happen to be wrong [3].

So far, however, my bait about those important matters has not been snapped up.

Misunderstandings about science are globally pervasive, above all the failure to realize that science is fallible. The consequent unwarranted acceptance of wrong beliefs about HIV and about carbon dioxide demonstrates the need for some institution independent of official science, independent of existing scientific organizations and institutions, to provide fact-checking of contemporary scientific consensuses: impartial, unbiased, strictly evidence-based assessments of official science. In other words, society sorely needs a Science Court [4].

Misconceptions about science can already be seen as a significant reason for flaws in the announced policies of the new Biden administration, which places high priority on “combating climate change” and on a “moon shot” to cure cancer. No lessons have been learned from the failure of the war on cancer, or from the fact, obvious in great swaths of the geological literature, that carbon dioxide is demonstrably not the prime cause of global warming: there is no correlation between global temperatures and atmospheric carbon-dioxide levels [5], neither over the whole life of the Earth nor over the last couple of centuries.

——————————————————

[1]    The Enigma of Loch Ness: Making Sense of a Mystery, University of Illinois Press, 1986/88; Wipf & Stock reprint, 2012
GENUINE  FACTS about “NESSIE”, THE LOCH NESS “MONSTER”
[2]    “Loch Ness Monsters as Cryptid (Presently Unknown) Sea Turtles”, Journal of Scientific Exploration, 34 (2020) 93-104
[3]    Dogmatism  in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012
The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland, 2007
[4]    Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017), chapter 12
“The Case for a Science Court”
Science Court: Why and What
[5]    “A politically liberal global-warming skeptic?”
“Climate-change facts: Temperature is not determined by carbon dioxide”

Posted in consensus, fraud in medicine, fraud in science, global warming, media flaws, politics and science, resistance to discovery, science is not truth, science policy, scientific culture, unwarranted dogmatism in science

The HIV/AIDS blunder: Missed opportunities for mainstream research to self-correct

Posted by Henry Bauer on 2021/01/20

Quite a number of specific mis-steps conspired to bring about the acceptance and continuance of HIV/AIDS theory. They illustrate much of what has gone wrong with science: It is subject to interference by commercial, political, and ideological influences; it comprises a variety of institutions that do not interact usefully or reliably. Above all:


 Science has no overarching watchdog to ensure
 that theories change appropriately
 as evidence accumulates

  1. The first and crucial mistake was when the Secretary of Health and Human Services (Margaret Heckler) held a press conference at which Robert Gallo claimed to have discovered the probable cause of AIDS. This sad episode illustrates political interference and the pervasive ignorance of how science works:
    →     Gallo had not yet published anything. Insiders regarded him as incompetent and untrustworthy. Investigative journalism later (2002) fully documented that he is an unscrupulous charlatan [1].
    →     Heckler’s background was as a lawyer and a politically active Republican.
    →     Activists had been campaigning vigorously for the Republican administration to do something about AIDS.
     →    This official endorsement of Gallo’s claim acted as a signal that anyone who wanted research support from the National Institutes of Health (NIH) would likely be successful by proposing to work on HIV; virologists in particular were hungry for funding after their failure to discover cancer-causing viruses in the “war on cancer” [2].
  2. An important contributing factor was statistical incompetence at the Centers for Disease Control (CDC):
    →     Mistakenly taking “gay” rather than drug abuse as the most meaningful association with AIDS [3]. The CDC should also have been aware  that AIDS-like symptoms had been quite common among addicts during the 1960-70s epidemic of so-called recreational drug use [4].
    →     Initiated the misleading “young, previously healthy, gay men” characterization based on 5 cases aged 29-36, average 32.6 [5]. Its Task Force on Kaposi’s Sarcoma had found the average age of AIDS victims to be 35. When Cochrane [6] re-examined the medical records 20 years later, she found that the average age of the first 25 AIDS patients in San Francisco had been 38. This mattered crucially: The greatest risk for sexual infections is among people <30; lifestyle ailments are increasingly likely at older ages, more compatible with a decade or two of what used to be called dissolute living.
    →     CDC researchers as early as 1987 failed to recognize the significance of their finding that, among Job Corps members at ages about 17 and younger, females are more likely to test HIV-positive than males [7].
  3. The Army HIV Research Office also failed to recognize the significance of their finding that at ages about 17 and younger, females are more likely to test HIV-positive than males [8].
  4. Duesberg had published comprehensive debunkings of HIV as the cause of AIDS in 1987 [9] and 1989 [10]. The latter has a footnote promising a rebuttal from Gallo that never eventuated, despite several reminders [11: 233].
  5. As the years went by, more and more conundrums emerged whose significance was missed:
    →     The purple skin-patches of Kaposi’s Sarcoma had been the iconic signature of AIDS,  yet after half-a-dozen years they had become rare among AIDS patients.
    →     The correlation between drug abuse and AIDS became stronger and stronger.
    →     Prostitutes who did not use drugs were not at risk of  becoming HIV-positive.
    →     Drug abusers who used clean needles were more likely to test HIV-positive than those who exchanged needles.
    →     Marriage and pregnancy are risk factors for testing HIV-positive.
    →     For many further instances, with primary sources cited also for the points above, see The Case against HIV.

Lessons:

The clearest general lesson is that policymakers and administrators should not take far-reaching actions on matters of science or medicine without advice from individuals who have at least an elementary acquaintance with the history of science and the understanding of present-day scientific activity incorporated in Science and Technology Studies (STS [12]). Anyone with that background would be familiar with the danger of accepting any scientific claim made by an individual researcher or administrator of research before the claim had even been published. The training of most scientists and most doctors neglects that important background.

A fairly general lesson is that competence in statistics may be sorely lacking even in an agency like CDC where gathering and analyzing statistical data is a central task. Much has been written during the last several decades about the pervasive abuse and misuse of statistics in medicine and medical science [13].

It is also not irrelevant that an overwhelming proportion of those who were carrying out and reporting HIV tests were medical doctors, MDs or DVMs, rather than people trained in research. This is not to discount the insights of the many MDs who have been able to learn from experience and to transcend some of the mistaken lore they were originally taught [14]. But medical training focuses on applying what is known, not on questioning it. By contrast, journalists who were covering the HIV/AIDS story [1, 15] had a more holistic mindset and noticed how inadequate the officially accepted view is.

A part of understanding what contemporary scientific or research activity involves is to recognize that the overwhelming proportion of individuals doing what is loosely called “research” or “science” are not engaged in seeking fundamental truths. Most of the published reports on HIV testing took for granted that HIV causes AIDS and gathered data for other purposes, say, recruitment into the Armed Forces, or the presumed need for antiviral drugs in different regions of Africa; so those “researchers” had been blind to the steady accumulation of data incompatible with the view of HIV as a contagious infection.

Present-day institutions of medical science
are incapable of self-correcting a mistaken “consensus”

That is why society needs a Science Court

***************************************************************************

[1]    John Crewdson, Science Fictions: A scientific mystery, a massive cover-up and the dark legacy of Robert Gallo, Little, Brown, 2002
[2]    Peter Duesberg, Inventing the AIDS Virus, Regnery, 1996; chapter 4
[3]    John Lauritsen, “CDC’s tables obscure AIDS-drug connection”, Philadelphia Gay News, 14 February 1985 (and five other papers); reprinted as chapter I in The AIDS war: propaganda, profiteering and genocide from the medical-industrial complex, ASKLEPIOS, 1993
[4]    Neville Hodgkinson, AIDS: The Failure of Contemporary Science, Fourth Estate, 1996
[5]    “Pneumocystis Pneumonia — Los Angeles”, Morbidity and Mortality Weekly Report, 30 (#21, 5 June 1981) 250-52
[6]    Michelle Cochrane, When AIDS began: San Francisco and the Making of an Epidemic, Routledge, 2004
[7]    Michael E. St. Louis, George A. Conway, Charles R. Hayman, Carol Miller, Lyle R. Petersen, Timothy J. Dondero, “Human Immunodeficiency Virus Infection in Disadvantaged Adolescents: Findings From the US Job Corps”, JAMA, 266 (1991): 2387-91; Fig. 4 [authors’ training: 5 MD, 1 RN]
 [8]   John F. Brundage, Donald S. Burke, Robert Visintine, Michael Peterson, Robert R. Redfield. “HIV Infection among young adults in the New York City area”, New York State Journal of Medicine, May 1988, 232-33; Fig. 3 [authors’ training: 5 MD, 1 DVM]
Donald S. Burke, John F. Brundage, Mary Goldenbaum, Lytt I. Gardner, Michael Peterson, Robert Visintine, Robert R. Redfield, & the Walter Reed Retrovirus Research Group, “Human Immunodeficiency Virus Infections in Teenagers: Seroprevalence Among Applicants for US Military Service”, JAMA, 263 (1990) 2074-77; Table 1 [authors’ training: 4 MD, 1 DVM, 1 MS, 1 PhD]
Burke, D. S., J. F. Brundage, J. R. Herbold, W. Berner,  L. I. Gardner, J. D. Gunzenhauser,  J. Voskovitch, & R. R. Redfield, “Human immunodeficiency virus infections among civilian applicants for United States military service, October 1985 to March 1986”, New England Journal of Medicine, 317 (1987) 131-36; Fig 1 [authors’ training: 5 MD, 1 PhD, 1 DVM]
[9]    Peter H. Duesberg, “Retroviruses as carcinogens and pathogens: expectations and reality”, Cancer Research, 47 (1987) 1199-220
[10]  Peter H. Duesberg, “Human immunodeficiency virus and acquired immunodeficiency syndrome: correlation but not causation”, Proceedings of the National Academy of Sciences, 86 (1989) 755-64.
[11]  Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland, 2007
[12]  “STS draws on the full range of disciplines in the social sciences and humanities to examine the ways that science and technology shape, and are shaped by, our society, politics, and culture. We study contemporary controversies, historical transformations, policy dilemmas, and broad philosophical questions” (Department of Science, Technology, and Society at Virginia Tech)
[13]  Illustrated in many of the books cited in What’s Wrong with Present-Day Medicine
but see particularly the cited articles by Altman, Ioannidis, Matthews
[14]  See for example in the books listed in [13] those by Angell, Brody, Goldacre, Gøtzsche, Greene, Kendrick, LeFanu, Ravnskov, Smith
[15]      See books by Farber, Hodgkinson, Leitner, Shenton, in The Case against HIV

Posted in consensus, fraud in medicine, funding research, media flaws, medical practices, peer review, politics and science, resistance to discovery, science is not truth, science policy, scientific culture, Uncategorized, unwarranted dogmatism in science

From uncritical about science to skeptical about science: 4

Posted by Henry Bauer on 2021/01/05

Learning about science from beyond the pale

Synopsis for this series of posts:

From post #1:
“Could my opinion be erroneous about a decline in the trustworthiness of science?
If not, why is it that what seems so obvious to me has not been noticed, has been overlooked by the overwhelming majority of practicing researchers, by pundits and by scholars of scientific activity and by science writers and journalists?
That conundrum had me retracing the evolution of my views about science, from my early infatuation with it to my current disillusionment.”

My interest in the Loch Ness Monster led indirectly to learning about other topics that science similarly ignores, dismisses, or denigrates, often by calling them pseudoscience (UFOs, Bigfoot, etc.). Trying to understand how studying such matters differs from doing science automatically meant trying to understand what makes science special; so by learning about pseudoscience one learns as well about science itself. As Rudyard Kipling put it, “And what should they know of England who only England know?” (from the poem “The English Flag”).

****************************

Continuing the narrative:

Fortuitously for me, several things happened at about the same time in the mid-1970s: There was a shortage of potential graduate students, because the job market for PhDs had collapsed. My large 5-year grant came to an end, and new grant funds were more and more difficult to come by. There was a widespread infatuation, including at NSF, with the supposed value of interdisciplinary work, and my university was urging faculty to develop interdisciplinary projects as a way of attracting grant money. And some tangible evidence that the Loch Ness Monster is a real animal had been widely publicized: Underwater photographs of large flipper- or paddle-like objects, apparently appendages on an indistinct large shape [1].

So I recruited an eminently interdisciplinary team of faculty members — a journalism professor, an historian of science, a philosopher of science, a sociologist — to study how scientific understanding or belief changes as evidence accumulates: Science had long been fairly sure that reports of the Loch Ness Monster were baseless; now that substantive evidence was accumulating, how would the scientific community accommodate it?

Our proposal to NSF was unsuccessful, but one of the reviewers’ comments set me off in a new direction. If we wanted to study how science treats unorthodox claims, a reviewer suggested, why not look into the Velikovsky Affair?

I had never heard of that, and obviously I should have; so I did look into it, and found it very interesting indeed. The psychoanalyst Immanuel Velikovsky had published a popular best seller, Worlds in Collision [2], in which he inferred from legends and myths about heavenly happenings that Jupiter had ejected a comet-like object that had come close to several other planets, producing on Earth effects that included such events reported in the Bible as the parting of the Red Sea and the collapse of the walls of Jericho.

Several things struck me about the Velikovsky Affair.

—> Many people had found Velikovsky’s scenario plausible or even convincing.
—> That included some quite accomplished historians and social scientists, who had ventured strong criticisms of the scientists who had unceremoniously dismissed Velikovsky’s scenario as utter nonsense.
—> Scientists had indeed been arrogantly dogmatic, making the declaration of nonsense without attempting to address the substantive details in Velikovsky’s book, indeed famously saying that they had not bothered to or needed to read the book. They had behaved unscientifically, in other words.
—> I was struck particularly that everyone was quite wrong in several respects about the nature of science — not only media pundits and humanists but also scientists, including social scientists.

So I resolved to write a book, to be titled Velikovsky and the Loch Ness Monster, setting out the realities about science, illustrated by one example of science getting it right about an unorthodox claim (the Velikovsky Affair) and one example of science getting it wrong (the Loch Ness Monster). Altogether, I had found all this so interesting, and the prospects for well-funded scientific research so gloomy, that I decided to make a permanent change of academic career, from chemistry to something like history or philosophy or sociology of science.

It was a very good time for such a move. Historians and philosophers and sociologists of science were teaching interdisciplinary courses together, sometimes establishing joint Centers or Departments, together with some political scientists, engineers, and scientists interested in science policy. The intellectual Zeitgeist was presaging an integration of disciplines that is now the actuality usually named Science & Technology Studies or Science, Technology & Society (the acronym STS works for both; earlier incarnations included “Science Studies”, “Science and Society”, and the like).

These developments in the scholarly world were another sign that the role of science in the wider society was undergoing significant changes following World War II. The Vannevar Bush Report to the President had resulted in dramatic increases in funding of research. The Bulletin of the Atomic Scientists had been founded in 1945 by some of those who had worked on the Manhattan Project and were very conscious that policy makers needed information and insights from the technical community for sound planning.

To make my intended change of academic field possible, I needed time to learn at least the basics of the history and philosophy of science. But as a member of a Chemistry Department, it was my obligation to garner grants and to support and mentor graduate students, obligations too time-consuming to allow for much new learning and thinking. So I applied for administrative jobs, which would be undemanding intellectually and leave ample time for reading and learning subjects new to me. After a couple of dozen failed applications, I lucked into what turned out to be perfect for me: Dean of Arts and Sciences at Virginia Polytechnic Institute and State University (VPI&SU, formerly VPI, but now everywhere known as “Virginia Tech”).

It was easy for me to gather an informal group of people interested in interdisciplinary projects and coursework combining Humanities and Social Sciences with Engineering and Physical and Biological Sciences. The agriculture, engineering, and science departments at Virginia Tech were long-established, with strong research components; and several of the faculty in History and Philosophy in particular had already been teaching some interdisciplinary courses with faculty from technical fields.

Soon we created a Center for the Study of Science in Society. A few years later came interdisciplinary degrees, initially undergraduate but soon graduate as well; more recently the Center was replaced by a full-fledged Department of Science, Technology, and Society.

I learned a great deal about science from the discussions leading to the establishment of that Center, but my belief in the trustworthiness of science, or at least the fundamental potential trustworthiness of science, was not at all shaken. Indeed it may have been enhanced by learning how uncertain, by comparison, is the knowledge commanded by social science [3]. I also learned a great deal about differences between the various subjects professed in a College of Arts and Sciences [4]. But first I want to concentrate on what I learned about science — what can in general be learned about science by looking into matters like the Velikovsky Affair.

My planned volume, Velikovsky and the Loch Ness Monster, proved far too ambitious and eventually emerged as two separate books [5, 6]. I was again extraordinarily fortunate that the Velikovsky manuscript had been sent by the publisher to Marcello Truzzi, a sociologist of science long interested in scientific unorthodoxies, for review.

After World War II, there had come much public interest in topics like Velikovsky’s — the Yeti of the Himalayas, UFOs (unidentified flying objects, at first “flying saucers”), psychic phenomena, and more [7]. On all of those topics of great public interest but ignored or dismissed or denigrated by authoritative science, there were some quite well-established scientists, engineers, and other scholars who believed that there was sufficient substantive evidence, enough sheer facts, to warrant proper scientific investigation. A group of these mavericks was in the process of founding a Society for Scientific Exploration to exchange experiences and learn from one another. Because Truzzi had read my Velikovsky manuscript, I was invited to join in founding that Society.

——————————————————————————–

[1]    Reprinted in many places, for example “The Case for the Loch Ness Monster: The Scientific Evidence”, Journal of Scientific Exploration, 16(2002) 225-246
[2]    Immanuel Velikovsky, Worlds in Collision, Macmillan 1950
[3]    P. 128 ff. in Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992; pp. 151-5 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017
[4]    To Rise above Principle: The Memoirs of an Unreconstructed Dean (under the pen-name ‘Josef Martin’), Wipf & Stock, 2012 (1st ed. was University of Illinois Press, 1988)
[5]    Beyond Velikovsky: The History of a Public Controversy, University of Illinois Press 1984
[6]    The Enigma of Loch Ness: Making Sense of a Mystery, University of Illinois Press, 1986
[7]    “The Literature of Fringe Science”, Skeptical Inquirer, 11 (#2, Winter 1986-87) 205-10

Posted in funding research, politics and science, resistance to discovery, science is not truth, science policy, scientific culture

The misleading popular myth of science exceptionalism

Posted by Henry Bauer on 2020/12/28

Human beings are fallible; but we suppose the Pope to be infallible on spiritual matters and science to be exceptional among human endeavors as correctly, authoritatively knowledgeable about the workings of the material world. Other sources purporting to offer veritable knowledge may be fallible — folklore, history, legend, philosophy — but science can be trusted to speak the truth.

Scholars have ascribed the infallibility of science to its methodology and to the way scientists behave. Science is thought to employ the scientific method, and behavior among scientists is supposedly described by the Mertonian Norms. Those suppositions have somehow seeped into the conventional wisdom. Actually, however, contemporary scientific activity does not proceed by the scientific method, nor do scientists behave in accordance with the Mertonian Norms. Because the conventional wisdom is so wrong about how science and scientists work, public expectations about science are misplaced, and public policies and actions thought to be based on science may be misguided.

Contemporary science is unrecognizably different from the earlier centuries of modern science (commonly dated as beginning around the 16th century). The popular view was formed by those earlier times, and it has not yet absorbed how radically different the circumstances of scientific activities have become, increasingly since the middle of the 20th century.

Remarkable individuals were responsible for the striking achievements of modern science that brought science its current prestige and status; and there are still some remarkably talented people among today’s scientists. But on the whole, scientists or researchers today are much like other white-collar professionals [1: p. 79], subject to conflicts of interest and myriad annoyances and pressures from patrons and outside interests; 21st century “science” is just as interfered with and corrupted by commercial, ideological, and political forces as are other sectors of society, say education, or justice, or trade.

Modern science developed through the voluntary activities of individuals sharing the aim of understanding how Nature works. The criterion of success was that claimed knowledge be true to reality. Contemporary science by contrast is not a vocation carried on by self-supporting independent individuals; it is done by white-collar workers employed by a variety of for-profit businesses and industries and not-for-profit colleges, universities, and government agencies. Even as some number of researchers still genuinely aim to learn truths about Nature, their prime responsibility is to do what their employers demand, and that can conflict with being wholeheartedly truthful.

The scientific method and the Mertonian Norms
 do not encompass the realities of contemporary science

The myth of the scientific method has been debunked at book length [2]. It should suffice, though, just to point out that the education and training of scientists may not even include mention of the so-called scientific method.

I had experienced a bachelor’s-degree education in chemistry, a year of undergraduate research, and half-a-dozen years of graduate research leading to both a master’s degree and a doctorate before I ever heard of “the scientific method”. When I eventually did, I was doing postdoctoral research in chemistry (at the University of Michigan); and I heard of “the scientific method” not from my sponsor and mentor in the Chemistry Department but from a graduate student in political science. (Appropriately enough, because it is the social and behavioral sciences, as well as some medical doctors, who make a fetish of claiming to follow the scientific method, in the attempt to be granted as much prestige and trustworthiness as physics and chemistry enjoy.)

The scientific method would require individuals to change their beliefs readily whenever the facts seem to call for it. But if there is anything that psychology and sociology agree on, it is that it is very difficult and quite rare for individuals or groups to modify a belief once it has become accepted. The history of science is consonant with that understanding: New and better understanding is persistently resisted by the majority consensus of the scientific community for as long as possible [3, 4]; pessimistically, in the words of Max Planck, until the proponents of the earlier belief have passed away [5]; as one might put it, science progresses one funeral at a time.

The Mertonian norms [6], too, are more myth than actuality. They are, in paraphrase:

→     Communality or communalism (Merton had said “communism”): Science is an activity of the whole scientific community and it is a public good — findings are shared freely and openly.
→     Universalism: Knowledge about the natural world is universally valid and applicable. There are no separations or distinctions by nationality, religion, race, sex, etc.
→     Disinterestedness: Science is done for the public good, not for personal benefit; scientists seek to be impartial, objective, unbiased, not self-serving.
→     Skepticism: Claims and reported findings are subject to critical appraisal and testing throughout the scientific community before they can be accepted as proper scientific knowledge.

As with the scientific method, these norms suggest that scientists behave in ways that do not come naturally to human beings. Free communal sharing of everything might perhaps have characterized human society in the days of hunting and foraging [7], but it was certainly not the norm in Western society at the time of the Scientific Revolution and the beginnings of modern science. Disinterestedness is a very strange trait to attribute to a human being, voluntarily doing something without having any personal interest in the outcome; at the very least, there is surely a strong desire that what one does should be recognized as the good and right way to do things, as laudable in some way. Skepticism is no more natural than is the ready willingness to change beliefs demanded by the scientific method.

As to universalism, that goes without saying if claimed knowledge is actually true; it has nothing to do with behavior. If some authority attempts to establish something that is not true, it just becomes a self-defeating, short-lived dead end like the Stalinist “biology” of Lysenko or the Nazi non-Jewish “Deutsche Physik” [8].

Merton wrote that the norms, the ethos of science, “can be inferred from the moral consensus of scientists as expressed in use and wont, in countless writings on the scientific spirit and in moral indignation directed toward contraventions of the ethos” [6]. That falls short of claiming to have found empirically that scientists actually behave like that for the inferred reasons.

Merton’s norms are a sociologist’s speculation that the successes of science could only have come if scientists behaved like that; just as “the scientific method” is a philosophers’ guess that true knowledge could only be arrived at if knowledge seekers proceeded like that.

More compatible with typical human behavior would be the following:

Early modern science became successful after the number of people trying to understand the workings of the natural world reached some “critical mass”, under circumstances in which they could be in fairly constant communication with one another. Those circumstances came about in the centuries following the Dark Ages in Europe. Eventually various informal groups began to meet, then more formal “academies” were established (of which the Royal Society of London is iconic as well as still in existence). Exchanges of observations and detailed information were significantly aided by the invention of inexpensive printing. Relatively informal exchanges became more formal, as Reports and Proceedings of Meetings, leading to what are now scientific journals and periodicals (some of which still bear the time-honored title of “Proceedings of . . .”).

Once voluntary associations had been established among individuals whose prime motive was to understand Nature, some competition, some rivalry, and also some cooperation will have followed automatically. Everyone wanted to get it right, and to be among the first to get it right, so the criterion for success was the concurrence and approval of the others who were attempting the same thing. Open sharing was then a matter of self-interest and therefore came naturally, because one could obtain approval and credit only if one’s achievements were known to others. Skepticism was provided by those others: one had to get it right in order to be convincing. There was no need at all for anyone to be unnaturally disinterested. (This scenario is essentially the one Michael Polanyi  described by the analogy of communally putting together a jigsaw puzzle [2: pp. 42-44, passim; 9].)

Such conditions of free, voluntary interactions among individuals sharing the sole aim of understanding Nature, something like intellectual free-market conditions, simply do not exist nowadays; few if any researchers can be self-supporting, independent, intellectual entrepreneurs. Most are employees and thereby beholden to and restricted by the aims and purposes of those who hold the purse-strings.

Almost universally nowadays, the gold standard of reliability is thought to be “the peer-reviewed mainstream literature”. But it would be quite misleading to interpret peer review as the application of organized skepticism, “critical appraisal and testing throughout the scientific community”. As most productive researchers well know, peer review does not guarantee the accuracy or objectivity or honesty of what has passed it. In earlier times, genuine and effective peer review took place within the whole scientific community after full details of claimed results and discoveries had been published. Nowadays, in sharp contrast, so-called peer review is carried out by a small number of individuals chosen by journal editors to advise on whether reported claims should even be published. Practicing and publishing researchers know that contemporary so-called peer review is riddled with bias, prejudice, ignorance, and general incompetence. But even worse than the failings of peer review in decisions concerning publication is the fact that the same mechanism is used to decide what research should be carried out, and even how it should be carried out [1: pp. 106-9, passim].

Contemporary views of science, and associated expectations about science, are dangerously misplaced because of the pervasive mistaken belief that today’s scientific researchers are highly talented, exceptional individuals in the mold of Galileo, Newton, Einstein, etc.,  and that they are unlike normal human beings in being disinterested, seeking only to serve the public good, disseminating their findings freely, self-correcting by changing their theories whenever the facts call for it, and perpetually skeptical about their own beliefs.

Rather, a majority consensus nowadays exercises dogmatic hegemony, insisting on theories contrary to fact on a number of  topics, including such publicly important ones as climate-change and HIV/AIDS [10].

————————————————-

[1]    Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017
[2]    Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992;
“I would strongly recommend this book to anyone who hasn’t yet heard that the scientific method is a myth. Apparently there are still lots of those folks around”
(David L. Goodstein, Science, 256 [1992] 1034-36)
[3]    Bernard Barber, “Resistance by scientists to scientific discovery”,
 Science, 134 (1961) 596-602
[4]    Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970 (2nd ed., enlarged; 1st ed. 1962)
[5]    Max Planck, Scientific Autobiography and Other Papers, 1949; translated from German by Frank Gaynor, Greenwood Press, 1968
[6]    Robert K. Merton, “The normative structure of science” (1942); pp. 267–78 in The Sociology of Science (ed. N. Storer, University of Chicago Press, 1973)
[7]    Christopher Ryan & Cacilda Jethá, Sex at Dawn: The Prehistoric Origins of Modern Sexuality, HarperCollins, 2010
[8]    Philipp Lenard, Deutsche Physik, J. F. Lehmann (Munich), 1936
[9]    Michael Polanyi, “The Republic of Science: Its political and economic theory”,
Minerva, I (1962) 54-73
[10]  Henry H. Bauer, Dogmatism  in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012

Posted in conflicts of interest, consensus, funding research, media flaws, peer review, politics and science, resistance to discovery, science is not truth, scientific culture, scientism, scientists are human, the scientific method, unwarranted dogmatism in science

Science Court: Why and What

Posted by Henry Bauer on 2020/12/16

The idea for what has come to be called a Science Court was proposed half a century ago by Arthur Kantrowitz [1].

The development of nuclear reactors as part of the atom-bomb project made it natural to contemplate the possibility of generating power for civil purposes by means of nuclear reactors (the reactor at Hanford that made plutonium for the Nagasaki bomb was also the first full-scale nuclear reactor ever built [2]).

The crucial question was whether power-generating nuclear reactors could be operated safely. The technical experts were divided over that, and Kantrowitz proposed that an “Institution for Scientific Judgment” was needed to adjudicate the opposing opinions.

In those years, scientific activity was still rather like in pre-WWII times: A sort of ivory-tower cottage industry of largely independent intellectual entrepreneurs who shared the aim of learning how the material world works. Mediating opposing opinions could then seem like a relatively straightforward matter of comparing data and arguments. Half a century later, however, scientific activity has pervaded business, commerce, and medical practices, and research has become intensely competitive, with cutthroat competition for resources and opportunities for profit-making and achieving personal wealth and influence. Conflicts of interest are ubiquitous and inescapable [3]. Mediating opposing technical opinions is now complicated because public acceptance of a particular view has consequences for personal and institutional power and wealth; deciding what “science” truly says is hindered by personal conflicts of interest, Groupthink, and institutional conflicts of interest.

Moreover, technical disagreements nowadays are not between more or less equally placed technical experts; they are between a hegemonic mainstream consensus and individual dissenters. The consensus elite controls what the media and the public learn about “science”, as the “consensus” dominates “peer review”, which in practice determines all aspects of scientific activity, for instance the allocation of positions and research resources and the publication (or suppression) of observations or results.

It has become quite common for the mainstream consensus to effectively suppress minority views and anomalous research results, often dismissing them out of hand, not infrequently labeling them pejoratively as denialist or flat-earther crackpot [4]. Thereby the media, the public, and policymakers may not even become aware of the existence of competent, plausible dissent from a governing consensus.

The history of science is, however, quite unequivocal: Over the course of time, a mainstream scientific consensus may turn out to be inadequate and to be replaced by previously denigrated and dismissed minority views.

Public actions and policies might bring about considerable damage if based on a possibly mistaken contemporary scientific consensus. Since nowadays a mainstream consensus so commonly renders minority opinions invisible to society at large, some mechanism is needed to enable policymakers to obtain impartial, unbiased, advice as to the possibility that minority views on matters of public importance should be taken into consideration.

That would be the prime purpose of a Science Court. The Court would not be charged with deciding or declaring what “science” truly says. It would serve just to force openly observed substantive engagement among the disagreeing technical experts — “force” because the majority consensus typically refuses voluntarily to engage substantively with dissident contrarians, even in private.

In a Court, as the elite consensus and the dissenters present their arguments and their evidence, points of disagreement would be made publicly visible and also clarified under mutual cross-examination. That would enable lay observers — the general public, the media, policymakers — to arrive at reasonably informed views about the relative credibility of the proponents of the majority and minority opinions, through noting how evasive or responsive or generally confidence-inspiring they are. Even if no immediate resolution of the differences of opinion could be reached, at least policymakers would be sufficiently well-informed about what public actions and policies might plausibly be warranted and which might be too risky for immediate implementation.

A whole host of practical details can be specified only tentatively at the outset, since they will likely need to be modified over time as the Court gains experience. What is certain from the beginning is that public funding is needed, as well as absolute independence, as with the Supreme Court of the United States. Indeed, a Science Court might well be placed under the general supervision of the Supreme Court. While the latter might not at first welcome accepting such additional responsibilities, that might change, since the legal system is currently not well equipped to deal with cases where technical issues are salient [5]. For example, the issue of who should be acceptable as an expert technical witness encounters the same problem of adjudicating between a hegemonic majority consensus and a number of entirely competent expert dissenters as does the problem of adjudicating opposing expert opinions.

Many other details need to be worked out: permanent staffing of the Court as well as temporary  staffing for particular cases; appointment or selection of advocates for opposing views; how to choose issues for consideration; the degree and type of authority the Court could exercise, given that a majority consensus would usually be unwilling to engage voluntarily with dissidents. These questions, and more, have been discussed elsewhere [6]. As already noted, however, if a Science Court is actually established, its unprecedented nature would inevitably make desirable progressive modification of its practices in the light of accumulating experience.

————————————————-

[1]    Arthur Kantrowitz, “Proposal for an Institution for Scientific Judgment”, Science, 156 (1967) 763-64

[2]    Steve Olson, The Apocalypse Factory, W. W. Norton, 2020

[3]    Especially chapter 1 in Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017

[4]    Henry H. Bauer, Dogmatism  in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012

[5]    Andrew W. Jurs, “Science Court: Past proposals, current considerations, and a suggested structure”, Drake University Legal Studies Research Paper Series, Research Paper 11–06 (2010); Virginia Journal of Law and Technology, 15 #1

[6]    Chapter 12 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017

Posted in conflicts of interest, consensus, denialism, funding research, peer review, politics and science, resistance to discovery, science is not truth, science policy, scientific culture, scientism, unwarranted dogmatism in science

Can science regain credibility?

Posted by Henry Bauer on 2020/12/09

Some of the many critiques of contemporary science and medicine [1] have suggested improvements or reforms: among them, ensuring that empiricism and fact determine theory rather than the other way around [2]; more competent application of statistics; awareness of biases as a way of decreasing their influence [1, 2, 3].

Those suggestions call for individuals in certain groups, as well as those groups and institutions as a whole, to behave differently than they have been behaving: researchers, editors, administrators, patrons; universities, foundations, government agencies, and commercial sponsors of research.

Such calls for change are, however, empty whistling in the wind if not based on an understanding of why those individuals and those groups have been behaving in ways that have caused science as a whole to lose credibility — in the eyes of much of the general public, but not only the general public: a significant minority of accomplished researchers and other informed insiders have concluded that on any number of topics the mainstream “consensus” is flawed or downright wrong, not properly based on the available evidence [4].

It is a commonplace to remark that science displaced religion as the authoritative source of knowledge and understanding, at least in Western civilization during the last few centuries. One might then recall the history of religion in the West, and that corruption of its governing institutions eventually brought rebellion: the Protestant Reformation, the Enlightenment, and the enshrining of science and reason as society’s hegemonic authority; so it might seem natural now to call for a Scientific Reformation to repair the institutions of science that seem to have become corrupted.

The various suggestions for reform have indeed called for change in a number of ways: in how academic institutions evaluate the worth of their researchers; in how journals decide what to publish and what not to publish; in how the provision of research resources is decided; and so forth and so on. But such suggestions fail to get to the heart of the matter. The Protestant Reformation was seeking the repair of a single, centrally governed, institution. Contemporary science, however, comprises a whole collection of institutions and groups that interact with one another in ways that are not governed by any central authority.

The way “science” is talked and written about is highly misleading, since no single word can properly encompass all its facets or aspects. The greatest source of misunderstanding comes about because scientific knowledge and understanding do not generate themselves or speak for themselves; so in common discourse, “science” refers to what is said or written about scientific knowledge and theories by people — who are, like all human beings, unavoidably fallible, subject to a variety of innate ambitions and biases as well as external influences; and hindered and restricted by psychological and social factors — psychological factors like confirmation bias, which gets in the way of recognizing errors and gaps, social factors like Groupthink, which pressures individuals not to deviate from the beliefs and actions of any group to which they belong.

So whenever a claim about scientific knowledge or understanding is made, the first reaction should be, “Who says so?”

It seems natural to presume that the researchers most closely related to a given topic would be the most qualified to explain and interpret it to others. But scientists are just as human and fallible as others, so researchers on any given subject are biased towards thinking they understand it properly even though they may be quite wrong about it.

A better reflection of what the facts actually are would be the view that has become more or less generally accepted within the community of specialist researchers, and thereby in the scientific community as a whole; in other words, what research monographs, review articles, and textbooks say — the “consensus”. Crucially, however, as already noted, any contemporary consensus may be wrong, in small ways or large or even entirely.

Almost invariably there are differences of opinion within the specialist and general scientific communities, particularly but not only about relatively new or recent studies. Unanimity is likely only over quite simple matters where the facts are entirely straightforward and readily confirmed; but such simple and obvious cases are rare indeed. Instead of unanimity, the history of science is a narrative of perpetual disagreements as well as (mostly but not always) their eventual resolution.

On any given issue, the consensus is not usually unanimous as to “what science says”. There are usually some contrarians, some mavericks among the experts and specialist researchers, some unorthodox views. Quite often, it turns out eventually that the consensus was flawed or even entirely wrong, and what earlier were minority views then become the majority consensus [5, 6].

That perfectly normal lack of unanimity, the common presence of dissenters from a “consensus” view, is very rarely noted in the popular media and remains hidden from the conventional wisdom of society as a whole — most unfortunately and dangerously, because it is hidden also from the general run of politicians and policymakers. As a result, laws on all sorts of issues, and many officially approved practices in medicine, may come to be based on a mistaken scientific consensus; or, as President Eisenhower put it [7], public policies might become captive to a scientific-technological elite, those who constitute and uphold the majority consensus.

The unequivocal lesson that modern societies have yet to learn is that any contemporary majority scientific consensus may be misleading. Only once that lesson has been learned will it then be noted that there exists no established safeguard to prevent public policies and actions being based on erroneous opinions. There exists no overarching Science Authority to whom dissenting experts could appeal in order to have the majority consensus subjected to reconsideration in light of evidence offered by the contrarian experts; no overarching Science Authority, and no independent, impartial, unbiased, adjudicators or mediators or interpreters to guide policymakers in what the actual science might indicate as the best direction.

That’s why the time is ripe to consider establishing a Science Court [8].

——————————————–

[1]     CRITIQUES OF CONTEMPORARY SCIENCE AND ACADEME 
WHAT’S WRONG WITH PRESENT-DAY MEDICINE

[2]    See especially, about theoretical physics, Sabine Hossenfelder, Lost in Math: How Beauty Leads Physics Astray, Basic Books, 2018

[3]    Stuart Ritchie, Science Fictions: How FRAUD, BIAS, NEGLIGENCE, and HYPE Undermine the Search for Truth, Metropolitan Books (Henry Holt & Company), 2020

[4]    A number of examples are discussed in Henry H. Bauer, Dogmatism  in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012

[5]    Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596-602

[6]    Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970, 2nd (enlarged) ed. [1st ed. was 1962]

[7]    Dwight D. Eisenhower, Farewell speech, 17 January 1961; transcript at http://avalon.law.yale.edu/20th_century/eisenhower001.asp

[8]    Chapter 12 in Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017

Posted in conflicts of interest, consensus, fraud in science, media flaws, medical practices, peer review, politics and science, resistance to discovery, science is not truth, science policy, scientific culture, scientists are human, unwarranted dogmatism in science

Why skepticism about science and medicine?

Posted by Henry Bauer on 2020/09/06

My skepticism is not about science and medicine as sources or repositories of objective knowledge and understanding. Skepticism is demanded by the fact that what society learns about science and medicine is mediated by human beings. That brings in a host of reasons for skepticism: human fallibility, individual and institutional self-interest, conflicts of interest, sources of bias and prejudice.

I have never come across a better discussion of the realities about science and its role in society than Richard Lewontin’s words in his book, Biology as Ideology (Anansi Press 1991, HarperPerennial 1992; based on 1990 Massey Lectures, Canadian Broadcasting Corporation):

“Science is a social institution about which there is a great deal of misunderstanding, even among those who are part of it. . . [It is] completely integrated into and influenced by the structure of all our other social institutions. The problems that science deals with, the ideas that it uses in investigating those problems, even the so-called scientific results that come out of scientific investigation, are all deeply influenced by predispositions that derive from the society in which we live. Scientists do not begin life as scientists, after all, but as social beings immersed in a family, a state, a productive structure, and they view nature through a lens that has been molded by their social experience.
. . . science is molded by society because it is a human productive activity that takes time and money, and so is guided by and directed by those forces in the world that have control over money and time. Science uses commodities and is part of the process of commodity production. Science uses money. People earn their living by science, and as a consequence the dominant social and economic forces in society determine to a large extent what science does and how it does it. More than that, those forces have the power to appropriate from science ideas that are particularly suited to the maintenance and continued prosperity of the social structures of which they are a part. So other social institutions have an input into science both in what is done and how it is thought about, and they take from science concepts and ideas that then support their institutions and make them seem legitimate and natural. . . .
Science serves two functions. First, it provides us with new ways of manipulating the material world . . . . [Second] is the function of explanation” (pp. 3-4). And (p. 5) explaining how the world works also serves as legitimation.

Needed skepticism takes into account that every statement disseminated about science or medicine serves in some way the purpose(s), the agenda(s), of the source or sources of that statement.

So the first thing to ask about any assertion about science or medicine is, why is this statement being made by this particular source?

Statements by pharmaceutical companies, most particularly their advertisements, should never be believed because, as innumerable observers and investigators have documented, the profit motive has outweighed any concern for the harm that unsafe medications cause, even when there is no evidence of definite potential benefit. The best way to decide whether or not to prescribe or use a drug is by comparing NNT and NNH — the number of patients who need to be treated for one to benefit, and the number treated for one to be harmed; but NNT and NNH are never reported by drug companies. For example, there is no evidence whatsoever that HPV vaccination decreases the risk of any cancer; all that has been observed is that the vaccines may decrease genital warts. On the other hand, many individuals have suffered grievous harm from “side” effects of these vaccines (see Holland 2018 in the bibliography cited just below, and the documentary Sacrificial Virgins). TV ads by Merck, for example in August 2020 on MSNBC, cite the Centers for Disease Control & Prevention as recommending the vaccine not only for girls but also for boys.
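For readers unfamiliar with these measures, here is a minimal sketch (in Python, with purely hypothetical numbers chosen only for illustration, not taken from any actual trial) of how NNT and NNH are derived from absolute risk differences and then compared:

def number_needed(rate_without_drug, rate_with_drug):
    # Reciprocal of the absolute risk difference: how many patients must be
    # treated for one additional outcome (benefit or harm) to occur.
    return 1.0 / abs(rate_without_drug - rate_with_drug)

# Hypothetical benefit: the bad outcome occurs in 4% of untreated vs. 3% of treated patients.
nnt = number_needed(0.04, 0.03)    # = 100: treat 100 people to prevent one bad outcome

# Hypothetical harm: a serious side effect occurs in 0.5% of untreated vs. 2% of treated patients.
nnh = number_needed(0.005, 0.02)   # about 67: one extra person harmed for every ~67 treated

print(f"NNT = {nnt:.0f}, NNH = {nnh:.0f}")
# When NNH is smaller than NNT, the drug harms more people than it helps.

With these invented numbers the drug would harm more people than it helps; a real decision of course requires the actual trial figures, which is precisely what the companies do not report.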

For fully documented discussions of the pervasive misdeeds of drug companies, consult the books listed in my periodically updated bibliography, What’s Wrong with Present-Day Medicine.
I recommend particularly Angell 2004, Goldacre 2013, Gøtzsche 2013, Healy 2012, Moynihan & Cassels 2005. Greene 2007 is a very important but little-cited book describing how numbers and surrogate markers have come to dominate medical practice, to the great harm of patients.

Official reports may be less obviously deceitful than drug company advertisements, but they are no more trustworthy, as argued in detail and with examples in “Official reports are not scientific publications”, chapter 3 in my Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (McFarland 2012):
“reports from official institutions and organizations . . . are productions by bureaucracies . . . . The actual authors of these reports are technical writers whose duties are just like those of press secretaries, advertising writers, and other public-relations personnel: to put on the actual evidence and conclusions the best possible spin to reinforce the bureaucracy’s viewpoint and emphasize the importance of the bureaucracy’s activities.
Most important: The Executive Summaries, Forewords, Prefaces, and the like may tell a very different story than does the actual evidence in the bulk of the reports. It seems that few if any pundits actually read the whole of such documents. The long public record offers sad evidence that most journalists certainly do not look beyond these summaries into the meat of the reports, given that the media disseminate uncritically so many of the self-serving alarums in those Executive Summaries” (p. 213).

So too with press releases from academic institutions.

As for statements direct from academic and professional experts, recall that, as Lewontin pointed out, “people earn their living by science”. Whenever someone regarded as an expert or authority makes public statements, an important purpose is to enhance the status, prestige, career, and profitability of whoever is making the statement. This is not to suggest that such statements are made with deliberate dishonesty; but the need to preserve status, as well as the usual illusion that what one believes is actually true, ensures that such statements will be dogmatically one-sided assertions, not judicious assessments of the objective state of knowledge.

Retired academic experts like myself no longer suffer conflicts of interest at a personal or institutional-loyalty level. When we venture critiques of drug companies, official institutions, colleges and universities, and even individual “experts” or former colleagues, we will usually be saying what we genuinely believe to be unvarnished truth. Nevertheless, despite the lack of major obvious conflicts of interest, one should have more grounds than that for believing what we have to say. We may still have an unacknowledged agenda, for instance a desire still to do something useful even though our careers are formally over. Beyond that, of course, like any other human beings, we may simply be wrong, no matter that we ourselves are quite sure that we are right. Freedom from frank, obvious conflicts of interest does not bring with it some superhuman capacity for objectivity, let alone omniscience.

In short:
Believe any assertion about science or medicine, from any source, at your peril.
If the matter is of any importance to you, you had best do some investigating of evidence and facts, and comparison of diverse interpretations.

Posted in conflicts of interest, consensus, fraud in medicine, fraud in science, medical practices, peer review, politics and science, science is not truth, scientific literacy, scientism, scientists are human, unwarranted dogmatism in science

Percentages absolute or relative? Politicizing science

Posted by Henry Bauer on 2020/08/24

Convalescent plasma reduces the mortality of CoVID-19 by 35%, citizens of the United States were assured in a press conference on 23 August 2020, and the approval of this treatment for emergency use by the Food and Drug Administration (FDA) underscored that this constituted a breakthrough in treating the pandemic disease.

As usual, critical voices ventured to disagree. One physician reported that he had been using this treatment for a considerable length of time and had noted at best a marginal, certainly not a great, benefit from this intervention. Others pointed out that the use of convalescent plasma in general was nothing new.

That “35%” mortality reduction was emphasized a number of times in the televised official announcement. It was only a few days later that we learned that the original data suggested a reduction of mortality to about 8%, from 11-12% for presumably comparable patients not so treated. In other words, the absolute benefit was a reduction in mortality of only 3 to 4 percentage points.

Indeed, 8 is about 35% less than 11-12. However, an absolute reduction of about 3.5 percentage points is nothing like a 35% reduction in mortality.

This episode illustrates what is quite commonplace as drug companies seek to impress doctors and patients with the wonderful benefits to be derived from their medications: relative effects rather than absolute ones are reported.
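A few lines of Python make the distinction concrete; the mortality figures are the ones reported above (roughly 8% with treatment versus 11-12% without), and the calculation is only a sketch of the arithmetic, not an analysis of the underlying study.

# Mortality figures as reported above: about 11-12% untreated, about 8% treated.
mortality_untreated = 0.115   # midpoint of the reported 11-12%
mortality_treated = 0.08

absolute_reduction = mortality_untreated - mortality_treated      # 0.035, i.e. 3.5 percentage points
relative_reduction = absolute_reduction / mortality_untreated     # about 0.30; about 0.33 if 12% is used

print(f"Absolute reduction: {absolute_reduction * 100:.1f} percentage points")
print(f"Relative reduction: {relative_reduction:.0%}")

Both figures describe the same data correctly; the announcement simply chose the one that sounds more impressive.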

This is just one of the many things wrong with present-day practices in medicine, of course; dozens of works describing the dysfunctions are listed in my periodically updated bibliography.

Investigative reporters also revealed that the FDA’s emergency use approval had come at the behest of the White House. Historians will recall that the whole science of genetics was derailed in the Soviet Union for a generation as Stalin’s administration enshrined as science the pseudoscience invented by Lysenko.

Posted in conflicts of interest, fraud in medicine, media flaws, medical practices, politics and science, prescription drugs, scientific literacy

Corona Conundrums

Posted by Henry Bauer on 2020/04/12

Something seems wrong about the basis for the current panic over “CoVID-19”.

2019-nCoV, the virus that is said to cause CoVID-19 disease, first appeared in Wuhan, China, in December 2019. Within a few months, it had reached Prime Minister Boris Johnson and Prince Charles (but not his wife) in Britain, the health minister in Russia, and Tom Hanks and his wife in Australia. According to the interactive online map at the New York Times, this new virus is now present on all continents and on islands large and small, and according to news reports it had also found its way onto cruise ships and warships.
To have spread so rapidly, it must be effectively carried through the air, on the winds, and perhaps through the oceans, as suggested in the Los Angeles Times.
But if this virus has been so widely distributed for several months, why has it caused serious illness in so few places? And why has the continent of Africa been so little affected (see NYT map)?
This seems more like something endemic, something that has been around for a long time, like the common cold or “flu” viruses, than like a virus that newly jumped from animals to humans only last December in Wuhan.
Isn’t there something wrong with the official story?
Moreover, since the virus appeared all over the globe within a few months, how can social distancing prevent it from spreading further?

 

Posted in media flaws, medical practices, politics and science, science is not truth, science policy, scientific culture, scientific literacy, scientism, Uncategorized, unwarranted dogmatism in science

Science: Sins of Commission and of Omission

Posted by Henry Bauer on 2019/04/21

What statisticians call a type-I error is a scientific sin of commission, namely, believing something to be true that is actually wrong. A type-II error, dismissing as false something that happens to be true, could be described as a scientific sin of omission since it neglects to acknowledge a truth and thereby makes impossible policies and actions based on that truth.

The history of science is a long record of both types of errors that were progressively corrected, sooner or later; but, so far as we can know, of course, the latest correction may never be the last word, because of the interdependence of superficially different bits of science. If, for instance, general relativity were found to be flawed, or quantum mechanics, then huge swaths of physics, chemistry, and other sciences would undergo major or minor changes. And we cannot know whether general relativity and quantum mechanics are absolutely true, that they are not type-I errors — all we know is that they have worked usefully up to now. Type-II errors may always be hiding in the vast regions of research not being done, or in unorthodox claims being ignored or dismissed.
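These are the standard statistical meanings of the terms, and the following minimal Python sketch simulates both kinds of error in a simple significance test, purely as an illustration; the sample sizes, effect size, and test are invented for the purpose, not drawn from any of the cases discussed below.

import random
import statistics

random.seed(1)
CRITICAL = statistics.NormalDist().inv_cdf(0.975)   # two-sided 5% threshold, about 1.96

def z_statistic(sample):
    # Standardized distance of the sample mean from zero.
    return statistics.mean(sample) / (statistics.stdev(sample) / len(sample) ** 0.5)

def type_one_rate(trials=2000, n=30):
    # Type-I error: there is no real effect, yet the test sometimes "finds" one.
    hits = sum(1 for _ in range(trials)
               if abs(z_statistic([random.gauss(0.0, 1.0) for _ in range(n)])) > CRITICAL)
    return hits / trials

def type_two_rate(trials=2000, n=30, true_effect=0.3):
    # Type-II error: a real effect exists, yet the test often overlooks it.
    misses = sum(1 for _ in range(trials)
                 if abs(z_statistic([random.gauss(true_effect, 1.0) for _ in range(n)])) <= CRITICAL)
    return misses / trials

print(type_one_rate())   # close to the nominal 0.05: occasionally believing what is not so
print(type_two_rate())   # much higher: often dismissing a real but modest effect

The first rate corresponds to what is here called a sin of commission, the second to a sin of omission.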

During the era of modern science — that is, since about the 17th century — type-I errors included such highly consequential and far-reaching dogmas as believing that atoms are indivisible, that they are not composed of smaller units. A socially consequential type-I error in the first quarter of the 20th century was the belief that future generations would benefit if people with less desirable genetic characteristics were prevented from having children, whereby tens of thousands of Americans were forcibly sterilized, as late as 1980.

A type-II error during the second half of the 19th century was the determined belief that claims of alleviating various ailments by electrical or magnetic treatments were nothing but pseudo-scientific scams; but that was corrected in the second half of the 20th century, when electromagnetic treatment became the standard procedure for curing certain congenital failures of bone growth and for treating certain other bone conditions as well.
Another 19th-century type-II error was the ignoring of Mendel’s laws of heredity, which were then re-discovered half a century later.
During the first half of the 20th century, a type-II error was the belief that continents could not have moved around on the globe, something also corrected in the latter part of the 20th century.

 

Science is held in high regard for its elucidation of a great deal about how the world works, and for many useful applications of that knowledge. But the benefits that society can gain from science are greatly restricted through widespread ignorance of and misunderstanding about the true history of science.

Regarding general social and political history, Santayana’s adage is quite well-known: those who cannot remember the past are condemned to repeat it. That is equally true for the history of science. Since conventional wisdom, policy makers, and so many of the pundits are ignorant of the fact that science routinely commits sins of both commission and omission, social and political policies continue to be made on the basis of a so-called scientific consensus that may quite often be unsound.

In Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (McFarland 2012), evidence is cited from well-qualified and respectable sources that the mainstream consensus is flawed on quite a number of topics. Some of these are of immediate concern only to scholars and researchers, for example about the earliest settlements of the Americas, or the extinction of the dinosaurs, or the mechanism of the sense of smell. Other topics, however, are of immediate public concern, for instance a possible biological basis for schizophrenia, or the cause of Alzheimer’s disease, or the possible dangers from mercury in tooth amalgams, or the efficacy of antidepressant drugs, or the hazards posed by second-hand tobacco smoke; and perhaps above all the unproven but dogmatic belief that human-generated carbon dioxide is the prime cause of global warming and climate change, and the long-held hegemonic belief that HIV causes AIDS.

The topic of cold nuclear fusion is an instance of a possible type-II error, a sin of omission, the mainstream refusal to acknowledge the strong evidence for potentially useful applications of nuclear-atomic transformations that can occur under quite ordinary conditions.

On these, and on quite a few other matters* as well, the progress of science and the well-being of people and of societies are greatly hindered by the widespread ignorance of the fact that science always has been and will continue to be fallible, committing sins of both omission and commission that become corrected only at some later time — if at all.

On matters that influence public policies directly, policy-makers would be greatly helped if they could draw on historically well-informed, technically insightful, and above all impartial assessments of the contemporary mainstream consensus. A possible approach to providing such assistance would be the establishing of a Science Court; see chapter 12 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017).

 

—————————————————-

*    Type-I errors are rife in the misapplications of statistics in medical matters, including the testing and approval of new drugs and vaccines; see the bibliography, What’s Wrong with Present-Day Medicine.
      For a number of possible type-II errors, see for instance The Anomalist and the publications of the Society for Scientific Exploration and the Gesellschaft für Anomalistik.

Posted in consensus, funding research, global warming, media flaws, medical practices, peer review, politics and science, resistance to discovery, science is not truth, science policy, scientific culture, scientific literacy, scientism, scientists are human, unwarranted dogmatism in science