Skepticism about science and medicine

In search of disinterested science

Archive for December, 2020

From uncritical about science to skeptical about science

Posted by Henry Bauer on 2020/12/31

Science has been so successful at unlocking Nature’s secrets, especially since about the 16th century, that by the early decades of the 20th century, science had become almost universally accepted as the trustworthy touchstone of knowledge about and insight into the material world. In many ways and in many places, science has superseded religion as the ultimate source of truth.
Yet in the 21st century, an increasing number and variety of voices are proclaiming that science is not — or no longer — to be trusted.
Such disillusion is far from unanimous, but I certainly share it [1], as do many others [2, 3], including such well-placed insiders as editors of scientific periodicals.
How drastically different 21st-century science is from the earlier modern science that won such status and prestige seems to me quite obvious; yet the popular view seems oblivious to this difference. Official statements from scientific authorities and institutions are still largely accepted automatically, unquestioningly, by the mass media and, crucially, by policy-makers and governments, including international collaborations.
Could my opinion about a decline in the trustworthiness of science be erroneous?
If not, why has what seems so obvious to me been overlooked by the overwhelming majority of practicing researchers, by pundits, by scholars of scientific activity, and by science writers and journalists?

That conundrum had me retracing the evolution of my views about science, from my early infatuation with it to my current disillusionment.
Almost immediately I realized that I had happened to be in some of the right places at some of the right times [4] with some of the right curiosity to be forced to notice the changes taking place; changes that came piecemeal over the course of decades.
That slow progression will also have helped me to modify my belief bit by bit. After all, beliefs are not easily changed. From trusting science to doubting it is quite a jump; for that to occur quickly would be like suddenly acquiring a religious belief, Saul struck on the road to Damascus, or perhaps the opposite, losing a faith as do the individuals who escape from cults, say Scientology — it happens quite rarely.
So it is natural but worth noting that my views changed slowly just as the circumstances of research were also changing, not all at once but gradually.
Of course I didn’t recognize at the time the accumulating significance of what I was noticing. That comes more easily in hindsight. Certainly I could not have begun to suspect that a book borrowed for light recreational reading would lead, a couple of decades later, to major changes of professional career.

Beginnings: Science, chemistry, unquestioning trust in science

I had become enraptured by science, and more specifically by chemistry, through an enthusiastic teacher at my high school in Sydney, Australia, in the late 1940s. My ambition was to become a chemist, researching and teaching, and I could imagine nothing more interesting or socially useful.
Being uncritically admiring of science came naturally to my cohort of would-be or potential scientists. It was soon after the end of the Second World War; and that science really understands the inner workings of Nature had been put beyond any reasonable doubt by the awesome manner in which the war ended, with the revelation of atomic bombs. I had seen the newspaper headlines, “Atom bomb used over Japan”, as I was on a street-car going home from high school, and I remember thinking, arrogantly, “Gullible journalism, swallowing propaganda; there’s no such thing as an atomic bomb”.

Learning that there was indeed such a thing made science seem yet more wonderful.

The successful ending of that war was also of considerable and quite personal significance for me. By ending it, “science” had brought a feeling of security and relief after years of high personal anxiety, even fear. When I was a 7-year-old school-boy, my family had escaped from Austria in the nick of time, just before the war started; and then in Australia we had experienced the considerable fear of a pending Japanese invasion, a fear made very real by periodic news of Japanese atrocities in China, for instance civilians being buried alive, as illustrated in photographs.
Trusting science was not only the Zeitgeist of that time and place, it was personally welcome, emotionally appealing.

The way sciences were taught only confirmed that science could be safely equated with truth. For that matter, all subjects were taught quite dogmatically. We just did not question what our teachers said; time and place, again. In elementary school we had sat with arms folded behind our backs until the teacher entered, when we stood up in silent respect. Transgressions of any sort were rewarded by a stroke of a cane on an outstretched hand.
(Fifty years later, in another country if not another world, a university student in one of my classes complained about getting a “B” and not an “A”.)

I think chemistry also conduces to trusting that science gets it right. Many experiments are easy to do, making it seem obvious that what we’ve learned is absolutely true.
After much rote learning of properties of elements and compounds, the Periodic Table came as a wonderful revelation: never would I have to do all that memorizing again; everything could be predicted just from that Table.
Laboratory exercises, in high school and later at university, worked just as expected; failures came only from not being adept or careful enough. The textbooks were right.

Almost nothing at school or university, in graduate as well as undergraduate years, aroused any concerns that science might not get things right. A year of undergraduate research and half-a-dozen years in graduate study brought no reason to doubt that science could learn Nature’s truths. Individuals could make mistakes, of course; I was taken aback when a standard reference resource, Chemical Abstracts, sent me erroneously to an article about NaI instead of NOI — human error, obviously, in transcribing spoken words.

Of course there was still much to learn, but no reason to question that science could eventually come to really understand all the workings of the material world.

Honesty in doing science was taken for granted. We heard the horror story of someone who had cheated in some way; his scientific studies were immediately terminated and he had to take a job somewhere as a junior administrator. Something I had written was plagiarized — the historical introduction in my PhD thesis — and the miscreant was roundly condemned, even as he claimed a misunderstanding. Individuals could of course go wrong, but that cast no doubt on the trustworthiness of Science itself.

In many ways, scientific research in Australia in the 1940s and 1950s enjoyed conditions not so different from those of the founding centuries of modern science, when the sole driving aim was to learn how the world works. In the universities, scientific research was very much part of training graduate students to do good science properly. The modest resources needed were provided by the university. No time or effort had to be spent seeking support from outside sources; there was no need to locate and kowtow to potential patrons, whether individuals or managers at foundations or government agencies.
Research of a more applied sort was carried out by the government-funded Council for Scientific and Industrial Research, CSIR (which later became a standard government agency, the Commonwealth Scientific and Industrial Research Organization, CSIRO). There the atmosphere was quite like that in academe: people more or less happily working at a self-chosen vocation. The aims of research were sometimes quite practical, typically how better to exploit Australia’s natural resources: plentiful coal, soft brown as well as hard black; or the wool being produced in abundance by flocks of sheep. CSIR also made some significant “pure science” discoveries, for example the importance of nutritional trace elements in agricultural soils [5], and it contributed to the development of radio astronomy [6].

In retrospect the lack of money-grubbing is quite striking. At least as remarkable, and not unrelated, is that judgments were made qualitatively, not quantitatively. People were judged by the quality, the significance, the importance of what they accomplished, rather than by how much of something they did. We judged our university teachers by their mastery of the subjects they taught and on how they treated us. Faculty appointments and promotions relied on personal recommendations. Successful researchers might often — and naturally — publish more than others, but not necessarily. Numbers of publications were not the most important thing, nor how often one’s publications were cited by others: The Science Citation Index was founded only in 1963, followed by the Social Sciences Citation Index in 1973 and the Arts and Humanities Citation Index a few years later. “Impact factors” of scientific journals had begun to be calculated in the early 1970s.
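(A concrete illustration of what such quantitative judging eventually came to mean; the notation below is mine, not taken from any of the sources cited here. A journal’s impact factor for a year Y is conventionally computed as a two-year citation ratio,

$$\mathrm{IF}_Y \;=\; \frac{C_{Y,\,Y-1} + C_{Y,\,Y-2}}{N_{Y-1} + N_{Y-2}}$$

where $C_{Y,\,Y-i}$ is the number of citations received in year $Y$ by items the journal published in year $Y-i$, and $N_{Y-i}$ is the number of citable items it published in year $Y-i$. A journal whose 200 articles of 2018–2019 drew 500 citations in 2020 would thus have a 2020 impact factor of 2.5. Nothing in that ratio measures the quality or importance of any individual paper, which is precisely the contrast with the qualitative judgments described above.)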

So in my years of learning chemistry and beginning research, nothing interfered with having an idealistic view of science, implicitly “pure” science, sheer knowledge-seeking. For my cohort of students, it was an attractive, worthy vocation. The most desired prospect was to be able to work at a university or a research institute. If one was less fortunate, it might be necessary to take a job in industry, which in those years was little developed in Australia, involving the manufacture of such uncomplicated or unsophisticated products as paint, the processing of sugar cane, or the technicalities of brewing beer, making wine, or distilling spirits.

The normal path to an academic career in Australia began with post-doctoral experience in either Britain or the United States. My opportunity came in the USA; there, in the late 1950s, I caught my first glimpses of what science would become, with an influx of funds from government and industry and the associated consequences, then unforeseen, if not unforeseeable, and at any rate not of any apparent concern.

——————————————-

[1]    Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017
[2]    Critiques of Contemporary Science and Academe
https://mega.nz/file/NfwkSR7S#K7llqDfA9JX_mVEWjPe4W-uMM53aMr2XMhDP6j0B208
[3]    What’s Wrong With Medicine; https://mega.nz/file/gWoCWTgK#1gwxo995AyYAcMTuwpvP40aaB3DuA5cvYjK11k3KKSU
[4]    Insight borrowed from Paula E. Stephen & Sharon G. Levin, Striking the Mother Lode in Science: The Importance of Age, Place, and Time, Oxford University Press, 1992
[5]    Best known is the discovery that cobalt supplements avoided “coast disease”, a wasting condition of sheep; see Gerhard N. Schrauzer, “The discovery of the essential trace elements: An outline of the history of biological trace element research”, chapter 2, pp. 17-31, in Earl Frieden, Biochemistry of the Essential Ultratrace Elements, Plenum Press, 1984; and the obituary, “Hedley Ralph Marston 1900-1965”; https://www.science.org.au/fellowship/fellows/biographical-memoirs/hedley-ralph-marston-1900-1965
[6] Stories of Australian Astronomy: Radio Astronomy; https://stories.scienceinpublic.com.au/astronomy/radio-astronomy/

Posted in conflicts of interest, fraud in science, funding research, scientific culture, scientism

The misleading popular myth of science exceptionalism

Posted by Henry Bauer on 2020/12/28

Human beings are fallible; but we suppose the Pope to be infallible on spiritual matters and science to be exceptional among human endeavors as correctly, authoritatively knowledgeable about the workings of the material world. Other sources purporting to offer veritable knowledge may be fallible — folklore, history, legend, philosophy — but science can be trusted to speak the truth.

Scholars have ascribed the infallibility of science to its methodology and to the way scientists behave. Science is thought to employ the scientific method, and behavior among scientists is supposedly described by the Mertonian Norms. Those suppositions have somehow seeped into the conventional wisdom. Actually, however, contemporary scientific activity does not proceed by the scientific method, nor do scientists behave in accordance with the Mertonian Norms. Because the conventional wisdom is so wrong about how science and scientists work, public expectations about science are misplaced, and public policies and actions thought to be based on science may be misguided.

Contemporary science is unrecognizably different from the earlier centuries of modern science (commonly dated as beginning around the 16th century). The popular view was formed by those earlier times, and it has not yet absorbed how radically different the circumstances of scientific activities have become, increasingly since the middle of the 20th century.

Remarkable individuals were responsible for the striking achievements of modern science that brought science its current prestige and status; and there are still some remarkably talented people among today’s scientists. But on the whole, scientists or researchers today are much like other white-collar professionals [1: p. 79], subject to conflicts of interest and myriad annoyances and pressures from patrons and outside interests; 21st century “science” is just as interfered with and corrupted by commercial, ideological, and political forces as are other sectors of society, say education, or justice, or trade.

Modern science developed through the voluntary activities of individuals sharing the aim of understanding how Nature works. The criterion of success was that claimed knowledge be true to reality. Contemporary science, by contrast, is not a vocation carried on by self-supporting independent individuals; it is done by white-collar workers employed by a variety of for-profit businesses and industries and not-for-profit colleges, universities, and government agencies. Even though many researchers still genuinely aim to learn truths about Nature, their prime responsibility is to do what their employers demand, and that can conflict with being wholeheartedly truthful.

The scientific method and the Mertonian Norms do not encompass the realities of contemporary science

The myth of the scientific method has been debunked at book length [2]. It should suffice, though, just to point out that the education and training of scientists may not even include mention of the so-called scientific method.

I had experienced a bachelor’s-degree education in chemistry, a year of undergraduate research, and half-a-dozen years of graduate research leading to both a master’s degree and a doctorate before I ever heard of “the scientific method”. When I eventually did, I was doing postdoctoral research in chemistry (at the University of Michigan); and I heard of “the scientific method” not from my sponsor and mentor in the Chemistry Department but from a graduate student in political science. (Appropriately enough, because it is the social and behavioral sciences, as well as some medical doctors, who make a fetish of claiming to follow the scientific method, in the attempt to be granted as much prestige and trustworthiness as physics and chemistry enjoy.)

The scientific method would require individuals to change their beliefs readily whenever the facts seem to call for it. But if there is one thing that psychology and sociology can agree on, it is that it is very difficult, and quite rare, for individuals or groups to modify a belief once it has become accepted. The history of science is consonant with that understanding: new and better understanding is persistently resisted by the majority consensus of the scientific community for as long as possible [3, 4]; pessimistically, in the words of Max Planck, until the proponents of the earlier belief have passed away [5]; as one might put it, science progresses one funeral at a time.

The Mertonian norms [6], too, are more myth than actuality. They are, in paraphrase:

•  Communality or communalism (Merton had said “communism”): Science is an activity of the whole scientific community and it is a public good — findings are shared freely and openly.
•  Universalism: Knowledge about the natural world is universally valid and applicable. There are no separations or distinctions by nationality, religion, race, sex, etc.
•  Disinterestedness: Science is done for the public good, not for personal benefit; scientists seek to be impartial, objective, unbiased, not self-serving.
•  Skepticism: Claims and reported findings are subject to critical appraisal and testing throughout the scientific community before they can be accepted as proper scientific knowledge.

As with the scientific method, these norms suggest that scientists behave in ways that do not come naturally to human beings. Free communal sharing of everything might perhaps have characterized human society in the days of hunting and foraging [7], but it was certainly not the norm in Western society at the time of the Scientific Revolution and the beginnings of modern science. Disinterestedness is a very strange trait to attribute to a human being, voluntarily doing something without having any personal interest in the outcome; at the very least, there is surely a strong desire that what one does should be recognized as the good and right way to do things, as laudable in some way. Skepticism is no more natural than is the ready willingness to change beliefs demanded by the scientific method.

As to universalism: that goes without saying if claimed knowledge is actually true; it has nothing to do with behavior. If some authority attempts to establish something that is not true, it just becomes a self-defeating, short-lived dead end, like the Stalinist “biology” of Lysenko or the Nazi non-Jewish “Deutsche Physik” [8].

Merton wrote that the norms, the ethos of science, “can be inferred from the moral consensus of scientists as expressed in use and wont, in countless writings on the scientific spirit and in moral indignation directed toward contraventions of the ethos” [6]. That falls short of claiming to have found empirically that scientists actually behave like that for the inferred reasons.

Merton’s norms are a sociologist’s speculation that the successes of science could only have come if scientists behaved like that; just as “the scientific method” is a philosophers’ guess that true knowledge could only be arrived at if knowledge seekers proceeded like that.

More compatible with typical human behavior would be the following:

Early modern science became successful after the number of people trying to understand the workings of the natural world reached some “critical mass”, under circumstances in which they could be in fairly constant communication with one another. Those circumstances came about in the centuries following the Dark Ages in Europe. Eventually various informal groups began to meet; then more formal “academies” were established (of which the Royal Society of London is iconic, as well as still in existence). Exchanges of observations and detailed information were significantly aided by the invention of inexpensive printing. Relatively informal exchanges became more formal, as Reports and Proceedings of Meetings, leading to what are now scientific journals and periodicals (some of which still bear the time-honored title of “Proceedings of . . .”).

Once voluntary associations had been established among individuals whose prime motive was to understand Nature, some competition, some rivalry, and also some cooperation will have followed automatically. Everyone wanted to get it right, and to be among the first to get it right, so the criterion for success was the concurrence and approval of the others who were attempting the same thing. Open sharing was then a matter of self-interest and therefore came naturally, because one could obtain approval and credit only if one’s achievements were known to others. Skepticism was provided by those others: one had to get it right in order to be convincing. There was no need at all for anyone to be unnaturally disinterested. (This scenario is essentially the one Michael Polanyi described by the analogy of communally putting together a jigsaw puzzle [2: pp. 42-44, passim; 9].)

Such conditions of free, voluntary interactions among individuals sharing the sole aim of understanding Nature, something like intellectual free-market conditions, simply do not exist nowadays; few if any researchers can be self-supporting, independent, intellectual entrepreneurs. Most are employees, and thereby beholden to and restricted by the aims and purposes of those who hold the purse-strings.

Almost universally nowadays, the gold standard of reliability is thought to be “the peer-reviewed mainstream literature”. But it would be quite misleading to interpret peer review as the application of organized skepticism, “critical appraisal and testing throughout the scientific community”. As most productive researchers well know, peer review does not guarantee the accuracy or objectivity or honesty of what it passes. In earlier times, genuine and effective peer review took place within the whole scientific community after full details of claimed results and discoveries had been published. Nowadays, in sharp contrast, so-called peer review is carried out by a small number of individuals chosen by journal editors to advise on whether reported claims should even be published. Practicing and publishing researchers know that contemporary so-called peer review is riddled with bias, prejudice, ignorance, and general incompetence. But even worse than the failings of peer review in decisions concerning publication is the fact that the same mechanism is used to decide what research should be carried out, and even how it should be carried out [1: pp. 106-9, passim].

Contemporary views of science, and associated expectations about science, are dangerously misplaced because of the pervasive mistaken belief that today’s scientific researchers are highly talented, exceptional individuals in the mold of Galileo, Newton, Einstein, etc., and that they are unlike normal human beings in being disinterested, seeking only to serve the public good, disseminating their findings freely, self-correcting by changing their theories whenever the facts call for it, and perpetually skeptical about their own beliefs.

Rather, a majority consensus nowadays exercises dogmatic hegemony, insisting on theories contrary to fact on a number of topics, including such publicly important ones as climate change and HIV/AIDS [10].

————————————————-

[1]    Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017
[2]    Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, 1992;
“I would strongly recommend this book to anyone who hasn’t yet heard that the scientific method is a myth. Apparently there are still lots of those folks around”
(David L. Goodstein, Science, 256 [1992] 1034-36)
[3]    Bernard Barber, “Resistance by scientists to scientific discovery”,
 Science, 134 (1961) 596-602
[4]    Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970 (2nd ed., enlarged; 1st ed. 1962)
[5]    Max Planck, Scientific Autobiography and Other Papers, 1949; translated from German by Frank Gaynor, Greenwood Press, 1968
[6]    Robert K. Merton, “The normative structure of science” (1942); pp. 267–78 in The Sociology of Science (ed. N. Storer, University of Chicago Press, 1973)
[7]    Christopher Ryan & Cacilda Jethá, Sex at Dawn: The Prehistoric Origins of Modern Sexuality, HarperCollins, 2010
[8]    Philipp Lenard, Deutsche Physik, J. F. Lehmann (Munich), 1936
[9]    Michael Polanyi, “The Republic of Science: Its political and economic theory”,
Minerva, I (1962) 54-73
[10]  Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012

Posted in conflicts of interest, consensus, funding research, media flaws, peer review, politics and science, resistance to discovery, science is not truth, scientific culture, scientism, scientists are human, the scientific method, unwarranted dogmatism in science

Science Court: Why and What

Posted by Henry Bauer on 2020/12/16

The idea for what has come to be called a Science Court was proposed half a century ago by Arthur Kantrowitz [1].

The development of nuclear reactors as part of the atom-bomb project made it natural to contemplate the possibility of generating power for civil purposes by means of nuclear reactors (the reactor at Hanford that made plutonium for the Nagasaki bomb was also the first full-scale nuclear reactor ever built [2]).

The crucial question was whether power-generating nuclear reactors could be operated safely. The technical experts were divided over that, and Kantrowitz proposed that an “Institution for Scientific Judgment” was needed to adjudicate the opposing opinions.

In those years, scientific activity was still rather as it had been in pre-WWII times: a sort of ivory-tower cottage industry of largely independent intellectual entrepreneurs who shared the aim of learning how the material world works. Mediating opposing opinions could then seem like a relatively straightforward matter of comparing data and arguments. Half a century later, however, scientific activity has pervaded business, commerce, and medical practices, and research has become intensely competitive, with cutthroat competition for resources and opportunities for profit-making and achieving personal wealth and influence. Conflicts of interest are ubiquitous and inescapable [3]. Mediating opposing technical opinions is now complicated because public acceptance of a particular view has consequences for personal and institutional power and wealth; deciding what “science” truly says is hindered by personal conflicts of interest, Groupthink, and institutional conflicts of interest.

Moreover, technical disagreements nowadays are not between more or less equally placed technical experts; they are between a hegemonic mainstream consensus and individual dissenters. The consensus elite controls what the media and the public learn about “science”, as the “consensus” dominates “peer review”, which in practice determines all aspects of scientific activity, for instance the allocation of positions and research resources and the publication (or suppression) of observations or results.

It has become quite common for the mainstream consensus to effectively suppress minority views and anomalous research results, often dismissing them out of hand, not infrequently labeling them pejoratively as denialist or flat-earther crackpot [4]. Thereby the media, the public, and policymakers may not even become aware of the existence of competent, plausible dissent from a governing consensus.

The history of science is, however, quite unequivocal: Over the course of time, a mainstream scientific consensus may turn out to be inadequate and to be replaced by previously denigrated and dismissed minority views.

Public actions and policies might bring about considerable damage if based on a possibly mistaken contemporary scientific consensus. Since nowadays a mainstream consensus so commonly renders minority opinions invisible to society at large, some mechanism is needed to enable policymakers to obtain impartial, unbiased advice as to whether minority views on matters of public importance should be taken into consideration.

That would be the prime purpose of a Science Court. The Court would not be charged with deciding or declaring what “science” truly says. It would serve just to force openly observed substantive engagement among the disagreeing technical experts — “force” because the majority consensus typically refuses voluntarily to engage substantively with dissident contrarians, even in private.

In a Court, as the elite consensus and the dissenters present their arguments and their evidence, points of disagreement would be made publicly visible and also clarified under mutual cross-examination. That would enable lay observers — the general public, the media, policymakers — to arrive at reasonably informed views about the relative credibility of the proponents of the majority and minority opinions, through noting how evasive or responsive or generally confidence-inspiring they are. Even if no immediate resolution of the differences of opinion could be reached, at least policymakers would be sufficiently well-informed about what public actions and policies might plausibly be warranted and which might be too risky for immediate implementation.

A whole host of practical details can be specified only tentatively at the outset, since they will likely need to be modified over time as the Court gains experience. Certain at the beginning is that public funding is needed, as well as absolute independence, as with the Supreme Court of the United States. Indeed, a Science Court might well be placed under the general supervision of the Supreme Court. While the latter might not at first welcome such additional responsibilities, that might change, since the legal system is currently not well equipped to deal with cases where technical issues are salient [5]. For example, deciding who should be accepted as an expert technical witness runs into the same difficulty as adjudicating opposing expert opinions: choosing between a hegemonic majority consensus and entirely competent expert dissenters.

Many other details need to be worked out: permanent staffing of the Court as well as temporary staffing for particular cases; appointment or selection of advocates for opposing views; how to choose issues for consideration; the degree and type of authority the Court could exercise, given that a majority consensus would usually be unwilling to engage voluntarily with dissidents. These questions, and more, have been discussed elsewhere [6]. As already noted, however, if a Science Court is actually established, its unprecedented nature would inevitably make progressive modification of its practices desirable in the light of accumulating experience.

————————————————-

[1]    Arthur Kantrowitz, “Proposal for an Institution for Scientific Judgment”, Science, 156 (1967) 763-64

[2]    Steve Olson, The Apocalypse Factory, W. W. Norton, 2020

[3]    Especially chapter 1 in Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017

[4]    Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012

[5]    Andrew W. Jurs, “Science Court: Past proposals, current considerations, and a suggested structure”, Drake University Legal Studies Research Paper Series, Research Paper 11–06 (2010); Virginia Journal of Law and Technology, 15 #1

[6]    Chapter 12 in Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017

Posted in conflicts of interest, consensus, denialism, funding research, peer review, politics and science, resistance to discovery, science is not truth, science policy, scientific culture, scientism, unwarranted dogmatism in science

Can science regain credibility?

Posted by Henry Bauer on 2020/12/09

Some of the many critiques of contemporary science and medicine [1] have suggested improvements or reforms: among them, ensuring that empiricism and fact determine theory rather than the other way around [2]; more competent application of statistics; awareness of biases as a way of decreasing their influence [1, 2, 3].

Those suggestions call for individuals in certain groups, as well as those groups and institutions as a whole, to behave differently than they have been behaving: researchers, editors, administrators, patrons; universities, foundations, government agencies, and commercial sponsors of research.

Such calls for change are, however, empty whistling in the wind if not based on an understanding of why those individuals and those groups have been behaving in ways that have caused science as a whole to lose credibility — in the eyes of much of the general public, but not only the general public: a significant minority of accomplished researchers and other informed insiders have concluded that on any number of topics the mainstream “consensus” is flawed or downright wrong, not properly based on the available evidence [4].

It is a commonplace to remark that science displaced religion as the authoritative source of knowledge and understanding, at least in Western civilization during the last few centuries. One might then recall the history of religion in the West, and that corruption of its governing institutions eventually brought rebellion: the Protestant Reformation, the Enlightenment, and the enshrining of science and reason as society’s hegemonic authority; so it might seem natural now to call for a Scientific Reformation to repair the institutions of science that seem to have become corrupted.

The various suggestions for reform have indeed called for change in a number of ways: in how academic institutions evaluate the worth of their researchers; in how journals decide what to publish and what not to publish; in how the provision of research resources is decided; and so forth and so on. But such suggestions fail to get to the heart of the matter. The Protestant Reformation was seeking the repair of a single, centrally governed, institution. Contemporary science, however, comprises a whole collection of institutions and groups that interact with one another in ways that are not governed by any central authority.

The way “science” is talked and written about is highly misleading, since no single word can properly encompass all its facets or aspects. The greatest source of misunderstanding comes about because scientific knowledge and understanding do not generate themselves or speak for themselves; so in common discourse, “science” refers to what is said or written about scientific knowledge and theories by people — who are, like all human beings, unavoidably fallible, subject to a variety of innate ambitions and biases as well as external influences; and hindered and restricted by psychological and social factors — psychological factors like confirmation bias, which gets in the way of recognizing errors and gaps, social factors like Groupthink, which pressures individuals not to deviate from the beliefs and actions of any group to which they belong.

So whenever a claim about scientific knowledge or understanding is made, the first reaction should be, “Who says so?”

It seems natural to presume that the researchers most closely related to a given topic would be the most qualified to explain and interpret it to others. But scientists are just as human and fallible as others, so researchers on any given subject are biased towards thinking they understand it properly even though they may be quite wrong about it.

A better reflection of what the facts actually are would be the view that has become more or less generally accepted within the community of specialist researchers, and thereby in the scientific community as a whole; in other words, what research monographs, review articles, and textbooks say — the “consensus”. Crucially, however, as already noted, any contemporary consensus may be wrong, in small ways or large or even entirely.

Almost invariably there are differences of opinion within the specialist and general scientific communities, particularly but not only about relatively new or recent studies. Unanimity is likely only over quite simple matters where the facts are entirely straightforward and readily confirmed; but such simple and obvious cases are rare indeed. Instead of unanimity, the history of science is a narrative of perpetual disagreements as well as (mostly but not always) their eventual resolution.

On any given issue, the consensus is not usually unanimous as to “what science says”. There are usually some contrarians, some mavericks among the experts and specialist researchers, some unorthodox views. Quite often, it turns out eventually that the consensus was flawed or even entirely wrong, and what earlier were minority views then become the majority consensus [5, 6].

That perfectly normal lack of unanimity, the common presence of dissenters from a “consensus” view, is very rarely noted in the popular media and remains hidden from the conventional wisdom of society as a whole — most unfortunately and dangerously, because it is hidden also from the general run of politicians and policymakers. As a result, laws on all sorts of issues, and many officially approved practices in medicine, may come to be based on a mistaken scientific consensus; or, as President Eisenhower put it [7], public policies might become captive to a scientific-technological elite, those who constitute and uphold the majority consensus.

The unequivocal lesson that modern societies have yet to learn is that any contemporary majority scientific consensus may be misleading. Only once that lesson has been learned will it be noted that there exists no established safeguard to prevent public policies and actions from being based on erroneous opinions. There exists no overarching Science Authority to whom dissenting experts could appeal in order to have the majority consensus reconsidered in light of evidence offered by the contrarian experts; no overarching Science Authority, and no independent, impartial, unbiased adjudicators or mediators or interpreters to guide policymakers in what the actual science might indicate as the best direction.

That’s why the time is ripe to consider establishing a Science Court [8].

——————————————–

[1]    CRITIQUES OF CONTEMPORARY SCIENCE AND ACADEME; WHAT’S WRONG WITH PRESENT-DAY MEDICINE

[2]    See especially, about theoretical physics, Sabine Hossenfelder, Lost in Math: How Beauty Leads Physics Astray, Basic Books, 2018

[3]    Stuart Ritchie, Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth, Metropolitan Books (Henry Holt & Company), 2020

[4]    A number of examples are discussed in Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012

[5]    Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596-602

[6]    Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970 (2nd ed., enlarged; 1st ed. 1962)

[7]    Dwight D. Eisenhower, Farewell speech, 17 January 1961; transcript at http://avalon.law.yale.edu/20th_century/eisenhower001.asp

[8]    Chapter 12 in Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017

Posted in conflicts of interest, consensus, fraud in science, media flaws, medical practices, peer review, politics and science, resistance to discovery, science is not truth, science policy, scientific culture, scientists are human, unwarranted dogmatism in science

Dilemmas for a skeptical scientist living in CoVID-19 USA

Posted by Henry Bauer on 2020/12/06

Anthony Fauci was and remains wrong about HIV/AIDS [1]. But everyone can be wrong about one thing and yet right about another; so might Fauci be essentially right about CoVID-19?

Robert Redfield, current director of the Centers for Disease Control & Prevention (CDC), was a member of the HIV Research Group that failed to follow up conundrums about “HIV tests” in the earliest days: the very conundrums that reveal the inadequacies of the accepted views about HIV. Nothing in Redfield’s record inspires confidence in his judgment, quite the contrary [2].

Moreover, even before Redfield, the CDC had failed miserably concerning CoVID-19 tests in the early days. How can I now trust any of the data and analyses issued by the CDC? It was their faulty, statistically incompetent classification of the early AIDS sufferers that laid the basis for the mistaken view of an infectious disease [3]; and they ignored the HIV-test conundrums when these were pointed out to them [#514 in The Case against HIV].

A large proportion of my colleagues in Rethinking AIDS [4] have extrapolated the lack of credibility of Fauci, the CDC, et al. to conclude that CoVID-19 is not dangerously different from the normal influenza-like illnesses (ILI) of every global winter season. Certainly the age dependence of CoVID-19 mortality seems to be much like that of ILI mortality.

As against that, the number of deaths attributed to CoVID-19 in the USA is, by the end of 2020, significantly greater than in the worst ILI season — according to CDC data, of course. Furthermore, comparison of the United States with other countries, particularly Taiwan, Australia, and New Zealand, seems to support the view that CoVID-19 is exceptionally contagious and that its spread can be greatly restricted by lockdowns, social distancing, and mask-wearing.

On the other hand, HIV/AIDS-based understanding (as well as a priori reasoning) discredits RT-PCR CoVID-19 testing as a reliable diagnosis of infection. And yet there does seem to be a strong correlation between reported positive CoVID-19 tests and observed morbidity and mortality. Perhaps indeed the genetic bits found or postulated to be characteristic of CoVID-19 do occur predominantly in individuals who have at some time been infected; some sources have suggested that the DNA or RNA sequences being looked for are fairly lengthy ones and thereby fairly specific to CoVID-19.

To resolve at all conclusively the differences between the official view and the dissident ones, far better data are needed than are presently available. Instead of bare totals, one needs to know how the numbers vary by age, by co-morbidities, and by diagnoses of the actual causes of morbidity and ultimate mortality, together with truly comparable data for ILI. Those data and comparisons are unlikely to be available until far in the future, when historians of medicine do the sort of retrospective investigative work that Michelle Cochrane did for AIDS patients [5].

So what to believe? Who to believe?

Official sources discredited themselves over HIV/AIDS and have not apparently learned from that; HIV=AIDS has never been disavowed, and that mistaken belief and invalid tests continue to bring unnecessary and toxic “treatment” to innumerable individuals.

That officialdom, including official science and medical science in general, has become widely discredited is illustrated by the public hand-wringing of many officials and commentators over the lack of public confidence in vaccines, a lack that is expected to interfere with widespread uptake of CoVID-19 vaccination.

The loss of credibility by official sources has been well earned. A selective bibliography [6] of critiques of contemporary science by scientists, researchers, science writers, and other commentators lists dozens of books and many articles, as well as a couple of specialist journals concerned solely with breaches of ethics and accountability in science. A companion bibliography [7] lists books, articles, and reports describing the failings of contemporary medicine and medical science.

As to vaccines, the case of HPV vaccines (Gardasil, Cervarix) demonstrates that not only can unproven and even unsafe vaccines be officially approved by the Food and Drug Administration for marketing, they can also then be vigorously promoted by the CDC [8].

In the absence of credible official authorities or sources, what to believe? Who to believe?

Needed reforms are suggested in many of the critical works [7, 8], but no significant actions have followed those suggestions.

————————————————

[1]    That HIV does not cause AIDS can be convincingly demonstrated to anyone who is willing to look at the actual facts available in the official literature, including peer-reviewed journals, collated in the bibliography at The Case against HIV; included are a couple of dozen books analyzing the data.
    My own book (#5 in The Case against HIV) came about because I followed up a statement clearly incompatible with the official view, searching the records of about two decades of reported HIV tests and finding that the results of those tests show that what the tests detect is not an infectious agent; see also my narrative of that emotionally stressful research (#514 in The Case against HIV).

 [2]   Laurie Garrett, “Meet Trump’s new, homophobic public health quack”, 23 March 2018;
     Laurie Garrett, “Why Trump’s new CDC director is an abysmal choice”, 13 May 2018;
    Kristen Holmes, Nick Valencia & Curt Devine (CNN), “CDC woes bring Director Redfield’s troubled past as an AIDS researcher to light”, 5 June 2020;
    Tim Murphy, “Robert Redfield’s epic COVID failure is not a surprise to many HIV and public health experts”, 28 September 2020

[3]    John Lauritsen, chapter 1 in The AIDS War: Propaganda, Profiteering and Genocide from the Medical-Industrial Complex, ASKLEPIOS, 1993

[4]     Established to promote understanding that HIV does not cause AIDS, http://www.virusmyth.com/aids. Up-to-date website is https://rethinkingaids.com

[5]    Michelle Cochrane, When AIDS Began: San Francisco and the Making of an Epidemic, Routledge, 2004

[6]    CRITIQUES OF CONTEMPORARY SCIENCE AND ACADEME

[7]    WHAT’S WRONG WITH PRESENT-DAY MEDICINE

[8]    Sacrificial Virgins (a documentary);
    Mary Holland & Kim Mack Rosenberg, The HPV Vaccine On Trial: Seeking Justice For A Generation Betrayed, Skyhorse, 2018
    HPV vaccines: risks exceed benefits; HPV vaccination: a thalidomide-type scandal;   
    HPV does not cause cervical cancer; HPV, Cochrane review, and the meaning of “cause”

Posted in media flaws, medical practices, science policy, scientists are human, unwarranted dogmatism in science