Skepticism about science and medicine

In search of disinterested science

Archive for the ‘peer review’ Category

Science is NOT self-correcting (How science has changed — VII)

Posted by Henry Bauer on 2018/05/06

One of the popular shibboleths about science is that it is self-correcting, which implies that correction happens inevitably and automatically. But despite the existence of innumerable scientific organizations and institutions, there is no overarching system, set of protocols, or hierarchy that governs all scientific activity. Nothing about scientific activity is automatic or inevitable.

The illusion of self-correction may trace back to the fact that science has surely progressed over time, to better and deeper understanding of how the world works, superseding and rejecting mistakes and misunderstandings. However, this correcting of earlier mis-steps was never automatic; more important, it was never a sure thing. Barber [1] surveyed the long history of hegemonic scientific consensuses vigorously resisting correction. Stent [2] described the phenomenon of “premature discovery” whereby some hegemonic scientific consensuses have forestalled correction for decades — about 40 years with Mendel’s quantitative insight into heredity, about half a century with Wegener’s insight into continental movements.

Barber and Stent dealt with the more-or-less classic modern science that subsisted up until about the middle of the 20th century, the sort of science whose ethos could be fairly adequately described by the Mertonian Norms [3]: a cottage industry of independent, voluntarily cooperating, largely disinterested ivory-tower intellectual entrepreneurs in which science was free to do its own thing, seeking truths about the natural world. Individuals were free to publish their results with little or no hindrance. There were plenty of journals and plenty of journal space, and editors were keen to receive contributions: “From the mid-1800s, there was more journal space than there were articles . . . . assistant editors [had the] . . . primary responsibility . . . to elicit articles and reviews to fill the pages of the publication” [4].

The onus for ensuring that published work was sound rested on the authors; there was no contemporary gauntlet of “peer reviewers” to run: “for most of the history of scientific journals, it has been editors — not referees — who have been the key decision-makers and gatekeepers. . . . It was only in the late 20th century that refereeing was rebranded as ‘peer review’ and acquired (or reacquired) its modern connotation of proof beyond reasonable doubt. . . . A Google ngram — which charts yearly frequencies of any phrase in printed documents — makes the point starkly visible: it was in the 1970s that the term ‘peer review’ became widely used in English. [We] . . . do not yet know enough about why the post-war expansion of scientific research . . . led to . . . ‘peer review’ [coming] . . . to dominate the evaluation of scholarly research” [5].
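That ngram curve is easy to inspect for oneself. The following minimal Python sketch queries the Google Books Ngram Viewer’s unofficial JSON endpoint for yearly frequencies of the phrase “peer review”; the endpoint, its parameters, and the corpus label used here are undocumented assumptions that Google may change at any time, so treat this as an illustration rather than a stable API.

```python
# Minimal sketch: fetch yearly frequencies of "peer review" from the
# Google Books Ngram Viewer. The JSON endpoint is unofficial and
# undocumented (an assumption here), so parameters may change.
import requests  # third-party: pip install requests

def ngram_series(phrase, year_start=1900, year_end=2019):
    """Return a list of (year, frequency) pairs for the phrase."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": year_start,
            "year_end": year_end,
            "corpus": "en-2019",   # assumed corpus label
            "smoothing": 0,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()             # list of result dicts, one per phrase
    series = data[0]["timeseries"]
    return list(zip(range(year_start, year_end + 1), series))

if __name__ == "__main__":
    for year, freq in ngram_series("peer review"):
        if year % 10 == 0:         # one point per decade suffices
            print(year, f"{freq:.2e}")
```

If the endpoint behaves as assumed, the printed frequencies should show the sharp rise in the 1970s that Fyfe describes.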

Nowadays, by contrast, when publication makes a career and lack of publication means career failure, journals are swamped with submissions at the same time as costs have exploded and libraries are hard pressed to satisfy their customers’ wishes for everything that gets published. Journals are now ranked in prestige by how small a proportion of submissions they accept, and “peer review” is pervaded by conflicts of interest. The overall consequence is that the “leading journals” hew to the current “scientific consensus”, so that unorthodoxies, radical novelties, and minority views find it difficult to get published. How extreme the efforts of “the consensus” to suppress dissent can be has been profusely documented on a number of topics, including the very publicly visible issues of HIV/AIDS and climate change [6, 7, 8].

Where the consensus happens to be in need of “self-correction”, in other words, today’s circumstances within the scientific community work against any automatic or easy or quick correction.

That situation is greatly exacerbated by the fact that correction nowadays is no simple revising of views within the scientific community. “Science” has become so entwined with matters of great public concern that particular beliefs about certain scientific issues have large groups of influential supporters outside the scientific community who seek actively to suppress dissent from “the consensus”; over HIV/AIDS, those groupies who abet the consensus include the pharmaceutical industry and activist organizations largely supported by drug companies; over climate change, environmentalists have seized on “carbon emissions” as a weapon in their fight for sustainability and stewardship of nature.

Science is not inevitably or automatically self-correcting. Its official agencies, such as the Food and Drug Administration, the Centers for Disease Control & Prevention, the National Institutes of Health, the World Health Organization, etc., are captives of the contemporary scientific consensus and thereby incapable of drawing on the insights offered by minority experts, which is also the case with the peer-review system and the professional journals.

Even when outright fraud or demonstrated honest mistakes have been published, there is no way to ensure that the whole scientific community becomes aware of subsequent corrections or retractions, so errors may continue to be cited as though they were reliable scientific knowledge. Even the journals regarded as the most reliable (e.g. Nature journals, Cell, Proceedings of the National Academy) make it quite difficult for retractions or corrections to be published [9], and even complete retraction seemed to reduce later citation by only about one-third, very far from “self-correcting” the whole corpus of science [10].

 

==========================================

[1]    Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596–602

[2]    Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, 84–93

[3]    How science has changed — II. Standards of Truth and of Behavior

[4]    Ray Spier, “The history of the peer-review process”, TRENDS in Biotechnology, 20 (2002) 357-8

[5]    Aileen Fyfe, “Peer review: not as old as you might think”, 25 June 2015

[6]    Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland, 2007

[7]    Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012

[8]    Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed, McFarland, 2017

[9]    “Science is self-correcting” (editorial), Lab Times, 2012, #1: 3

[10]  Mark P. Pfeifer & Gwendolyn L. Snodgrass, “The continued use of retracted, invalid scientific literature”, JAMA, 263 (1990) 1420–3

 



How science has changed — III. DNA: disinterest loses, competition wins

Posted by Henry Bauer on 2018/04/10

The Second World War marked a shift of economic and political power from Europe to the United States, with associated changes in the manner and style with which those powers are deployed. Science began to change at about the same time and in somewhat analogous and perhaps associated ways.

The change in the norms of science, from CUDOS to PLACE, that Ziman had described (How science has changed — II. Standards of Truth and of Behavior) began with what happened in the middle of the 20th century. The first of the Mertonian norms to fade away was disinterestedness: Science came to be like other spheres of human activity in that some people chose to pursue it as an avenue for satisfying personal ambition rather than as an opportunity to serve the public good.

My cohort of science students in Australia in the early 1950s had been notably idealistic about science. We could imagine no finer future than the opportunity to earn a living doing science. The relative absence of excessive personal ambition may have stemmed in large part from the fact that Australia was at that time a profoundly egalitarian society; no one should imagine himself to be “better” than anyone else [1].

Our ideals about science included taking honesty for granted, as Merton had.

Our ranking of desirable occupations had doing research in a university setting at the top. Those who were not good enough to do innovative self-directed research would still be able to have a place in science by working in industry. If one were not talented enough even for that, one would have to make do with teaching science. And if one could not even do that, then it would have to be some sort of administrative job. I still recall the minor functionary at the University of Sydney who represented a living lesson for us in the wages of sin: As a graduate student in chemistry, he had faked some of his results, and so he had been condemned to lifelong labor as a paper pusher.

The sea change in science around the middle of the 20th century is illustrated in microcosm by the circumstances of the discovery of the structure of DNA by James Watson and Francis Crick. Watson’s description of that discovery in his memoir, The Double Helix (Atheneum, 1968), and the reactions to that book in the scientific community, illustrate the profound changes in scientific activity beginning to take place around that time. Gunther Stent’s annotated edition of The Double Helix [2] provides a ready source for appreciating how the DNA discovery touches on many aspects of how scientific activity changed profoundly, beginning in the middle of the 20th century; the edition includes the original text of the book, commentaries, many of the original book reviews, and pertinent articles.

Watson himself, as portrayed in his own memoir, exemplifies the brash, personally ambitious American ignorant of or simply ignoring the traditional ways of doing things, in personal behavior as well as in doing science [3].

In Watson’s memoir, traditional ways including disinterestedness are exemplified by the Europeans Max Perutz and Erwin Chargaff. Perutz had been working diligently for a decade or so, gradually refining what could be learned about the structure of proteins through the technique of X-ray crystallography. With similar diligence Erwin Chargaff had been analyzing the chemical constitutions of DNA from a variety of different sources. Both those research approaches comported with traditional experience that carefully accumulating sufficient pertinent information would eventually be rewarded by important new understanding. In Britain, since Maurice Wilkins and Rosalind Franklin were working on DNA structure via X-ray crystallography, no other British lab would trespass onto that research project.

Watson of course had no such scruples, nor was he prepared to wait for the traditional ways to pay off; Watson’s own words make it appear that his prime motivation was to make a name for himself — any advance in human understanding, for the public good, would be a byproduct.

To short-circuit old-fashioned laborious approaches, he and his co-worker Francis Crick looked to what had been pioneered by another American, Linus Pauling, who is often still regarded as the outstanding chemist of the 20th century. Pauling did also use X-ray crystallography, but only as a secondary adjunct. He had laid the foundations for an understanding of chemical bonding and had been interested from the beginning in the three-dimensional structures of molecules; applying his insights to the study of macromolecules, he succeeded in elucidating the configuration of protein molecules in part by constructing feasible molecular models.

Traditional cosmopolitan European culture could be disdainful and snobbish toward the parvenu, nouveau-riche American ways that were taking over the world, including the world of science. Erwin Chargaff provides an apposite, rather sad illustration. Not only did he dislike Watson’s personality and actions; he also led himself to believe that his own diligent traditional work on the chemical composition of DNA should have been rewarded with a share of the Nobel Prize. Chargaff’s review [4] of The Double Helix flaunts his cultured erudition and also reveals his personal disappointment; later he refused Gunther Stent permission to reprint his review, in company with all the others, in Stent’s annotated edition.

The technical point at issue is that Chargaff had been content to allow results to accumulate until insight revealed itself rather than to take a gamble on some premature interpretation: he had merely remarked on an apparently consistent ratio of purines to pyrimidines in the DNA from a variety of sources [5]: “It is . . . noteworthy — whether this is more than accidental cannot yet be said — that in all deoxypentose nucleic acids examined thus far the molar ratios of total purines to total pyrimidines, and also of adenine to thymine and of guanine to cytosine, were not far from 1”.

The important insight, however, is that the numbers are exactly equal; adenine faces thymine, and guanine faces cytosine in the molecular structure of DNA, and that is the central and crucial feature of the double helix. In hindsight, Chargaff wanted his tentative statement of approximate equality to be construed as “the discovery of the base-pairing regularities” [4].
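For readers who want the technical point made concrete, here is a small illustrative Python sketch (mine, not from any of the sources cited) of the difference between Chargaff’s approximate ratios and the exact pairing of the double helix:

```python
# Toy illustration of Chargaff's regularities: in double-stranded DNA,
# A pairs with T and G with C, so the counts across both strands match.
from collections import Counter

def chargaff_ratios(seq):
    """Compute A/T, G/C, and purine/pyrimidine ratios for a DNA sequence."""
    counts = Counter(seq.upper())
    a, t, g, c = counts["A"], counts["T"], counts["G"], counts["C"]
    return {
        "A/T": a / t,
        "G/C": g / c,
        "purines/pyrimidines": (a + g) / (t + c),
    }

# Any sequence taken together with its complementary strand gives ratios
# of exactly 1 -- the equality Chargaff reported only as "not far from 1".
strand = "ATGCGCTTAT"
complement = strand.translate(str.maketrans("ATGC", "TACG"))
print(chargaff_ratios(strand + complement))
# {'A/T': 1.0, 'G/C': 1.0, 'purines/pyrimidines': 1.0}
```

Chargaff’s measured compositions of real DNA extracts could only ever be “not far from 1”, which is all he claimed; the exact equality is a structural consequence of base-pairing in the double helix.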

Erwin Chargaff may have been acerbic and ungenerous in his book review, but he will also have spoken for generations of scientists in his regret for the passing of the more idealistic, disinterested, traditional order and distaste for what was replacing it: “in our time a successful cancer researcher is not one who ‘solves the riddle,’ but rather one who gets a lot of money to do so” [6]; “Watson’s book may contribute to the much-needed demythologization of modern science”; “with very few exceptions, it is not the men that make science; it is science that makes the men” [4].

That disappearing idealistic traditional order might be exemplified in Sinclair Lewis’s Arrowsmith. Published in 1925 by Harcourt, Brace, according to amazon.com there have been more than 80 later editions, including a 2008 paperback. Evidently the yearning remains strong for disinterested science for the public good. The book’s protagonist, after some early mis-steps and yieldings to commercial temptations, opts for pure research for the good of humankind. Even a couple of decades ago, an academic of my generation (a biochemist) told me that he still gave his graduate students Arrowsmith to read as a guide to the proper ethos of science.

That occasion for being reminded of Arrowsmith was a series of seminars I was then holding on our campus about ethics in research [7], a topic that was just becoming prominent as instances of dishonesty in scientific work were beginning to be noted with increasing frequency.

More about that in a future blog post.

========================================

[1]    A widely shared view was that “tall poppies” should be decapitated. A highly educated Labor-Party leader was careful to adopt a working-class accent in public to hide his normal “educated”, British-BBC-type dialect. I personally saw fisticuffs occasioned by one party feeling that the other had thought themselves better in some way
[2]    Gunther S. Stent (ed.), The Double Helix — Text, Commentary, Reviews, Original Papers, W. W. Norton, 1980
[3]    I had begun to sense the new self-serving ethos in science in the late 1960s, after a career move from Australia to the USA. I encountered ambitious young go-getters who luxuriated in the [then!] largesse of research support, inserting personal pleasures into publicly funded research travel, for example studying aspects of marine environments in ways that made possible scuba-diving and general cavorting in the Caribbean. I participated in the WETS, one of the informal associations of young up-and-comers who used to sample fleshly diversions as part of research-grant-paid trips to professional conferences
[4]    Erwin Chargaff, “A quick climb up Mount Olympus”, Science, 159 (1968) 1448-9
[5]    Erwin Chargaff, “Chemical specificity of nucleic acids and mechanism of their enzymatic degradation”, Experientia, 6 (1950) 201-40
[6]    Erwin Chargaff, Voices in the Labyrinth, Seabury, 1977, p. 89
[7]    For instance, “Ethics in Science” under “Current topics in analytical chemistry: critical analysis of the literature”, 15 & 17 March 1994; reprinted at pp. 169–182 in Against the Tide, ed. Martín López Corredoira & Carlos Castro Perelman, Universal Publishers, 2008

 


How science has changed — II. Standards of Truth and of Behavior

Posted by Henry Bauer on 2018/04/08

The scientific knowledge inherited from ancient Babylon and Greece and from medieval Islam was gained by individuals or by groups isolated from one another in time as well as geography. Perhaps the most consequential feature of the “modern” science that we date from the 17th-century Scientific Revolution is the global interaction of the people who are doing science, and especially the continuity over time of their collective endeavors.
These interactions among scientists began in quite informal and individual ways. An important step was the formation of academies and societies, among which the Royal Society of London is usually acknowledged to be the earliest (founded 1660) that has remained active up to the present time — though it was not the earliest such institution and even the claim of “longest continually active” has been challenged [1].
Even nowadays, the global community of scientists remains in many ways informal despite the host of scientific organizations and institutions, national and international: the global scientific community is not governed by any formal structure that lays down how science should be done and how scientists should behave.
However, observing the actualities of scientific activity indicates that some agreed-on standards had evolved, generally seen within the community of scientists as defining proper behavior. Around the time of the Second World War, sociologist Robert Merton described those informal standards, and they came to be known as the “Mertonian Norms” of science [2]. They comprise:

•    Communality or communalism (Merton had said “communism”): Science is an activity of the whole scientific community and it is a public good — findings are shared freely and openly.
•    Universalism: Knowledge about the natural world is universally valid and applicable. There are no separations or distinctions by nationality, religion, race, or anything of that sort.
•    Disinterestedness: Science is done for the public good and not for personal benefit; scientists seek to be impartial, objective, unbiased, and not self-serving.
•    Skepticism: Claims and reported findings are subject to critical appraisal and testing throughout the scientific community before they can be accepted as proper scientific knowledge.

Note that honesty is not mentioned; it was simply taken for granted.
These norms clearly make sense for a cottage industry, as ideal behavior that individuals should aim for; but they are not appropriate for a corporate environment: they cannot guide the behavior of individuals who are part of some hierarchical enterprise.
In the late 1990s, John Ziman [3] discussed the change in scientific activity as it morphed from the work of an informal, voluntary collection of individuals seeking to understand how the world works to a highly organized activity with assigned levels of responsibility and authority, where sources of research funding have a say in what gets done and often expect to get something useful, something profitable, in return for their investments.
The early cottage industry of science had been essentially self-supporting. Much could be done without expensive equipment. People studied what was conveniently at hand, so there was little need for funds to support travel. Interested patrons and local benefactors could provide the small resources needed for occasional meetings and the publication of findings.
Up to about the middle of the 20th century, universities were able to provide the funds needed for basic research in chemistry and biology and physics. The first sign that exceptional resources could be needed had come in the early 1930s, when Ernest Lawrence constructed the first large “atom-smashing machines” (cyclotrons); but that and the need for expensive astronomical telescopes remained outliers in the requirements for the support of scientific research overall.
From about the time of the Second World War, however, research going beyond what had already been accomplished began to require ever more expensive and specialized equipment as well as considerable infrastructure: technicians to support the equipment, glass-blowers and secretaries and book-keepers and librarians, and managers of such ancillary staff; so researchers increasingly came to need support beyond that available from individual patrons or universities. Academic research came to rely increasingly on getting grants for specific research projects from public agencies or from wealthy private foundations.
Although those sources of research funds typically claim that they want to support simply “the best science”, their view of what the best science is does not necessarily jibe with the judgments of the individual researchers [4].
At the same time as research in universities was calling on outside sources of funding, an increasing number of industries were setting up their own laboratories for research specifically toward creating and improving their products and services. Such product-specific “R&D” (research and development) sometimes turned up novel basic knowledge, or revealed the need for such fundamentally new understanding. One consequence has been that some really striking scientific advances have come from such famous industrial laboratories as Bell Telephone Laboratories or the Research Laboratory of General Electric. Researchers employed in industry have received a considerable number of Nobel Prizes, often jointly with academics [5].
Under these new circumstances, as Ziman [3] pointed out, the traditional distinction between “applied” research and “pure” or “basic” research lost its meaning.
Ziman rephrased the Mertonian norms as the nice acronym CUDOS, adding the “O” for originality, quite appropriately since within the scientific community credit was and is given for the most innovative, original contributions; CUDOS, or preferably “kudos”, is the Greek term for acclaim of exceptional accomplishment. By contrast, Ziman proposed for the norms that obtain in a corporate scientific enterprise, be it government or private, the acronym PLACE: Researchers nowadays get their rewards not by adhering to the Mertonian norms but by producing Proprietary findings whose significance may be purely Local rather than universal, the subject of research having been chosen under the Authority of an employer or patron and not by the individual researcher, who is Commissioned to do the work as an Expert employee.

Ziman too did not mention honesty; like Merton he simply took it for granted.
Ziman had made an outstanding career in solid-state physics before, in his middle years, he began to publish, starting in 1968 [6], highly insightful works about how science functions, in particular what makes it reliable. In the late 1960s, it had still been reasonable to take honesty in science for granted; but by the time Ziman published Prometheus Bound, honesty in science could no longer be taken for granted, and Ziman had failed to notice some of what was happening in scientific activity. Competition for resources and for career advancement had increased to a quite disturbing extent, presumably the impetus for the increasing frequency with which scientists were found to have cheated in some way. Even published, supposedly peer-reviewed research often failed later attempts at confirmation, and all too often it was revealed as simply false, faked [7].
More about that in a following blog post.

==========================================

[1]    “The Royal Societies [sic] claim to be the oldest is based on the fact that they developed out of a group that started meeting in Gresham College in 1645 but unlike the Leopoldina this group was informal and even ceased to meet for two years between 1658 and 1660” — according to The Renaissance Mathematicus, “It wasn’t the first but…”
[2]    Robert K. Merton, “The normative structure of science” (1942); most readily accessible as pp. 267–78 in The Sociology of Science (ed. N. Storer, University of Chicago Press, 1973) a collection of Merton’s work
[3]    John Ziman, Prometheus Bound: Science in a Dynamic Steady State, Cambridge University Press, 1994
[4]    Richard Muller, awarded a prize by the National Science Foundation, pointed out that truly innovative studies are unlikely to be funded and need to be carried out more or less surreptitiously; and Charles Townes, who developed masers and lasers, testified to his difficulty in getting research support for that ground-breaking work, or even encouragement from some of his distinguished older colleagues —
Richard A. Muller, “Innovation and scientific funding”, Science, 209 (1980) 880–3
Charles Townes, How the Laser Happened: Adventures of a Scientist, Oxford University Press, 1999
[5]    Karina Cummings, “Nobel Science Prizes in industry”;
Nobel Laureates and Research Affiliations
[6]    John Ziman, Public Knowledge (1968); followed by The Force of Knowledge (1976); Reliable Knowledge (1978); An Introduction to Science Studies (1984); Prometheus Bound (1994); Real Science (2000); all published by Cambridge University Press
[7]    John P. A. Ioannidis, “Why most published research findings are false”, PLoS Medicine, 2 (2005) e124;
Daniele Fanelli, “How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data”, PLoS ONE, 4 (#5, 2009) e5738


How science has changed: Who are the scientists?

Posted by Henry Bauer on 2018/04/07

Scientists are people who do science. Nowadays, scientists are people who work at science as a full-time occupation and who earn their living at it.
Science means studying and learning about the natural world, and human beings have been doing that since time immemorial; indeed, in a sense all animals do that, but humans have developed efficient means to transmit gained knowledge to later generations.
At any rate, there was science long before [1] there were scientists, full-time professional students of Nature. Our present-day store of scientific knowledge includes things that have been known for at least thousands of years. For example, from more than 6,000 years ago in Mesopotamia (Babylon, Sumer) we still use base-60 mathematics for the number of degrees in the arcs of a circle (360) and the number of seconds in a minute and the number of minutes in an hour. We still cry “Eureka” (found!!) for a new discovery, as supposedly Archimedes did more than 2000 years ago when he recognized that floating an object in water was an easy way to measure its volume (by the increase in height of the water) and that the object’s weight equaled the weight of the water it displaced. The Islamic science of the Middle Ages has left its mark in language with, for instance, algebra or alchemy.
Despite those early pieces of science that are still with us today, most of what the conventional wisdom thinks it knows about science is based on what historians call “modern” science, which is generally agreed to have emerged around the 17th century in what is usually called The Scientific Revolution.
The most widely known bits of science are surely the most significant advances. Those are typically associated with the names of people who either originated them or made them popular [2]; so many school-children hear about Archimedes and perhaps Euclid and Ptolemy; and for modern science, even non-science college students are likely to hear of Galileo and Newton and Darwin and Einstein. Chemistry students will certainly hear about Lavoisier and Priestley and Wöhler and Haber; and so on, just as most of us have learned about general history in terms of the names of important individuals. So far as science is concerned, most people are likely to gain the general impression that it has been done and is being done by a relatively small number of outstanding individuals, geniuses in fact. That impression could only be entrenched by the common thought-bite that “science” overthrew “religion” sometime in the 19th century, leading to the contemporary role of science as society’s ultimate arbiter of true knowledge.
The way in which scientists in modern times have been featured in books and in films also gives the impression that scientists are somehow special, that they are by no means ordinary people. Roslynn Haynes [3] identified several stereotypes of scientists, for example “adventurer” or “the noble scientist as hero or savior of society”, with most stereotypes however being less than favorable — “mad, bad, dangerous scientist, unscrupulous in the exercise of power”. But no matter whether good or bad in terms of morals or ethics, society’s stereotype of “scientist” is “far from an ordinary person”.
That is accurate enough for the founders of modern science, but it became progressively less true as more and more people came to take part in some sort of scientific activity. Real change began in the early decades of the 19th century, when the term “scientist” seems to have been used for the first time [4].
By the end of the 19th century it had become possible to earn a living through being a scientist, through teaching or through doing research that led to commercially useful results (as in the dye-stuff industry) or through doing both in what nowadays are called research universities. By the early 20th century, scientists no longer deserved to be seen as outstanding individual geniuses, but they were still a comparatively elite group of people with quite special talents and interests. Nowadays, however, there is nothing distinctly elite about being a scientist. In terms of numbers (in the USA), scientists at roughly 2.7 million are comparable to engineers at 2.1 million (in ~2001), less elite than lawyers (~ 1 million) or doctors (~800,000); and teachers, at ~3.5 million, are almost as elite as scientists.
Nevertheless, so far as the general public and the conventional wisdom are concerned, there is still an aura of being special and distinctly elite associated with science and being a scientist, no doubt because science is so widely acknowledged as the ultimate authority on what is true about the workings of the natural world; and because “scientist” brings to most minds someone like Darwin or Einstein or Galileo or Newton.
So the popular image of scientists is wildly wrong about today’s world. Scientists today are unexceptional white-collar workers. Certainly a few of them could still be properly described as geniuses, just as a few engineers or doctors could be — or those at the high tail-end of any distribution of human talent; but by and large, there is nothing exceptional about scientists nowadays. That is an enormous change from times past, and the conventional wisdom has not begun to be aware of that change.
One aspect of that change is that the first scientists were amateurs seeking to satisfy their curiosity about how the world works, whereas nowadays scientists are technicians or technical experts who do what they are told to do by employers or enabled to do by patrons. A very consequential corollary is that the early scientists had nothing to gain by being untruthful, whereas nowadays the rewards potentially available to prominent scientists have tempted a significant number to practice varying degrees of dishonesty.
Another way of viewing the change that science and scientists have undergone is that science used to be a cottage industry largely self-supported by independent entrepreneurial workers, whereas nowadays science is a corporate behemoth whose workers are apparatchiks, cogs in bureaucratic machinery; and in that environment, individual scientists are subject to conflicts of interest and a variety of pressures owing to their membership in a variety of groups.

Science today is not a straightforward seeking of truth about how the world works; and claims emerging from the scientific community are not necessarily made honestly; and even when made honestly, they are not necessarily true. More about those things in future posts.

=======================================

[1]    For intriguing tidbits about pre-scientific developments, see “Timeline Outline View”
[2]    In reality, most discoveries hinge on quite a lot of work and learning that prefigured them and made them possible, as discussed for instance by Tony Rothman in Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). That what matters most is not the act of discovery but the making widely known is the insight embodied in Stigler’s Law, that discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II: 39 [1980] 147–58)
[3]    Roslynn D. Haynes, From Faust to Strangelove: Representations of the Scientist in Western Literature, Johns Hopkins University Press, 1994; also “Literature has shaped the public perception of science”, The Scientist, 12 June 1989, pp. 9, 11
[4]    William Whewell is usually credited with coining the term “scientist” in the early 1830s


Dangerous knowledge IV: The vicious cycle of wrong knowledge

Posted by Henry Bauer on 2018/02/03

Peter Duesberg, universally admired scientist, cancer researcher, and leading virologist, member of the National Academy of Sciences, recipient of a seven-year Outstanding Investigator Grant from the National Institutes of Health, was astounded when the world turned against him because he pointed to the clear fact that HIV had never been proven to cause AIDS and to the strong evidence that, indeed, no retrovirus could behave in the postulated manner.

Frederick Seitz, at one time President of the National Academy of Sciences and for some time President of Rockefeller University, became similarly non grata for pointing out that parts of an official report contradicted one another about whether human activities had been proven to be the prime cause of global warming (“A major deception on global warming”, Wall Street Journal, 12 June 1996).

A group of eminent astronomers and astrophysicists (among them Halton Arp, Hermann Bondi, Amitabha Ghosh, Thomas Gold, Jayant Narlikar) had their letter pointing to flaws in Big-Bang theory rejected by Nature.

These distinguished scientists illustrate (among many other instances involving less prominent scientists) that the scientific establishment routinely refuses to acknowledge evidence that contradicts contemporary theory, even evidence proffered by previously lauded fellow members of the elite establishment.

Society’s dangerous wrong knowledge about science includes the mistaken belief that science hews earnestly to evidence and that peer review — the behavior of scientists — includes considering new evidence as it comes in.

Not so. Refusal to consider disconfirming facts has been documented on a host of topics less prominent than AIDS or global warming: prescription drugs, Alzheimer’s disease, extinction of the dinosaurs, mechanism of smell, human settlement of the Americas, the provenance of Earth’s oil deposits, the nature of ball lightning, the evidence for cold nuclear fusion, the dangers from second-hand tobacco smoke, continental-drift theory, risks from adjuvants and preservatives in vaccines, and many more topics; see for instance Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland 2012. And of course society’s officialdom, the conventional wisdom, the mass media, all take their cue from the scientific establishment.

The virtually universal dismissal of contradictory evidence stems from the nature of contemporary science and its role in society as the supreme arbiter of knowledge, and from the fact of widespread ignorance about the history of science, as discussed in earlier posts in this series (Dangerous knowledge; Dangerous knowledge II: Wrong knowledge about the history of science; Dangerous knowledge III: Wrong knowledge about science).

The upshot is a vicious cycle. Ignorance of history makes it seem incredible that “science” would ignore evidence, so claims to that effect on any given topic are brushed aside, because it is not known that science has routinely ignored contrary evidence. But that fact can be recognized only by noting the accumulation of individual topics on which evidence has been ignored. That’s the vicious cycle.

Wrong knowledge about science and the history of science impedes recognizing that evidence is being ignored in any given actual case. Thereby radical progress is nowadays being greatly hindered, and public policies are being misled by flawed interpretations enshrined by the scientific consensus. Society has succumbed to what President Eisenhower warned against (Farewell speech, 17 January 1961):

in holding scientific research and discovery in respect, as we should,
we must also be alert to the equal and opposite danger
that public policy could itself become the captive
of a scientific-technological elite.

The vigorous defending of established theories and the refusal to consider contradictory evidence mean that once theories have been widely enough accepted, they soon become knowledge monopolies, and support for research establishes the contemporary theory as a research cartel (“Science in the 21st Century: Knowledge Monopolies and Research Cartels”).

The presently dysfunctional circumstances have been recognized only by two quite small groups of people:

  1. Observers and critics (historians, philosophers, sociologists of science, scholars of Science & Technology Studies)
  2. Researchers whose own experiences and interests happened to cause them to come across facts that disprove generally accepted ideas — for example Duesberg, Seitz, the astronomers cited above, etc. But these researchers only recognize the unwarranted dismissal of evidence in their own specialty, not that it is a general phenomenon (see my talk, “HIV/AIDS blunder is far from unique in the annals of science and medicine” at the 2009 Oakland Conference of Rethinking AIDS; mov file can be downloaded at http://ra2009.org/program.html, but streaming from there does not work).

Such dissenting researchers find themselves progressively excluded from mainstream discourse, and that exclusion makes it increasingly unlikely that their arguments and documentation will gain attention. Moreover, frustrated by a lack of attention from mainstream entities, dissenters from a scientific consensus find themselves listened to and appreciated increasingly only by people outside the mainstream scientific community to whom the conventional wisdom also pays no attention, for instance the parapsychologists, ufologists, cryptozoologists. Such associations, and the conventional wisdom’s consequent assigning of guilt by association, then further entrench the vicious cycle of dangerous knowledge that rests on the acceptance of contemporary scientific consensuses as not to be questioned — see chapter 2 in Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth and “Good Company and Bad Company”, pp. 118-9 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017).


Dangerous knowledge III: Wrong knowledge about science

Posted by Henry Bauer on 2018/01/29

In the first post of this series (Dangerous knowledge) I pointed to a number of specific topics on which the contemporary scientific consensus is doubtfully in tune with the actual evidence. That disjunction is ignored or judged unimportant both by most researchers and by most observers; and that, I believe, is because the fallibility of science is not common knowledge; which in turn stems from ignorance and wrong knowledge about the history of science and, more or less as a consequence, about science itself.

The conventional wisdom regards science as a thing that is characterized by the scientific method. An earlier post (Dangerous knowledge II: Wrong knowledge about the history of science) mentioned that the scientific method is not a description of how science is done; it was thought up in philosophical speculation about how science could have been so successful, most notably in the couple of centuries following the Scientific Revolution of the 17th century.

Just as damaging as misconceptions about how science is done is the wrong assumption that science is a single unchanging thing that can be described without explicit attention to how scientific activity, and the character of the people doing science, have changed over time, most drastically since the middle of the 20th century. What has happened since then, since World War II, affords the clearest, most direct understanding of why contemporary official pronouncements about matters of science and medicine need to be treated with the same skepticism as official pronouncements about matters of economics, say, or politics. As I wrote earlier (Politics, science, and medicine),

In a seriously oversimplified nutshell:

The circumstances of scientific activity have changed, from about pre-WWII to nowadays, from a cottage industry of voluntarily cooperating, independent, largely disinterested ivory-tower intellectual entrepreneurs in which science was free to do its own thing, namely the unfettered seeking of truth about the natural world, to a bureaucratic corporate-industry-government behemoth in which science has been pervasively co-opted by outside interests and is not free to do its own thing because of the pervasive conflicts of interest. Influences and interests outside science now control the choices of research projects and the decisions of what to publish and what not to make public.

 

For a detailed discussion of these changes in scientific activity, see Chapter 1 of Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017); less comprehensive descriptions are in Three Stages of Modern Science and The Science Bubble.

Official pronouncements are not made primarily to tell the truth for the public good. Statements from politicians are often motivated by the desire to gain favorable attention, as is widely understood. But less widely understood is that official statements from government agencies are also often motivated by the desire to gain favorable attention, to make the case for the importance of the agency (and its Director and other personnel) and the need for its budget to be considered favorably. Press releases from universities and other research institutions have the same ambition. And anything from commercial enterprises is purely self-interested, of course.

The stark corollary is that no commercial or governmental entity, nor any sizable not-for-profit entity, is devoted primarily to the public good and the objective truth. Organizations with the most laudable aims, Public Citizen, say, or the American Heart Association, etc. etc. etc., are admittedly devoted to doing good things, to serving the public good, but it is according to their own particular definition of the public good, which may not be at all the same as others’ beliefs about what is best for the public, for society as a whole.

Altogether, a useful generalization is that all corporate entities, private or governmental, commercial or non-profit, have a vested self-interest in the status quo, since that represents the circumstances of their raison d’être, their prestige, their support from particular groups in society or from society as a whole.

The hidden rub is that a vested interest in the status quo means defending things as they are, even when objective observers might note that those things need to be modified, superseded, abandoned. Examples from the past are legion and well known: in politics, say, the American involvement in Vietnam and innumerable analogous matters. But not so well known is that unwarranted defense of the status quo is also quite common on medical and scientific issues. The resistance to progress, the failure to correct mis-steps in science and medicine in any timely way, has been the subject of many books and innumerable articles; for selected bibliographies, see Critiques of Contemporary Science and Academe and What’s Wrong with Present-Day Medicine. Note that all these critiques have been effectively ignored to the present day, the flaws and dysfunctions remain as described.

Researchers who find evidence that contradicts the status quo, the established theories, learn the hard way that such facts don’t count. As noted in my above-mentioned book, science has a love-hate relationship with the facts: they are welcomed before a theory has been established, but after that only if they corroborate the theory; contradictory facts are anathema. Yet researchers never learn that unless they themselves uncover such unwanted evidence; scientists and engineers and doctors are trained to believe that their ventures are essentially evidence-based.

Contributing to the resistance against rethinking established theory is today’s hothouse, overly competitive, rat-race research climate. It is no great exaggeration to say that researchers are so busy applying for grants and contracts and publishing that they have no time to think new thoughts.


HPV vaccination: a thalidomide-type scandal

Posted by Henry Bauer on 2017/09/17

I’ve posted a number of times about the lack of proof that HPV causes cervical cancer and that the anti-HPV vaccines are being touted widely by officialdom as well as manufacturers even though the vaccines have been associated with an unusually high number of adverse reactions, some of them very severe, literally disabling.

Long-time medical journalist and producer of award-winning documentaries, Joan Shenton, has just made available the first of a projected trilogy, Sacrificial Virgins, about the dangers of anti-HPV vaccines: https://www.youtube.com/watch?v=KAzcMHaBvLs&feature=youtu.be

The website, WHAT DOCTORS WON’T TELL YOU, comments in this way: “HPV vaccine ‘a second thalidomide scandal’, says new YouTube documentary”

 


How to interpret statistics; especially about drug efficacy

Posted by Henry Bauer on 2017/06/06

How (not) to measure the efficacy of drugs pointed out that the most meaningful data about a drug are the number of people who need to be treated for one person to reap benefit (NNT) and the number who need to be treated for one person to be harmed (NNH).

But this pertinent, useful information is rarely disseminated, and most particularly not by drug companies. Most commonly cited are statistics about drug performance relative to other drugs or relative to placebo. Just how misleading this can be is described in easily understood form in this discussion of the use of anti-psychotic drugs.
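To see why relative measures mislead, consider a toy calculation in Python; the numbers are invented for illustration, not taken from Whitaker’s article:

```python
# Toy example: a "50% relative risk reduction" versus the NNT.
# NNT = 1 / ARR, where ARR is the absolute risk reduction;
# NNH is computed the same way from the absolute increase in harms.

def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    return 1.0 / arr

# Hypothetical trial: bad outcomes fall from 2% (placebo) to 1% (drug).
control, treated = 0.02, 0.01
rrr = (control - treated) / control           # relative risk reduction
print(f"Relative risk reduction: {rrr:.0%}")  # "50%" -- sounds impressive
print(f"NNT: {nnt(control, treated):.0f}")    # 100 treated for 1 helped
```

The same trial can be advertised as “cuts risk in half” or described, more informatively, as “100 people must take the drug for one to benefit”; only the latter can be weighed against the NNH.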

 

That article (“Psychiatry defends its antipsychotics: a case study of institutional corruption” by Robert Whitaker) has many other points of interest. Most important, of course, is its potent demonstration that official psychiatric practice is not evidence-based; rather, its aim is to defend the profession’s current approach.

 

In these ways, psychiatry differs only in degree from the whole of modern medicine — see WHAT’S WRONG WITH PRESENT-DAY MEDICINE — and indeed from contemporary science on too many matters: Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland 2012.


Climate-change orthodoxy: alternative facts, uncertainty equals certainty, projections are not predictions, and other absurdities of the “scientific consensus”

Posted by Henry Bauer on 2017/05/10

G. K. Chesterton once suggested that the best argument for accepting the Christian faith lies in the reasons offered by atheists and skeptics against doing so. That interesting slant sprang to mind as I was trying to summarize the reasons for not believing the “scientific consensus” that blames carbon dioxide for climate change.

Of course the very best reason for not believing that CO2 causes climate change is the data, as summarized in an earlier post:

–>      Global temperatures have often been high while CO2 levels were low, and vice versa

–>     CO2 levels rise or fall after temperatures have risen or fallen

–>     Temperatures decreased between the 1940s and 1970s, and since about 1998 there has been a pause in warming, perhaps even cooling, while CO2 levels have risen steadily.

But disbelieving the official propaganda becomes much easier when one recognizes the sheer absurdities and illogicalities and self-contradictions committed unceasingly by defenders of the mainstream view.

1940s-1970s cooling
Mainstream official climate science is centered on models: computer programs that strive to simulate real-world phenomena. Any reasonably detailed description of such models soon reveals that there are far too many variables and interactions to make that feasible; and moreover that a host of assumptions are incorporated in all the models (1). In any case, the official models do not simulate the cooling trend of these three decades.
“Dr. James Hansen suspects the relatively sudden, massive output of aerosols from industries and power plants contributed to the global cooling trend from 1940-1970” (2).
But the models do not take aerosols into account; they are so flawed that they are unable to simulate a thirty-year period in which carbon emissions were increasing and temperatures decreasing. An obvious conclusion is that no forecast based on those models deserves to be given any credence.

One of the innumerable science-groupie web-sites expands on the aerosol speculation:
“40’s to 70’s cooling, CO2 rising?
This is a fascinating denialist argument. If CO2 is rising, as it was in the 40’s through the 70’s, why would there be cooling?
It’s important to understand that the climate has warmed and cooled naturally without human influence in the past. Natural cycle, or natural variability need to be understood if you wish to understand what modern climate forcing means. In other words modern or current forcing is caused by human industrial output to the atmosphere. This human-induced forcing is both positive (greenhouse gases) and negative (sulfates and aerosols).”

Fair enough; but the models fail to take account of natural cycles.
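To make concrete what “a model” with built-in assumptions looks like, here is a deliberately trivial zero-dimensional energy-balance sketch in Python (my illustration, not any climate group’s actual code). Even in a dozen lines, the answer is dictated by hand-picked parameters (albedo, an effective emissivity tuned to reproduce today’s temperature, an assumed forcing), and the same is true, multiplied a thousandfold, of the official models:

```python
# Toy zero-dimensional energy-balance "climate model": absorbed sunlight
# plus any extra forcing balances outgoing infrared radiation.
# Every number below is an assumption baked into the model.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0     # solar constant, W m^-2

def equilibrium_temp(albedo=0.30, emissivity=0.612, extra_forcing=0.0):
    """Global-mean surface temperature (K) at radiative equilibrium.
    emissivity=0.612 is tuned so the baseline comes out near 288 K."""
    absorbed = SOLAR * (1.0 - albedo) / 4.0 + extra_forcing
    return (absorbed / (emissivity * SIGMA)) ** 0.25

baseline = equilibrium_temp()
# 3.7 W m^-2 is the commonly quoted forcing for doubled CO2 (an assumption).
perturbed = equilibrium_temp(extra_forcing=3.7)
print(f"baseline: {baseline:.1f} K, with forcing: {perturbed:.1f} K, "
      f"warming: {perturbed - baseline:.2f} K")   # roughly 1 K
```

Change the albedo or the emissivity by a few percent and the “projection” changes by more than the effect being sought; deciding which processes to parameterize, and how, is exactly where the assumptions of the full-scale models hide.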

Rewriting history
The Soviet Union had an official encyclopedia that was revised as needed, for example by rewriting history to delete or insert people and events to correspond with a given day’s political correctness. Some climate-change enthusiasts also try to rewrite history: “There was no scientific consensus in the 1970s that the Earth was headed into an imminent ice age. Indeed, the possibility of anthropogenic warming dominated the peer-reviewed literature even then” (3). Compare that with the host of reproduced and cited headlines from those cold decades, when media alarms were set off by what indeed was then the “scientific consensus” (4). And the cooling itself was, of course, real, as is universally acknowledged nowadays.

The media faithfully report what officialdom disseminates. Routinely, any “extreme” weather event is ascribed to climate change — anything worth featuring as “breaking news”, say tsunamis, hurricanes, bushfires in Australia and elsewhere. But the actual data reveal no increase in extreme events in recent decades: not Atlantic storms, nor Australian cyclones, nor US tornadoes, nor “global tropical cyclone accumulated energy”, nor extremely dry periods in the USA, in the last 150 years during which atmospheric carbon dioxide increased by 40% (pp. 46-51 in (1)). Nor have sea levels been rising in any unusual manner (Chapter 6 in (1)).

Defenders of climate-change dogma tie themselves in knots about whether carbon dioxide has already affected climate, whether its influence is to be seen in short-term changes or only over the long term. For instance, the attempt to explain 1940s-70s cooling presupposes that CO2 is only to be indicted for changes over much longer time-scales than mere decades. Perhaps the ultimate demonstration of wanting to have it both ways — only long-term, but also short-term — is illustrated by a pamphlet issued jointly by the Royal Society of London and the National Academy of Science of the USA (5, 6).

No warming since about 1998
Some official sources deny that there has been any cessation of warming in the new century or millennium. Others admit it indirectly by attempting to explain it away or dismiss it as irrelevant, for instance “slowdowns and accelerations in warming lasting a decade or more will continue to occur. However, long-term climate change over many decades will depend mainly on the total amount of CO2 and other greenhouse gases emitted as a result of human activities” (p. 2 in (5)); “shorter-term variations are mostly due to natural causes, and do not contradict our fundamental understanding that the long-term warming trend is primarily due to human-induced changes in the atmospheric levels of CO2 and other greenhouse gases” (p. 11 in (5)).

Obfuscating and misdirecting
The Met Office, the UK’s National Meteorological Service, is very deceptive about the recent lack of warming:

“Should climate models have predicted the pause?
Media coverage … of the launch of the 5th Assessment Report of the IPCC has again said that global warming is ‘unequivocal’ and that the pause in warming over the past 15 years is too short to reflect long-term trends.

[No one disputes the reality of long-term global warming — the issue is whether natural forces are responsible as opposed to human-generated carbon dioxide]

… some commentators have criticised climate models for not predicting the pause. …
We should not confuse climate prediction with climate change projection. Climate prediction is about saying what the state of the climate will be in the next few years, and it depends absolutely on knowing what the state of the climate is today. And that requires a vast number of high quality observations, of the atmosphere and especially of the ocean.
On the other hand, climate change projections are concerned with the long view; the impact of the large and powerful influences on our climate, such as greenhouse gases.

[Implying sneakily and without warrant that natural forces are not “large and powerful”. That is quite wrong and it is misdirection, the technique used by magicians to divert attention from what is really going on. By far the most powerful force affecting climate is the energy coming from the sun.]

Projections capture the role of these overwhelming influences on climate and its variability, rather than predict the current state of the variability itself.
The IPCC model simulations are projections and not predictions; in other words the models do not start from the state of the climate system today or even 10 years ago. There is no mileage in a story about models being ‘flawed’ because they did not predict the pause; it’s merely a misunderstanding of the science and the difference between a prediction and a projection.
[Misdirection again. The IPCC models failed to project or predict the lack of warming since 1998, and also the cooling of three decades after 1940. The point is that the models are inadequate, so neither predictions nor projections should be believed.]

… the deep ocean is likely a key player in the current pause, effectively ‘hiding’ heat from the surface. Climate model projections simulate such pauses, a few every hundred years lasting a decade or more; and they replicate the influence of the modes of natural climate variability, like the Pacific Decadal Oscillation (PDO) that we think is at the centre of the current pause.
[Here is perhaps the worst instance of misdirection. The “Climate model projections” claimed to “simulate such pauses, a few every hundred years lasting a decade or more” are not made with the models that project alarming human-caused global warming; they are ad hoc models exploring the possible effects of variables not taken into account in the overall climate models.]”

The projections that indict carbon dioxide as the cause of climate change — projections which the media, like anyone relying on ordinary English usage, fail to distinguish from predictions — are based on models that do not incorporate the possible effects of deep-ocean “hidden heat” or of such natural cycles as the Pacific Decadal Oscillation. Those factors, and others such as aerosols, are invoked only when trying to explain why the climate models are wrong; and that is the crux of the matter: the climate models are wrong.

Asserting that uncertainty equals certainty
The popular media faithfully and uncritically relayed from the most recent official report the claim that “Scientists are 95% certain that humans are responsible for the ‘unprecedented’ warming experienced by the Earth over the last few decades”.

Leave aside that the warming cannot be known to be “unprecedented” — global temperatures have been much higher in the past, and historical data are not fine-grained enough to compare rates of warming over such short time-spans as mere decades or centuries.

There is no such thing as “95% certainty”.
Certainty means 100%; anything else is a probability, not a certainty.
A probability of 95% may seem very impressive — until it is translated into its corollary: 5% probability of being wrong; and 5% is 1 in 20. I wouldn’t bet on anything that’s really important to me if there’s 1 chance in 20 of losing the bet.
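To see how thin that margin is, here is a minimal illustrative sketch, in Python, of translating a stated “certainty” into betting odds. Only the 95% figure comes from the official report; the function and the 99% comparison are my own.

```python
# Minimal illustrative sketch: turning a stated "confidence" into the odds
# of being wrong. Only the 95% figure comes from the official report;
# the function and the other input are hypothetical.

def odds_of_being_wrong(stated_confidence: float) -> str:
    """Express the complement of a probability as '1 in N' odds."""
    p_wrong = 1.0 - stated_confidence
    return f"1 in {round(1.0 / p_wrong)}"

print(odds_of_being_wrong(0.95))  # "1 in 20": the official 95% certainty
print(odds_of_being_wrong(0.99))  # "1 in 100": closer to what a bet that matters would demand
```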
So too with the frequent mantra that 97% or 98% of scientists, or some other superficially impressive percentage, support the “consensus” that global warming is owing to carbon dioxide (7):

“Depending on exactly how you measure the expert consensus, it’s somewhere between 90% and 100% that agree humans are responsible for climate change, with most of our studies finding 97% consensus among publishing climate scientists.”

In other words, 3% (“on average”) of “publishing climate scientists” disagree. And the history of science teaches unequivocally that even a 100% scientific consensus has in the past been wrong, most notably on the most consequential matters, those that advanced science spectacularly in what are often called “scientific revolutions” (8).
Furthermore, counting only “publishing climate scientists” tips the scales considerably, because peer review ensures that dissenting evidence and claims do not easily get published. In any case, those percentages are based on surveys with inevitable flaws (sampling bias, for instance, operating again through peer review). The central question is, “How convinced are you that most recent and near future climate change is, or will be, the result of anthropogenic causes?” On that question, the “consensus” was only between 33% and 39%, showing that “the science is NOT settled” (9; emphasis in original).

Science groupies — unquestioning accepters of “the consensus”
The media and countless individuals treat the climate-change consensus dogma as Gospel Truth, leading to such extraordinary proposals as that of Philippe Sands QC, Professor of Law, that “False claims from climate sceptics that humans are not responsible for global warming and that sea level is not rising should be scotched by an international court ruling”.

I would love to see a court take up the issue: it would force defenders of the orthodox view to try to explain away all the data demonstrating that global warming and climate change are not driven primarily by carbon dioxide.

The central point

Official alarms and established scientific institutions rely not on empirical data, established facts about temperature and CO2, but on computer models that are demonstrably wrong.

Those of us who believe that science should be empirical, that it should follow the data and change theories accordingly, become speechless in the face of climate-change dogma defended in the manner described above. It would be screamingly funny, if only those who do it were not our own “experts” and official representatives (10). Even the Gods are helpless in the face of such determined ignoring of reality (11).

___________________________________

(1) For example, chapter 10 in Howard Thomas Brady, Mirrors and Mazes, 2016; ISBN 978-1522814689. For a more general argument that models are incapable of accurately simulating complex natural processes, see O. H. Pilkey & L. Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future, Columbia University Press, 2007
(2) “40’s to 70’s cooling, CO2 rising?”
(3) Thomas C. Peterson, William M. Connolley & John Fleck, “The myth of the 1970s global cooling scientific consensus”, Bulletin of the American Meteorological Society, September 2008, 1325–37
(4) “History rewritten, Global Cooling from 1940–1970, an 83% consensus, 285 papers being ‘erased’”; 1970s Global Cooling Scare; 1970s Global Cooling Alarmism
(5) Climate Change: Evidence & Causes — An Overview from the Royal Society and the U.S. National Academy of Sciences, National Academies Press; ISBN 978-0-309-30199-2
(6) Relevant passages of (5) are cited in a review: Henry H. Bauer, “Climate-change science or climate-change propaganda?”, Journal of Scientific Exploration, 29 (2015) 621–36
(7) The 97% consensus on global warming
(8) Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970; Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596–602; Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84–93; Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002
(9) Dennis Bray, “The scientific consensus of climate change revisited”, Environmental Science & Policy, 13 (2010) 340–50; see also Joseph Bast & Roy Spencer, “The myth of the Climate Change ‘97%’”, Wall Street Journal, 27 May 2014, p. A13
(10) My mother’s frequent repetitions engraved in my mind the German folk-saying “Wenn der Narr nicht mein wär’, lacht’ ich mit” (“If the fool weren’t mine, I’d be laughing too”). Google found it in the Deutsches Sprichwörter-Lexikon edited by Karl Friedrich Wilhelm Wander (#997, p. 922)
(11) “Mit der Dummheit kämpfen Götter selbst vergebens” (“Against stupidity the gods themselves contend in vain”); Friedrich Schiller, Die Jungfrau von Orleans


Posted in consensus, denialism, global warming, media flaws, peer review, resistance to discovery, science is not truth, science policy, scientism, unwarranted dogmatism in science | 6 Comments »

The banality of evil — Psychiatry and ADHD

Posted by Henry Bauer on 2017/04/25

“The banality of evil” is a phrase coined by Hannah Arendt when writing about the trial of Adolf Eichmann, who had supervised much of the Holocaust. The phrase has been much misinterpreted and misunderstood. Arendt was pointing to the banality of Eichmann, who “had no motives at all” other than “an extraordinary diligence in looking out for his personal advancement”; he “never realized what he was doing … sheer thoughtlessness … [which] can wreak more havoc than all the evil instincts” (1). There was nothing interesting about Eichmann. Applying Wolfgang Pauli’s phrase, Eichmann was “not even wrong”: one can learn nothing from him other than that evil can result from banality, from thoughtlessness. As the saying commonly attributed to Edmund Burke has it, “The only thing necessary for the triumph of evil is for good men to do nothing” — and not thinking is a way of doing nothing.

That train of thought becomes quite uncomfortable with the realization that sheer thoughtlessness nowadays pervades so much of the everyday practices of science, medicine, psychiatry. Research simply — thoughtlessly — accepts contemporary theory as true, and pundits, practitioners, teachers, policy makers all accept the results of research without stopping to think about fundamental issues, about whether the pertinent contemporary theories or paradigms make sense.

Psychiatrists, for example, prescribe Ritalin and other stimulants as treatment for ADHD — Attention-Deficit/Hyperactivity Disorder — without stopping to think about whether ADHD is even “a thing” that can be defined and diagnosed unambiguously (or even at all).

The official manual, which one presumes psychiatrists and psychologists consult when assigning diagnoses, is the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, now (since 2013) in its 5th edition (DSM-5). DSM-5 has been quite widely criticized, including by such prominent psychiatrists as Allen Frances who led the task force for the previous, fourth, edition (2).

Even casual acquaintance with the contents of this supposedly authoritative DSM-5 makes it obvious that criticism is more than called for. In DSM-5, the Diagnostic Criteria for ADHD are set down in five sections, A-E.

A: “A persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development, as characterized by (1) and/or (2):
     1.   Inattention: Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities:
           Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.     Often fails to give close attention to details or makes careless mistakes in schoolwork, at work, or during other activities (e.g., overlooks or misses details, work is inaccurate).
b.     Often has difficulty sustaining attention in tasks or play activities (e.g., has difficulty remaining focused during lectures, conversations, or lengthy reading).”
and so on through c-i, for a total of nine asserted characteristics of inattention.

Paying even cursory attention to these “criteria” makes plain that they are anything but definitive. Why, for example, are six symptoms required up to age 16 when five are sufficient at 17 years and older? There is nothing clear-cut about “inconsistent with developmental level”, which depends on personal judgment about both the consistency and the level of development. Different people, even different psychiatrists no matter how trained, are likely to judge inconsistently in any given case whether the attention paid (point “a”) is “close” or not. So too with “careless”, “often”, “difficulty”; and so on.

It is, if anything, even worse with Criterion A(2):

“2.    Hyperactivity and Impulsivity:
Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities:
       Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.    Often fidgets with or taps hands or feet or squirms in seat.”
and so on through b–i, for again a total of nine supposed characteristics, this time of hyperactivity and impulsivity. There is no need to cite the rest, since “a” alone amply reveals the absurdity of designating as a symptom of mental disorder a type of behavior that is perfectly normal for the majority of young boys. This “criterion” makes self-explanatory the reported finding that boys are three times more likely than girls to be diagnosed with ADHD, though experts make heavier weather of it by suggesting that sex hormones may be among the unknown causes of ADHD (3).

A(1) and A(2) are followed by:
“B. Several inattentive or hyperactivity-impulsivity symptoms were present prior to age 12 years.
C. Several inattentive or hyperactivity-impulsivity symptoms are present in two or more settings (e.g., at home, school, or work; with friends or relatives; in other activities).
D. There is clear evidence that the symptoms interfere with, or reduce the quality of, social, academic, or occupational functioning.
E. The symptoms do not occur exclusively during the course of schizophrenia or another psychotic disorder and are not better explained by another mental disorder (e.g., mood disorder, anxiety disorder, dissociative disorder, personality disorder, substance intoxication or withdrawal).”

It should be plain enough that this set of so-called criteria is not based on any definitive empirical data, as a simple thought experiment shows: what clinical (or any other sort of) trial could establish by observation that six symptoms are diagnostic up to age 17 whereas five are decisive from that age on? What if the decisive symptoms were apparent for only five months rather than six, or for five-and-three-quarters months? How remarkable, too, that “inattention” and “hyperactivity and impulsivity” are each characterized by exactly nine possible symptoms.
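To make the arbitrariness concrete, here is a minimal sketch, in Python, of the counting rule that Criterion A amounts to. The thresholds and durations are those quoted above from DSM-5; the function and its inputs are purely illustrative, not anything published by the American Psychiatric Association.

```python
# A sketch of DSM-5 Criterion A for ADHD, reduced to the counting rule it
# amounts to. Thresholds and durations are as quoted above; everything else
# is illustrative structure.

def meets_criterion_a(age: int,
                      inattention_count: int,     # how many of the nine listed behaviors
                      hyperactivity_count: int,   # how many of the nine listed behaviors
                      months_persisted: float) -> bool:
    threshold = 5 if age >= 17 else 6    # six symptoms required at 16, but five at 17
    if months_persisted < 6:             # six months qualifies; 5.75 months does not
        return False
    return inattention_count >= threshold or hyperactivity_count >= threshold

print(meets_criterion_a(16, 5, 0, 7.0))   # False: five symptoms at age 16
print(meets_criterion_a(17, 5, 0, 7.0))   # True: the same five symptoms at age 17
print(meets_criterion_a(10, 6, 0, 5.75))  # False: the same behavior, a week too briefly
```

Written out as the algorithm it is, the criteria expose their own discontinuities: nothing observable changes at a 17th birthday or at the six-month mark.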

Leaving aside the deplorable thoughtlessness of the substantive content of DSM-5, it is also saddening that something published by an authoritative medical society should reflect such carelessness or thoughtlessness in presentation. Competent copy-editing would have helped, for example by eliminating the many instances of “and/or”: “this ungraceful phrase … has no right to intrude in ordinary prose” (4) since just “or” would do nicely; if, for instance, I tell you that I’ll be happy with A or with B, obviously I’ll be perfectly happy also if I get both.
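The logical point is simply the truth table of inclusive “or”, which is already true when both operands hold; “and/or” therefore adds nothing. A trivial illustration (in Python, my own, not from any source):

```python
# Truth table of inclusive "or": it is already True when both A and B hold,
# so "and/or" is redundant; plain "or" covers the "and" case.
for a in (False, True):
    for b in (False, True):
        print(f"A={a}, B={b}: A or B -> {a or b}")
```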
Good writing and proper syntax are not mere niceties; their absence indicates a lack of clear substantive thought about what is being written, as Richard Mitchell (“The Underground Grammarian”) liked to illustrate by quoting Ben Jonson: “Neither can his Mind be thought to be in Tune, whose words do jarre; nor his reason in frame, whose sentence is preposterous”.

At any rate, ADHD is obviously an invented condition that has no clearly measurable characteristics. Assigning that diagnosis to any given individual is an entirely subjective, personal judgment. That this has been done for some large number of individuals strikes me as an illustration of the banality of evil. Countless parents have been told that their children have a mental illness when they are behaving just as children naturally do. Countless children have been fed mind-altering drugs as a consequence of such a diagnosis. Some number have been sent to special schools like Eagle Hill, where annual tuition and fees can add up to $80,000 or more.

Websites claim to give information that is patently unfounded or wrong, for example:

“Researchers still don’t know the exact cause, but they do know that genes, differences in brain development and some outside factors like prenatal exposure to smoking might play a role. … Researchers looking into the role of genetics in ADHD say it can run in families. If your biological child has ADHD, there’s a one in four chance you have ADHD too, whether it’s been diagnosed or not. … Some external factors affecting brain development have also been linked to ADHD. Prenatal exposure to smoke may increase your child’s risk of developing ADHD. Exposure to high levels of lead as a toddler and preschooler is another possible contributor. … It’s a brain-based biological condition”.

Those who establish such websites simply follow thoughtlessly, banally, what the professional literature says; and some number of academics strive assiduously to ensure the persistence of this misguided parent-scaring and children-harming. For example, by claiming that certain portions of the brains of ADHD individuals are characteristically smaller:

“Subcortical brain volume differences in participants with attention deficit hyperactivity disorder in children and adults: a cross-sectional mega-analysis” by Martine Hoogman et al., published in Lancet Psychiatry (2017, vol. 4, pp. 310–19). The “et al.” stands for 81 co-authors, 11 of whom declared conflicts of interest with pharmaceutical companies. The conclusions are stated dogmatically: “The data from our highly powered analysis confirm that patients with ADHD do have altered brains and therefore that ADHD is a disorder of the brain. This message is clear for clinicians to convey to parents and patients, which can help to reduce the stigma that ADHD is just a label for difficult children and caused by incompetent parenting. We hope this work will contribute to a better understanding of ADHD in the general public”.

An extensive and detailed critique of this article has been submitted to the journal as a basis for retraction: “Lancet Psychiatry Needs to Retract the ADHD-Enigma Study” by Michael Corrigan & Robert Whitaker. The critique points to a large number of methodological failings, including that the data were accumulated from a variety of other studies with no evidence that diagnoses of ADHD were consistent or that controls were properly chosen or available — which ought in itself to have been sufficient reason to refuse publication.

Perhaps worst of all: nowhere in the article is IQ mentioned; yet the Supplementary Material contains a table revealing that the “ADHD” subjects had on average higher IQ scores than the “normal” controls. “Now the usual assumption is that ADHD children, suffering from a ‘brain disorder,’ are less able to concentrate and focus in school, and thus are cognitively impaired in some way. … But if the mean IQ score of the ADHD cohort is higher than the mean score for the controls, doesn’t this basic assumption need to be reassessed? If the participants with ADHD have smaller brains that are riddled with ‘altered structures,’ then how come they are just as smart as, or even smarter than, the participants in the control group?”

[The Hoogman et al. article in many places refers to “(appendix)” for details, but the article — which costs $31.50 — does not include an appendix; one must get it separately from the author or the journal.]

As usual, the popular media simply parroted the study’s claims, as illustrated by the headlines cited in the critique.

And so the media’s thoughtless acceptance of anything published in an established, peer-reviewed journal helps to make this particular evil banal. The public, parents included, are further confirmed in the misguided, unproven notion that something is wrong with the brains of children who have been given a diagnosis that is no more than a highly subjective opinion.

The deficiencies of this article also illustrate why those of us who have published in peer-reviewed journals know how absurd it is to regard “peer review” as any sort of guarantee of quality, or even of minimal standards of competence and honesty. As Richard Horton, himself editor of The Lancet, has noted, “Peer review . . . is simply a way to collect opinions from experts in the field. Peer review tells us about the acceptability, not the credibility, of a new finding” (5).

The critique of the Hoogman article is just one of the valuable pieces at the Mad in America website. I also recommend highly Robert Whitaker’s books, Anatomy of an Epidemic and Mad in America.


(1) Hannah Arendt, Eichmann in Jerusalem — A Report on the Banality of Evil, Viking Press, 1964 (rev. & enlarged ed.). Quotes are at p. 134 of the PDF available at https://platypus1917.org/wp-content/uploads/2014/01/arendt_eichmanninjerusalem.pdf
(2) Henry H. Bauer, “The Troubles With Psychiatry — essay review of Saving Normal by Allen Frances and The Book Of Woe by Gary Greenberg”, Journal of Scientific Exploration, 29 (2015) 124–30
(3) Donald W. Pfaff, Man and Woman: An Inside Story, Oxford University Press, 2010: p. 147
(4) Modern American Usage (edited & completed by Jacques Barzun et al. from the work of Wilson Follett), Hill & Wang, 1966
(5) Richard Horton, Health Wars: On the Global Front Lines of Modern Medicine, New York Review Books, 2003, p. 306


Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, science is not truth, unwarranted dogmatism in science | Leave a Comment »
