Skepticism about science and medicine

In search of disinterested science


How science changed — IV. Cutthroat competition and outright fraud

Posted by Henry Bauer on 2018/04/15

The discovery of the structure of DNA was a metaphorical “canary in the coal mine”, warning of the intensely competitive environment that was coming to scientific activity. The episode illustrates in microcosm the seismic shift in the circumstances of scientific activity that started around the middle of the 20th century [1], the replacement of one set of unwritten rules by another set [2].
The structure itself was discovered by Watson and Crick in 1953, but it was only in 1968, with the publication of Watson’s personal recollections in The Double Helix, that attention was focused on how Watson’s approach and behavior marked a break from the traditional unwritten rules of scientific activity.
It took even longer for science writers and journalists to realize just how cutthroat the competition had become in scientific and medical research. Starting around 1980 there appeared a spate of books describing fierce fights for priority on a variety of specific topics:
•    The role of the brain in the release of hormones; Guillemin vs. Schally — Nicholas Wade, The Nobel Duel: Two Scientists’ 21-year Race to Win the World’s Most Coveted Research Prize, Anchor Press/Doubleday, 1981.
•    The nature and significance of a peculiar star-like object — David H. Clark, The Quest for SS433, Viking, 1985.
•    “‘Mentor chains’, characterized by camaraderie and envy, for example in neuroscience and neuropharmacology” — Robert Kanigel, Apprentice to Genius: The Making of a Scientific Dynasty, Macmillan, 1986.
•    High-energy particle physics, atom-smashers — Gary Taubes, Nobel Dreams: Power, Deceit, and the Ultimate Experiment, Random House, 1986.
•    “Soul-searching, petty rivalries, ridiculous mistakes, false results as rivals compete to understand oncogenes” — Natalie Angier, Natural Obsessions: The Search for the Oncogene, Houghton Mifflin, 1987.
•    “The brutal intellectual darwinism that dominates the high-stakes world of molecular genetics research” — Stephen S. Hall, Invisible Frontiers: The Race to Synthesize a Human Gene, Atlantic Monthly Press, 1987.
•    “How the biases and preconceptions of paleoanthropologists shaped their work” — Roger Lewin, Bones of Contention: Controversies in the Search for Human Origins, Simon & Schuster, 1987.
•    “The quirks of . . . brilliant . . . geniuses working at the extremes of thought” — Ed Regis, Who Got Einstein’s Office? Eccentricity and Genius at the Institute for Advanced Study, Addison-Wesley, 1987.
•    High-energy particle physics — Sheldon Glashow with Ben Bova, Interactions: A Journey Through the Mind of a Particle Physicist and the Matter of the World, Warner, 1988.
•    Discovery of endorphins — Jeff Goldberg, Anatomy of a Scientific Discovery, Bantam, 1988.
•    “Intense competition . . . to discover superconductors that work at practical temperatures” — Robert M. Hazen, The Breakthrough: The Race for the Superconductor, Summit, 1988.
•    Science is done by human beings — David L. Hull, Science as a Process, University of Chicago Press, 1988.
•    Competition to get there first — Charles E. Levinthal, Messengers of Paradise: Opiates and the Brain, Anchor/Doubleday, 1988.
•    “Political machinations, grantsmanship, competitiveness” — Solomon H. Snyder, Brainstorming: The Science and Politics of Opiate Research, Harvard University Press, 1989.
•    Commercial ambitions in biotechnology — Robert Teitelman, Gene Dreams: Wall Street, Academia, and the Rise of Biotechnology, Basic Books, 1989.
•    Superconductivity, intense competition — Bruce Schechter, The Path of No Resistance: The Story of the Revolution in Superconductivity, Touchstone (Simon & Schuster), 1990.
•    Sociological drivers behind scientific progress, and a failed hypothesis — David M. Raup, The Nemesis Affair: A Story of the Death of Dinosaurs and the Ways of Science, Norton, 1999.

These titles illustrate that observers found intense competitiveness wherever they looked in science, though mostly in medical or biological research, with physics (including astronomy) the next most frequently mentioned field.
Watson’s memoir had not only featured competition most prominently, it had also revealed that older notions of ethical behavior no longer applied: Watson was determined to get access to competitors’ results even if those competitors were not yet anxious to reveal all to him [3]. It was not only competitiveness that increased steadily over the years; so too did the willingness to engage in behavior that not so long before had been regarded as improper.
Amid the spate of books about how competitive research had become, there also appeared Betrayers of the Truth: Fraud and Deceit in the Halls of Science by science journalists William Broad and Nicholas Wade (Simon & Schuster, 1982). This book argued that dishonesty has always been present in science, citing in an appendix 33 “known or suspected” cases of scientific fraud from 1981 back to the 2nd century BC. Those data could not support the book’s sweeping generalizations [4], but Broad and Wade were very early to draw attention to the fact that dishonesty in science was a significant problem. What they failed to appreciate was why: not that there had always been a notable frequency of fraud in science, but that scientific activity was changing in ways that were making it a different kind of thing from what it had been during the halcyon few centuries of modern science, from the 17th century to the middle of the 20th.
Research misconduct had featured in Congressional Hearings as early as 1981. Soon the Department of Health and Human Services established an Office of Scientific Integrity, now the Office of Research Integrity. Its mission is to instruct research institutions about preventing fraud and dealing with allegations of it. Scientific periodicals began to ask authors to disclose conflicts of interest, and co-authors to state specifically what portions of the work were their individual responsibility.
Academe has spawned a proliferation of Centers for Research and Medical Ethics [5], and there are now periodicals devoted entirely to such matters [6]. Courses in research ethics have become increasingly common; indeed, institutions that receive research funds from federal agencies are required to make such courses available.
In 1989, the Committee on the Conduct of Science of the National Academy of Sciences issued the booklet On Being a Scientist, which describes proper behavior; that booklet’s 3rd edition, titled A Guide to Responsible Conduct in Research, makes even clearer that the problem of scientific misconduct is now widely seen as serious.
Another indication that dishonesty has increased is the now-frequent retraction of published research reports: Retraction Watch estimates that 500–600 published articles are retracted annually. John Ioannidis has made a specialty of assessing the reliability of the published literature, most famously in “Why most published research findings are false” [7]. Nature has an archive devoted to this phenomenon [8].
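
Ioannidis’s central point reduces to simple arithmetic about the positive predictive value of a nominally “significant” finding. The sketch below implements the basic formula from his 2005 paper (without the bias term); the particular values plugged in for prior odds, alpha, and power are illustrative assumptions, not data.

# Positive predictive value (PPV) of a nominally significant finding,
# after Ioannidis (2005), PLoS Medicine 2:e124; basic model, no bias term.
def ppv(prior_odds: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """prior_odds (R in the paper): ratio of true to false hypotheses tested."""
    true_positives = power * prior_odds   # (1 - beta) * R
    false_positives = alpha               # false hypotheses flagged by chance
    return true_positives / (true_positives + false_positives)

# Illustrative: in an exploratory field where only 1 hypothesis in 10 is true,
# a "positive" result is more likely true than false only when studies
# are well powered.
print(round(ppv(prior_odds=0.1), 2))             # 0.62 at 80% power
print(round(ppv(prior_odds=0.1, power=0.2), 2))  # 0.29 at 20% power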

Researchers half a century ago would have been aghast at all this, disbelieving that science could become so untrustworthy. It has happened because science changed from an amateur avocation to a career that can bring fame and wealth [9]; because scientific activity changed from a cottage industry to a highly bureaucratic corporate industry, with pervasive institutional as well as individual conflicts of interest; and because researchers’ demands for support have far exceeded the available supply.

And as science changed, it drew academe along with it. More about that later.

===============================================

[1]    How science changed — III. DNA: disinterest loses, competition wins
[2]    How science has changed — II. Standards of Truth and of Behavior
[3]    The individuals Watson mentioned as getting him access corrected his recollections: they shared with him nothing that was confidential. The significant point remains that Watson had no such scruples.
[4]    See my review, “Betrayers of the truth: a fraudulent and deceitful title from the journalists of science”, 4S Review, 1 (#3, Fall) 17–23.
[5]    There is an Online Ethics Center for Engineering and Science. Physical centers have been established at: University of California, San Diego (Center for Ethics in Science and Technology); University of Delaware (Center for Science, Ethics and Public Policy); Michigan State University (Center for Ethics and Humanities in the Life Sciences); University of Notre Dame (John J. Reilly Center for Science, Technology, and Values).
[6]    Accountability in Research (founded 1989); Science and Engineering Ethics (1997); Ethics and Information Technology (1999); BMC Medical Ethics (2000); Ethics in Science and Environmental Politics (2001).
[7]    John P. A. Ioannidis, “Why Most Published Research Findings Are False”, PLoS Medicine, 2 (2005): e124. 
[8]    “Challenges in irreproducible research”
[9]    How science has changed: Who are the scientists?


Posted in conflicts of interest, fraud in medicine, fraud in science, funding research, media flaws, science is not truth, scientific culture, scientists are human

How science has changed: Who are the scientists?

Posted by Henry Bauer on 2018/04/07

Scientists are people who do science. Nowadays, scientists are people who work at science as a full-time occupation and who earn their living at it.
Science means studying and learning about the natural world, and human beings have been doing that since time immemorial; indeed, in a sense all animals do that, but humans have developed efficient means to transmit gained knowledge to later generations.
At any rate, there was science long before [1] there were scientists, full-time professional students of Nature. Our present-day store of scientific knowledge includes things that have been known for thousands of years. For example, from Mesopotamia (Babylon, Sumer) more than 4,000 years ago we still use base-60 mathematics for the number of degrees in the arcs of a circle (360) and the number of seconds in a minute and the number of minutes in an hour. We still cry “Eureka” (“I have found it!”) for a new discovery, as supposedly Archimedes did more than 2000 years ago when he recognized that immersing an object in water is an easy way to measure its volume (by the rise in the water level), and that a floating object displaces its own weight of water. The Islamic science of the Middle Ages has left its mark in language with, for instance, algebra or alchemy.
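As a small illustration of that base-60 legacy (a minimal sketch, for illustration only), converting a decimal angle into the degrees-minutes-seconds notation still in use today is just repeated base-60 arithmetic:

def to_dms(angle_deg: float) -> tuple[int, int, float]:
    # Both subdivisions, minutes and seconds of arc, are base-60,
    # a convention inherited from Babylonian mathematics.
    degrees = int(angle_deg)
    minutes_full = (angle_deg - degrees) * 60
    minutes = int(minutes_full)
    seconds = (minutes_full - minutes) * 60
    return degrees, minutes, seconds

print(to_dms(30.25))  # (30, 15, 0.0)
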
Despite those early pieces of science that are still with us today, most of what the conventional wisdom thinks it knows about science is based on what historians call “modern” science, which is generally agreed to have emerged around the 17th century in what is usually called The Scientific Revolution.
The most widely known bits of science are surely the most significant advances. Those are typically associated with the names of people who either originated them or made them popular [2]; so many school-children hear about Archimedes and perhaps Euclid and Ptolemy; and for modern science, even non-science college students are likely to hear of Galileo and Newton and Darwin and Einstein. Chemistry students will certainly hear about Lavoisier and Priestley and Wöhler and Haber; and so on, just as most of us have learned about general history in terms of the names of important individuals. So far as science is concerned, most people are likely to gain the general impression that it has been done and is being done by a relatively small number of outstanding individuals, geniuses in fact. That impression could only be entrenched by the common thought-bite that “science” overthrew “religion” sometime in the 19th century, leading to the contemporary role of science as society’s ultimate arbiter of true knowledge.
The way in which scientists in modern times have been featured in books and in films also gives the impression that scientists are somehow special, that they are by no means ordinary people. Roslynn Haynes [3] identified several stereotypes of scientists, for example “adventurer” or “the noble scientist as hero or savior of society”, with most stereotypes however being less than favorable — “mad, bad, dangerous scientist, unscrupulous in the exercise of power”. But no matter whether good or bad in terms of morals or ethics, society’s stereotype of “scientist” is “far from an ordinary person”.
That is accurate enough for the founders of modern science, but it became progressively less true as more and more people came to take part in some sort of scientific activity. Real change began in the early decades of the 19th century, when the term “scientist” seems to have been used for the first time [4].
By the end of the 19th century it had become possible to earn a living through being a scientist, through teaching or through doing research that led to commercially useful results (as in the dye-stuff industry) or through doing both in what nowadays are called research universities. By the early 20th century, scientists no longer deserved to be seen as outstanding individual geniuses, but they were still a comparatively elite group of people with quite special talents and interests. Nowadays, however, there is nothing distinctly elite about being a scientist. In terms of numbers (in the USA), scientists at roughly 2.7 million are comparable to engineers at 2.1 million (in ~2001), less elite than lawyers (~ 1 million) or doctors (~800,000); and teachers, at ~3.5 million, are almost as elite as scientists.
Nevertheless, so far as the general public and the conventional wisdom are concerned, there is still an aura of being special and distinctly elite associated with science and being a scientist, no doubt because science is so widely acknowledged as the ultimate authority on what is true about the workings of the natural world; and because “scientist” brings to most minds someone like Darwin or Einstein or Galileo or Newton.
So the popular image of scientists is wildly wrong about today’s world. Scientists today are unexceptional white-collar workers. Certainly a few of them could still be properly described as geniuses, just as a few engineers or doctors could be — or those at the high tail-end of any distribution of human talent; but by and large, there is nothing exceptional about scientists nowadays. That is an enormous change from times past, and the conventional wisdom has not begun to be aware of that change.
One aspect of that change is that the first scientists were amateurs seeking to satisfy their curiosity about how the world works, whereas nowadays scientists are technicians or technical experts who do what they are told to do by employers or enabled to do by patrons. A very consequential corollary is that the early scientists had nothing to gain by being untruthful, whereas nowadays the rewards potentially available to prominent scientists have tempted a significant number to practice varying degrees of dishonesty.
Another way of viewing the change that science and scientists have undergone is that science used to be a cottage industry largely self-supported by independent entrepreneurial workers, whereas nowadays science is a corporate behemoth whose workers are apparatchiks, cogs in bureaucratic machinery; and in that environment, individual scientists are subject to conflicts of interest and a variety of pressures owing to their membership in a variety of groups.

Science today is not a straightforward seeking of truth about how the world works; and claims emerging from the scientific community are not necessarily made honestly; and even when made honestly, they are not necessarily true. More about those things in future posts.

=======================================

[1]    For intriguing tidbits about pre-scientific developments, see “Timeline Outline View”
[2]    In reality, most discoveries hinge on quite a lot of work and learning that prefigured them and made them possible, as discussed for instance by Tony Rothman in Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). That what matters most is not the act of discovery but the making widely known is the insight embodied in Stigler’s Law, that discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II: 39 [1980] 147–58)
[3]    Roslynn D. Haynes, From Faust to Strangelove: Representations of the Scientist in Western Literature, Johns Hopkins University Press, 1994; also “Literature has shaped the public perception of science”, The Scientist, 12 June 1989, pp. 9, 11
[4]    William Whewell is usually credited with coining the term “scientist” in the early 1830s

Posted in conflicts of interest, fraud in science, funding research, media flaws, peer review, science is not truth, scientific culture, scientists are human

Denialism and pseudo-science

Posted by Henry Bauer on 2018/03/31

Nowadays, questioning whether HIV causes AIDS, or whether carbon dioxide causes global warming, is often deplored and attacked as “denialism” or pseudo-science. Yet questioning those theories is perfectly good, normal science.

Science is many things, including a human activity, an institution, an authority, but most centrally science means knowledge and understanding. Pseudo-science correspondingly means false claims dressed up as though they were reliable, genuine science. Denialism means refusing to believe what is unquestionably known to be true.

Knowledge means facts; understanding means theories or interpretations; and an essential adjunct to both is methodology, the means by which facts can be gathered.

There is an important connection not only between methods and facts but also between facts and theories: Un-interpreted facts carry no meaning. They are made meaningful only when connected to a conceptual framework, which is inevitably subjective. That is typically illustrated by diagrams where the facts consist of black and white lines and areas whose meaning depends on interpretations by the viewer. Different observers offer different interpretations.

The meanings of these facts — black-and-white lines and areas — are supplied by the viewer:
A young lady with extravagant hair treatment facing left — OR an old crone looking downwards;
A duck facing left OR a rabbit facing right;
Twin black profiles looking at one another OR a white vase.

In science, researchers often differ over the interpretation of the evidence: the facts are not disputed but different theories are offered to explain them.

At any rate, in considering what science can tell us we need to consider the three facets of science: facts, methods, and theories [1]. Normal scientific activity is guided by established theories and applies established methods to enlarge the range of factual knowledge.
Every now and again, something unconventional and unforeseen turns up in one of those three facets of science. It might be a new interpretation of existing facts, as in the theory of relativity; or it might be the application of a novel method, as in radio-astronomy; or it might be the observation of previously unsuspected happenings, facts, for instance that atoms are not eternally stable and sometimes decompose spontaneously. When something of that sort happens, it is often referred to later as having been a scientific revolution, overturning what had been taken for granted in one facet of science while remaining content with what was taken for granted in the other two facets.
The progress of science can be viewed as revolutions in facts, or in methods that then yield possibly revolutionary facts, followed eventually by minor or major revisions of theory. Over a sufficiently long time — say, the several centuries of modern (post-17th-century) science — the impression by hindsight is of continual accumulation of facts and improvement of methods; the periodic changes in theoretical perspective are all that tends to be remembered by anyone other than specialist historians of science.

(from “Why minority views should be listened to”)

The history of science also records episodes in which researchers proposed something novel simultaneously in two facets of science, for example when Gregor Mendel applied simple arithmetic to observations of plant breeding, an unprecedented methodology in biology that thereby uncovered entirely new facts. Another example might be the suggestion by Alfred Wegener in the early decades of the 20th century that the Earth’s continents must have moved, since the flora and fauna and geological formations are so alike on continents that are now far apart; making comparisons across oceans was an entirely novel methodology, and there was no theory to accommodate the possibility of continents moving. Episodes of that sort, where two of the three facets of science are unorthodox, have been labeled “premature science” by Gunther Stent [2]; the scientific community did not accept these suggestions for periods of several decades, until something more conventional showed that those unorthodox proposals had been sound.

When claims are made that do not fit with established theory or established methods or established facts, then those claims are typically dismissed out of hand and labeled pseudo-science. For example, claims of the existence of Loch Ness “monsters” involve unorthodox facts obtained by methods that are unorthodox in biology, namely eyewitness accounts, sonar echoes, photographs, and films, instead of the established way of certifying the existence of a species through the examination of an actual specimen; and the theory of evolution and the accepted fossil record have no place for the sort of creature that eyewitnesses describe.

In recent years it has become quite common to see dissent from established scientific theories referred to as “denialism”. The connotation of that term is not only that something is wrong but that it is reprehensibly wrong, that those who question the established view should know better, that it would be damaging to pay attention to them; moreover, that denying (for example) that HIV causes AIDS is as morally distasteful as denying the fact of the Holocaust in which millions of Jews, Gypsies, and others were killed.

As Google N-grams for “denialism” indicate, until the last couple of decades “denialism” meant denying historical facts of genocide or something like it.

In the 1930s, “denialism” was applied to the refusal to acknowledge the millions of deaths in the Soviet Union caused by enforcement of collectivized agriculture and associated political purges, for example the 1932-33 Ukraine famine [3]. Holocaust denial was prominent for a while around 1970 but then faded away from mention in books until it re-appeared in the late 1980s [4]. But soon “denialism” directed at questioning of HIV/AIDS theory and the theory of carbon-dioxide-induced global warming swamped all other applications of the term.
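
The word-frequency data behind such charts can be checked directly. Below is a minimal sketch that queries the Google Books Ngram Viewer’s JSON endpoint; that endpoint is unofficial and undocumented, and the parameter names (content, year_start, corpus, and so on) are assumptions mirrored from the public viewer’s URLs, so they may change without notice.

import json, urllib.parse, urllib.request

# Unofficial endpoint mirroring the public Ngram Viewer; may change.
params = urllib.parse.urlencode({
    "content": "denialism",
    "year_start": 1900,
    "year_end": 2019,
    "corpus": "en-2019",   # assumed corpus name, per current viewer URLs
    "smoothing": 3,
})
url = "https://books.google.com/ngrams/json?" + params
with urllib.request.urlopen(url) as resp:
    series = json.loads(resp.read().decode())

# Each entry carries a "timeseries" of relative word frequencies per year.
if series:
    print(series[0]["ngram"], series[0]["timeseries"][-5:])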


This recent usage of “denialism” is consciously and specifically intended to arouse the moral outrage associated with denial of genocides, as admitted (for example) by the South African jurist Edwin Cameron [5]. But those genocides are facts, proved beyond doubt by the records of deaths as well as remains and various artefacts at concentration camps. By contrast, so-called “AIDS denialism” and so-called “climate-change denialism” or “global warming denialism” are the questioning or disputing of theories, not facts.

That questioning, moreover, is perfectly consonant with normal science:
⇒⇒   On the matter of whether HIV causes AIDS, dissidents do not question anything about established methods of virology, and they do not claim that HIV tests fail to measure proteins, antibodies, and bits of genetic material; they merely assert that the results of HIV tests do not fit the theory that HIV is an infectious agent, and that the methods used in HIV/AIDS research are not sound methods for studying viruses, since they have not been verified against experiments with authentic pure HIV virions derived directly from HIV+ individuals or from AIDS patients (The Case against HIV).
⇒⇒   On the matter of whether the liberation of carbon dioxide by the burning of fossil fuels is the primary cause of global warming and climate change (Anthropogenic Global Warming, AGW; Anthropogenic Climate Change, ACC), those who question that theory do not question the facts about amounts of carbon dioxide present over time, and they do not question the changes that have taken place in temperatures; they merely point out that the known and accepted facts show that there have been periods during which carbon-dioxide levels were very high while temperatures were very low, and that during several periods when carbon-dioxide levels were increasing the Earth’s temperature was not increasing, or was perhaps even cooling [6]. Furthermore, those who question AGW point out that the prime evidence offered for the theory is no evidence at all, merely the outputs of computer models that are supposed to take into account all the important variables — even though they evidently do not, since those computer models do not reproduce the actual temperature changes that have been observed over many centuries.

Denialism means to deny something that is unquestionably true, but theories, interpretations, can never be known to be unquestionably true. Labeling as denialists those who question whether HIV causes AIDS, or those who question whether human-caused generation of carbon dioxide is the prime cause of global warming and climate change, is an attempt to finesse having to properly demonstrate the validity of those theories. Another attempt at such evasion is the oft-heard assertion that there is an “overwhelming consensus” on those matters. As Michael Crichton put it:
the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. . . . Consensus is invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E = mc². Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way [7].

When the assertion of consensus does not suffice, then the ad hominem tactic of crying “denialism” is invoked: the last refuge of intellectual scoundrels who cannot prove their case by evidence and logic.

=================================================
[1]    I first suggested this in “Velikovsky and the Loch Ness Monster: Attempts at demarcation in two controversies”, in a symposium on “The Demarcation between Science and Pseudo-Science” (ed. Rachel Laudan), published as Working Papers of the Center for the Study of Science in Society (VPI&SU), 2 (#1, April 1983) 87-106. The idea was developed further in The Enigma of Loch Ness: Making Sense of a Mystery (University of Illinois Press, 1986/88; reprint, Wipf & Stock, 2012; pp. 152-3); see also Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other Heterodoxies (University of Illinois Press, 2001); Science Is Not What You Think (McFarland, 2017)
[2]    Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84–93
[3]    Described as the Holodomor
[4]    Holocaust Denial Timeline
[5]    Edwin Cameron, Witness to AIDS, I. B. Tauris, 2005; see book review in Journal of Scientific Exploration, 20 (2006) 436-444
[6]    Climate-change facts: Temperature is not determined by carbon dioxide
[7]    Michael Crichton,  “Aliens cause global warming”, Caltech Michelin Lecture, 17 January 2003

 

Posted in consensus, denialism, global warming, media flaws, politics and science, science is not truth, science policy, scientific culture, scientific literacy, scientism, unwarranted dogmatism in science

The consensus against human causation of global warming and climate change

Posted by Henry Bauer on 2018/03/18

Anthropogenic Global Warming (AGW) is the theory that global warming is caused primarily by human actions that liberate carbon dioxide and other greenhouse gases; similarly, Anthropogenic Climate Change (ACC). Proponents of AGW/ACC like to claim that 97% of climate scientists agree and that the science is settled. Both those claims are factually incorrect.

How many dissenting individuals?

Tens of thousands of scientists as well as many informed observers dispute AGW/ACC, for example in the Oregon Petition or Global Warming Petition Project: “There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth”.

Similar points were made in the Leipzig Declaration, signed by dozens of prominent scientists and television meteorologists, and in several other public statements and petitions: the 1992 “Statement by atmospheric scientists on greenhouse warming” and the 1992 “Heidelberg Appeal”, circulated at the Rio de Janeiro Earth Summit (Heidelberg Appeal’s Anniversary – 4,000+ scientists, 70 Nobel Laureates).

Dissenting literature:
Scores of books and thousands of articles dispute AGW/ACC. Dunlap and Jacques list 108 such books published up to 2010 (“Climate change denial books and conservative think tanks: Exploring the connection”, American Behavioral Scientist, 57 [2013] 699–731). At least another ten books have been published since then; see the list at the end of this post.

Some “1350+ peer-reviewed papers supporting skeptic arguments against ACC/AGW alarmism” are listed on-line at http://www.populartechnology.net/2009/10/peer-reviewed-papers-supporting.html.

Selected blogs:
There are innumerable blogs about AGW/ACC. In a study of arguments over how polar bears are or are not being affected, 45 pro and 45 con blogs were identified (but not named) [1].
I recommend unreservedly two blogs:

Watts Up With That (WUWT), which is notable for being centrally concerned with evidence relating to weather and climate and having no political agenda or axe to grind; Anthony Watts is a meteorologist.

Climate Etc. too has no political agenda or axe to grind. Judith Curry is a geoscientist and climatologist, recently retired after a notably distinguished career [2]. She does not deny that human activity may contribute to global warming, but shows that proponents of AGW/ACC go far beyond the evidence in raising alarms about impending catastrophes just around the corner or already here.

The actual facts:
Actual data over the life of the Earth show that CO2 levels have often been higher than now during periods when temperatures were lower. Moreover, it seems that changes in temperature occur before changes in CO2 levels and not after. Global temperatures were cooling while CO2 levels were rising during ~1880-1910 and ~1940s-1970s. Since roughly the end of the 1990s, global temperatures have not increased significantly [3]. Popular media and many proponents of AGW/ACC deny that lack of significant warming of the last couple of decades, but it is acknowledged by the National Academy of Sciences of the USA and the Royal Society of London: in a jointly published pamphlet [4] they offer excuses intended to explain why this “pause” in warming does not disprove AGW/ACC.
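
The lead-lag question is, in principle, a simple statistical test. Below is a minimal sketch, on synthetic and avowedly hypothetical series rather than real climate data, of how one asks whether one series leads another by comparing correlations at different lags:

# Minimal sketch, hypothetical data only: testing whether series A
# leads series B by comparing correlations at different lags.
import numpy as np

rng = np.random.default_rng(0)
a = np.cumsum(rng.normal(size=500))                  # stand-in "leading" series
b = np.roll(a, 8) + rng.normal(scale=0.5, size=500)  # B lags A by 8 steps

def lag_correlation(x, y, lag):
    """Correlation of x[t] with y[t + lag]; positive lag means x leads y."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

lags = range(-20, 21)
best = max(lags, key=lambda k: lag_correlation(a, b, k))
print(best)  # ~8: A leads B, exactly as constructed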

As against these actual data, proponents of AGW/ACC rely on computer models that are obviously and patently inadequate because they are unable to retrodict (calculate even by hindsight) the historical temperature record.

Books arguing against AGW and ACC, published since 2010 and not listed by Dunlap & Jacques, American Behavioral Scientist, 57 (2013) 699–731:

2012:    Global Warming: Alarmists, Skeptics and Deniers; A Geoscientist Looks at the Science of Climate Change, G. Dedrick Robinson & Gene D. Robinson III, Moonshine Cove Publishing

2014:    The Deliberate Corruption of Climate Science, Tim Ball, Stairway Press

2015:    Climate Change: The Facts, J. Abbot et al. (24 contributors), Stockade Books

2015:    A Disgrace to the Profession, Mark Steyn, Stockade Books

2017:     Inconvenient Facts: proving Global Warming is a Hoax, Jack Madden, CreateSpace

2017:     Inconvenient Facts: The science that Al Gore doesn’t want you to know (audio book), Gregory Wrightstone, Blackstone Audio

2017:    Climate Change: The Facts, Jennifer Marohasy (ed.; 22 contributors), Connor Court Publishing

2018:    The Politically Incorrect Guide to Climate Change, Marc Morano, Regnery

2018:    The Climate Chronicles: Inconvenient Revelations You Won’t Hear from Al Gore — and Others, Joe Bastardi, CreateSpace

2018:    The Polar Blankets: The real power behind climate change, Rex Coffin, ISBN 978-1980416470 (independently published)

—————————————————————————–

[1]    “Internet blogs, polar bears, and climate-change denial by proxy”, by Jeffrey A. Harvey, Daphne van den Berg, Jacintha Ellers, Remko Kampen, Thomas W. Crowther, Peter Roessingh, Bart Verheggen, Rascha J. M. Nuijten, Eric Post, Stephan Lewandowsky, Ian Stirling, Meena Balgopal, Steven C. Amstrup & Michael E. Mann, BioScience, bix133, https://doi.org/10.1093/biosci/bix133 (published 29 November 2017).

[2]  “Judith Curry retires, citing ‘craziness’ of climate science”, Scott Waldman, Climatewire, 4 January, 2017

[3]  “Climate-change facts: Temperature is not determined by carbon dioxide”

[4]  Climate Change: Evidence & Causes — An Overview from the Royal Society and the U.S. National Academy of Sciences, National Academies Press, 2014; see critical review, “Climate-change science or climate-change propaganda?”, Journal of Scientific Exploration, 29 (2015) 621–636

 

Posted in consensus, denialism, global warming, media flaws, politics and science, science is not truth, science policy, unwarranted dogmatism in science

Where to turn for disinterested scientific knowledge and insight?

Posted by Henry Bauer on 2018/02/11

The “vicious cycle of wrong knowledge” illustrates the dilemma we face nowadays: Where to turn for disinterested scientific knowledge and insight?

In centuries past in the intellectual West, religious authorities had offered unquestionable truth. In many parts of the world, religious authorities or political authorities still do. But in relatively emancipated, socially and politically open societies, the dilemma is inescapable. We accept that religion doesn’t have final answers on everything about the natural world, even if we accept the value of religious teachings about how we should behave as human beings. Science, it seemed, knew what religion didn’t, about the age of the Earth, about the evolution of living things, about all sorts of physical, material things. So “science” became the place to turn for reliable knowledge. We entered the Age of Science (Knight, 1986). But most of us recognize that scientific knowledge cannot be absolutely and finally true because, ultimately, it rests on experience, on induction from observations, which can never be a complete reflection of the natural world; there remain always the known unknowns and the unknown unknowns.

Nevertheless, for practical purposes we want to be guided by the best current understanding that science can afford. The problem becomes, how to glean the best current understanding that science can offer?

Society’s knee-jerk response is to consult the scientific community: scientific associations, lauded scientists, government agencies, the scientific literature. What society hears, however, is not a disinterested analysis or filtering of what those sources say, because all of them conform to whatever the contemporary “scientific consensus” happens to be. And, as discussed earlier (Dangerous knowledge II: Wrong knowledge about the history of science), that consensus is inevitably fallible, though the conventional wisdom is not on guard against that possibility, largely because of misconceptions stemming from ignorance of the history of science.

The crux of the problem is that scientific knowledge and ideas that do not conform to the scientific consensus are essentially invisible in the public sphere. In any case, society has no mechanism for ensuring that what the scientific consensus holds at any given time is the most faithful, authoritative reflection of the available evidence and its logical interpretation. That represents a clear and present danger as “science” is increasingly turned to for advice on public policies, in an environment replete with claims of truth from many sides, from people or organizations claiming to speak for religion or for science, and from sophisticated advertisements by commercial and political groups.

In less politically partisan times, Congress and the administration had the benefit of the Office of Technology Assessment (OTA), founded in 1972 to provide policy makers with advice, as objective and up-to-date as possible, about technical issues; but OTA was disbanded in 1995 for reasons of partisan politics, and no substitute has been established. Society badly needs some authoritative, disinterested, non-partisan mechanism for analyzing, filtering, and interpreting scientific claims.

The only candidate so far on offer for that task is a Science Court, apparently first mooted half a century ago by Arthur Kantrowitz (1967) in the form of an “institute for scientific judgment”, soon named by others as a Science Court (Cavicchi 1993; Field 1993; Mazur 1993; Task Force 1993). Such a Court’s sole mission would be to assess the validity of conflicting contemporary scientific and technical claims and advice.

The need for such a Court is most obvious in the context of impassioned controversy in the public arena where political and ideological interests confuse and obfuscate the purely technical points, as for instance nowadays over global warming (A politically liberal global-warming skeptic?). Accordingly, a Science Court would need complete independence, for which the best available appropriate model is the United States Supreme Court. Indeed, perhaps a Science Court could be managed and supervised by the Supreme Court.

Many knotty issues besides independence present themselves in considering how a Science Court might function: the choice of judges or panels or juries; the choice of issues to take on; possibilities for appealing findings. For an extended discussion of such matters, see chapter 12 of Science Is Not What You Think and further sources given there. But the salient point is this:

Society needs but lacks an authoritative, disinterested, non-partisan mechanism for adjudicating conflicting scientific advice. A Science Court seems the only conceivable possibility.

———————————————————–

Jon R. Cavicchi, “The Science Court: A Bibliography”, RISK — Issues in Health and Safety, 4 [1993] 171–8.

Thomas G. Field, Jr., “The Science Court Is Dead; Long Live the Science Court!” RISK — Issues in Health and Safety, 4 [1993] 95–100.

Arthur Kantrowitz, “Proposal for an Institution for Scientific Judgment”, Science, 156 [1967] 763–4.

David Knight, The Age of Science, Basil Blackwell, 1986.

Allan Mazur, “The Science Court: Reminiscence and Retrospective”, RISK — Issues in Health and Safety, 4 [1993] 161–70.

Task Force of the Presidential Advisory Group on Anticipated Advances in Science and Technology, “The Science Court Experiment: An Interim Report”, RISK — Issues in Health and Safety, 4 [1993] 179–88

Posted in consensus, legal considerations, media flaws, politics and science, science is not truth, science policy, scientific culture, unwarranted dogmatism in science

Dangerous knowledge IV: The vicious cycle of wrong knowledge

Posted by Henry Bauer on 2018/02/03

Peter Duesberg, universally admired scientist, cancer researcher, and leading virologist, member of the National Academy of Sciences, recipient of a seven-year Outstanding Investigator Grant from the National Institutes of Health, was astounded when the world turned against him because he pointed to the clear fact that HIV had never been proven to cause AIDS and to the strong evidence that, indeed, no retrovirus could behave in the postulated manner.

Frederick Seitz, at one time President of the National Academy of Sciences and for some time President of Rockefeller University, became similarly non grata for pointing out that parts of an official report contradicted one another about whether human activities had been proven to be the prime cause of global warming (“A major deception on global warming”, Wall Street Journal, 12 June 1996).

A group of eminent astronomers and astrophysicists (among them Halton Arp, Hermann Bondi, Amitabha Ghosh, Thomas Gold, Jayant Narlikar) had their letter pointing to flaws in Big-Bang theory rejected by Nature.

These distinguished scientists illustrate (among many other instances involving less prominent scientists) that the scientific establishment routinely refuses to acknowledge evidence that contradicts contemporary theory, even evidence proffered by previously lauded fellow members of the elite establishment.

Society’s dangerous wrong knowledge about science includes the mistaken belief that science hews earnestly to evidence and that peer review — the behavior of scientists — includes considering new evidence as it comes in.

Not so. Refusal to consider disconfirming facts has been documented on a host of topics less prominent than AIDS or global warming: prescription drugs, Alzheimer’s disease, extinction of the dinosaurs, mechanism of smell, human settlement of the Americas, the provenance of Earth’s oil deposits, the nature of ball lightning, the evidence for cold nuclear fusion, the dangers from second-hand tobacco smoke, continental-drift theory, risks from adjuvants and preservatives in vaccines, and many more topics; see for instance Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland 2012. And of course society’s officialdom, the conventional wisdom, the mass media, all take their cue from the scientific establishment.

The virtually universal dismissal of contradictory evidence stems from the nature of contemporary science and its role in society as the supreme arbiter of knowledge, and from the fact of widespread ignorance about the history of science, as discussed in earlier posts in this series (Dangerous knowledge; Dangerous knowledge II: Wrong knowledge about the history of science; Dangerous knowledge III: Wrong knowledge about science).

The upshot is a vicious cycle. Ignorance of history makes it seem incredible that “science” would ignore evidence, so claims to that effect on any given topic are brushed aside — because it is not known that science has ignored contrary evidence routinely. But that fact can only be recognized after noting the accumulation of individual topics on which this has happened, evidence being ignored. That’s the vicious cycle.

Wrong knowledge about science and the history of science impedes recognition that evidence is being ignored in any given actual case. Thereby radical progress is nowadays being greatly hindered, and public policies are being misled by flawed interpretations enshrined by the scientific consensus. Society has succumbed to what President Eisenhower warned against (Farewell speech, 17 January 1961):

in holding scientific research and discovery in respect, as we should,
we must also be alert to the equal and opposite danger
that public policy could itself become the captive
of a scientific-technological elite.

The vigorous defense of established theories and the refusal to consider contradictory evidence mean that once theories have been widely enough accepted, they soon become knowledge monopolies, and support for research establishes the contemporary theory as a research cartel (“Science in the 21st Century: Knowledge Monopolies and Research Cartels”).

The presently dysfunctional circumstances have been recognized only by two quite small groups of people:

  1. Observers and critics (historians, philosophers, sociologists of science, scholars of Science & Technology Studies)
  2. Researchers whose own experiences and interests happened to cause them to come across facts that disprove generally accepted ideas — for example Duesberg, Seitz, the astronomers cited above, etc. But these researchers only recognize the unwarranted dismissal of evidence in their own specialty, not that it is a general phenomenon (see my talk, “HIV/AIDS blunder is far from unique in the annals of science and medicine” at the 2009 Oakland Conference of Rethinking AIDS; mov file can be downloaded at http://ra2009.org/program.html, but streaming from there does not work).

Such dissenting researchers find themselves progressively excluded from mainstream discourse, and that exclusion makes it increasingly unlikely that their arguments and documentation will gain attention. Moreover, frustrated by a lack of attention from mainstream entities, dissenters from a scientific consensus find themselves listened to and appreciated increasingly only by people outside the mainstream scientific community, to whom the conventional wisdom also pays no attention: for instance, parapsychologists, ufologists, cryptozoologists. Such associations, and the conventional wisdom’s consequent assigning of guilt by association, further entrench the vicious cycle of dangerous knowledge that rests on accepting contemporary scientific consensuses as not to be questioned — see chapter 2 in Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth and “Good Company and Bad Company”, pp. 118–9 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017).

Posted in conflicts of interest, consensus, denialism, funding research, global warming, media flaws, peer review, resistance to discovery, science is not truth, science policy, scientific culture, scientism, scientists are human, unwarranted dogmatism in science

Dangerous knowledge II: Wrong knowledge about the history of science

Posted by Henry Bauer on 2018/01/27

Knowledge of history among most people is rarely more than superficial; the history of science is much less known even than is general (political, social) history. Consequently, what many people believe they know about science is typically wrong and dangerously misleading.

General knowledge about history, the conventional wisdom about historical matters, depends on what society as a whole has gleaned from historians, the people who have devoted enormous time and effort to assemble and assess the available evidence about what happened in the past.

Society on the whole does not learn about history from the specialists, the primary research historians. Rather, teachers of general national and world histories in schools and colleges have assembled some sort of whole story from all the specialist bits, perforce taking on trust what the specialist cadres have concluded. The interpretations and conclusions of the primary specialists are filtered and modified by second-level scholars and teachers. So what society as a whole learns about history as a whole is a sort of third-hand impression of what the specialists have concluded.

History is a hugely demanding pursuit. Its mission is so vast that historians have increasingly had to specialize. There are specialist historians of economics, of mathematics, and of other aspects of human cultures; and there are historians who specialize in particular eras in particular places, say Victorian Britain. Written material still extant is an important resource, of course, but it cannot be taken at face value; it has to be evaluated for the author’s identity and for clues as to bias and ignorance. Artefacts provide clues, and various techniques from chemistry and physics help to establish dates or to test putative dates. What further makes doing history so demanding is the need to capture the spirit of a different time and place, an holistic sense of it; on top of which the historian needs a deep, authentic understanding of the particular aspect of society under scrutiny. So doing economic history, for example, calls not only for a good sense of general political history; it requires also a good understanding of the whole subject of economics itself in its various stages of development.

The history of science is a sorely neglected specialty within history. There are History Departments in colleges and universities without a specialist in the history of science — which entails also that many of the people who — at both school and college levels — teach general history or political or social or economic history, or the history of particular eras or places, have never themselves learned much about the history of science, not even as to how it impinges on their own specialty. One reason for the incongruous place — or lack of a place — for the history of science with respect to the discipline of history as a whole is the need for historians to command an authentic understanding of the particular aspect of history that is their special concern. Few if any people whose career ambition was to become historians have the needed familiarity with any science; so a considerable proportion of historians of science are people whose careers began in a science and who later turned to history.

Most of the academic research in the history of science has been carried on in separate Departments of History of Science, or Departments of History and Philosophy of Science, or Departments of History and Sociology of Science, or in the relatively new (founded within the last half a century) Departments of Science & Technology Studies (STS).

Before there were historian specialists in the history of science, some historical aspects were typically mentioned within courses in the sciences. Physicists might hear bits about Galileo, Newton, Einstein. Chemists would be introduced to thought-bites about alchemy, Priestley and oxygen, Haber and nitrogen fixation, atomic theory and the Greeks. Such anecdotes were what filtered into general knowledge about the history of science; and the resulting impressions are grossly misleading. Within science courses, the chief interest is in the contemporary state of known facts and established theories, and historical aspects are mentioned only in so far as they illustrate progress toward ever better understanding, yielding an overall sense that science has been unswervingly progressive and increasingly trustworthy. In other words, science courses judge the past in terms of what the present knows, an approach that the discipline of history recognizes as unwarranted, since the purpose of history is to understand earlier periods fully, to know about the people and events in their own terms, under their own values.

*                   *                   *                  *                    *                   *

How to explain that science, unlike other human ventures, has managed to get better all the time? It must be that there is some “scientific method” that ensures faithful adherence to the realities of Nature. Hence the formulaic “scientific method” taught in schools, and in college courses in the behavioral and social sciences (though not in the natural sciences).

Specialist historians of science, and philosophers and sociologists of science and scholars of Science & Technology Studies all know that science is not done by any such formulaic scientific method, and that the development of modern science owes as much to the precursors and ground-preparers as to such individual geniuses as Newton, Galileo, etc. — Newton, by the way, being so fully aware of that as to have used the modest “If I have seen further it is by standing on the shoulders of giants” mentioned in my previous post (Dangerous knowledge).

*                     *                   *                   *                   *                   *

Modern science cannot be understood, cannot be appreciated, without an authentic sense of the actual history of science. Unfortunately, for the reasons outlined above, contemporary culture is pervaded partly by ignorance and partly by wrong knowledge of the history of science. In elementary schools and in high schools, and in college textbooks in the social sciences, students are mis-taught that science is characterized, defined, by use of “the scientific method”. That is simply not so: see Chapter 2 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017) and sources cited there. The so-called scientific method is an invention of philosophical speculation by would-be interpreters of the successes of science; working scientists never subscribed to this fallacy, see for instance Reflections of a Physicist (P. W. Bridgman, Philosophical Library, 1955), or the physicist David Goodstein in 1992: “I would strongly recommend this book to anyone who hasn’t yet heard that the scientific method is a myth. Apparently there are still lots of those folks around” (“this book” being my Scientific Literacy and the Myth of the Scientific Method).

The widespread misconception about the scientific method is compounded by the misconception that the progress of science has been owing to individual acts of genius by the people whose names are common currency — Galileo, Newton, Darwin, Einstein, etc. — whereas in reality those unquestionably outstanding individuals were not creating out of the blue but rather placing keystones, putting final touches, synthesizing; see for instance Tony Rothman’s Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). The same insight is expressed in Stigler’s Law, that discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II, 39 [1980] 147–58).

That misconception about science progressing by lauded leaps by applauded geniuses is highly damaging, since it hides the crucially important lesson that the acts of genius we praise in hindsight were vigorously, often even viciously, resisted by their contemporaries, by the contemporary scientific establishment and scientific consensus; see “Resistance by scientists to scientific discovery” (Bernard Barber, Science, 134 [1961] 596–602); “Prematurity and uniqueness in scientific discovery” (Gunther Stent, Scientific American, December 1972, 84–93); Prematurity in Scientific Discovery: On Resistance and Neglect (Ernest B. Hook (ed.), University of California Press, 2002).

What is perhaps most needed nowadays, as the authority of science is invoked in so many aspects of everyday affairs and official policies, is clarity that any contemporary scientific consensus is inherently and inevitably fallible; and that the scientific establishment will nevertheless defend it zealously, often unscrupulously, even when it is demonstrably wrong.

 

Recommended reading: The historiography of the history of science, its relation to general history, and related issues, as well as synopses of such special topics as evolution or relativity, are treated authoritatively in Companion to the History of Modern Science (eds. Cantor, Christie, Hodge, Olby; Routledge, 1996) [not to be confused with the encyclopedia titled Oxford Companion to the History of Modern Science, ed. Heilbron, Oxford University Press, 2003].

Posted in consensus, media flaws, resistance to discovery, science is not truth, scientific culture, scientific literacy, scientism, scientists are human, the scientific method, unwarranted dogmatism in science

Dangerous knowledge

Posted by Henry Bauer on 2018/01/24

It ain’t what you don’t know that gets you into trouble.
It’s what you know for sure that just ain’t so.

That’s very true.

In a mild way, the quote also illustrates itself since it is so often attributed wrongly; perhaps most often to Mark Twain but also to other humorists — Will Rogers, Artemus Ward, Kin Hubbard — as well as to inventor Charles Kettering, pianist Eubie Blake, baseball player Yogi Berra, and more (“Bloopers: Quote didn’t really originate with Will Rogers”).

Such mis-attributions of insightful sayings are perhaps the rule rather than the exception; sociologist Robert Merton even wrote a whole book (On the Shoulders of Giants, Free Press, 1965, & several later editions) tracing the many centuries of mis-attribution of the modest acknowledgment that “If I have seen further it is by standing on the shoulders of giants”.

No great harm comes from mis-attributing words of wisdom. Great harm is being done nowadays, however, by the acceptance of widely believed, supposedly scientific medical knowledge that just isn’t so: about hypertension, cholesterol, prescription drugs, and more (see works listed in What’s Wrong with Present-Day Medicine).

The trouble is that “science” was so spectacularly successful in elucidating so much about the natural world and contributing to so many useful technologies that it has come to be regarded as virtually infallible.

Historians and other specialist observers of scientific activity — philosophers, sociologists, political scientists, various others — of course know that science, no less than all other human activities, is inherently and unavoidably fallible.

Until the middle of the 20th century, science was pretty much an academic vocation not venturing very much outside the ivory towers. Consequently and fortunately, the innumerable things on which science went wrong in past decades and centuries did no significant damage to society as a whole; the errors mattered only within science and were corrected as time went by. Nowadays, however, science has come to pervade much of everyday life through its influences on industry, medicine, and official policies on much of what governments are concerned with: agriculture, public health, environmental matters, technologies of transport and of warfare, and so on. Official regulations deal with what is permitted to be in water and in the air and in innumerable man-made products; propellants in spray cans and refrigerants in cooling machinery have been banned, globally, because science (primarily chemists) persuaded the world that those substances were reaching the upper atmosphere and destroying the natural “layer” of ozone that absorbs some of the ultraviolet radiation from the sun, thereby protecting us from damage to eyes and skin. For the last three decades, science (primarily physicists) has convinced the world that human generation of carbon dioxide is warming the planet and causing irreversible climate change.

So when science goes wrong nowadays, that can do untold harm to national economies, and to whole populations of people if the matter has to do with health.

Yet science remains as fallible as it ever was, because it continues to be done by human beings. The popular illusion that science is objective and safeguarded from error by the scientific method is simply that, an illusion: the scientific method describes how science perhaps ought to be done, but how it is actually done depends on the human beings doing it, none of whom is immune from mistakes.

When I wrote that “science persuaded the world” or “convinced the world”, of course it was not science that did that, because science cannot speak for itself. Rather, the apparent “scientific consensus” at any given time is generally taken a priori as “what science says”. But it is rare that any scientific consensus represents what all pertinent experts think; and consensus is appealed to only when there is controversy, as Michael Crichton pointed out so cogently: “the claim of consensus has been the first refuge of scoundrels[,] … invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E=mc2. Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way”.

Yet the scientific consensus represents contemporary views incorporated in textbooks and disseminated by science writers and the mass media. Attempting to argue publicly against it on any particular topic encounters the pervasive acceptance of the scientific consensus as reliably trustworthy. What reason could there be to question “what science says”? There seems no incentive for anyone to undertake the formidable task of seeking out and evaluating the actual evidence for oneself.

Here is where real damage follows from what everyone knows that just happens not to be so. It is not so that a scientific consensus is the same as “what science says”, in other words what the available evidence is, let alone what it implies. On any number of issues, there are scientific experts who recognize flaws in the consensus and dissent from it. That dissent is not usually mentioned by the popular media, however; and if it should be mentioned then it is typically described as misguided, mistaken, “denialism”.

Examples are legion. On many matters that the popular media regard as settled, strong evidence and expert voices dissent from the scientific consensus: that the universe began with a Big Bang about 13 billion years ago; that anti-depressant drugs work specifically and selectively against depression; that human beings (the “Clovis” people) first settled the Americas about 13,000 years ago by crossing the Bering Strait; that the dinosaurs were brought to an end by the impact of a giant asteroid; that claims of nuclear fusion at ordinary temperatures (“cold fusion”) have been decisively disproved; that Alzheimer’s disease is caused by the build-up of plaques of amyloid protein; and more. Details are offered in my book, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (McFarland, 2012). That book also documents the widespread informed dissent from the views that human-generated carbon dioxide is the prime cause of global warming and climate change, and that HIV is the cause of AIDS (for the latter, see the compendium of evidence and sources at The Case against HIV).

The popular knowledge that just isn’t so is, directly, the belief that whatever the scientific consensus happens to be can safely be accepted as true for all practical purposes. That mistaken knowledge can be traced, however, to knowledge that isn’t so about the history of science, for that history is a very long story of the scientific consensus being wrong and later modified or replaced, quite often more than once.

Further posts will talk about why the real history of science is so little known.

 

Posted in consensus, denialism, global warming, media flaws, medical practices, prescription drugs, science is not truth, scientific literacy, scientism, scientists are human, the scientific method, unwarranted dogmatism in science | 4 Comments »

Science is broken: Illustrations from Retraction Watch

Posted by Henry Bauer on 2017/12/21

I commented before about Science is broken: Perverse incentives and the misuse of quantitative metrics have undermined the integrity of scientific research. The magazine The Scientist published on 18 December “Top 10 Retractions of 2017 — Making the list: a journal breaks a retraction record, Nobel laureates Do the Right Thing, and Seinfeld characters write a paper”, compiled by Retraction Watch. It should be widely read and digested for an understanding of the jungle of unreliable stuff nowadays put out under the rubric of “science”.

See also “Has all academic publishing become predatory? Or just useless? Or just vanity publishing?”

 

Posted in conflicts of interest, fraud in medicine, fraud in science, media flaws, science is not truth, scientific culture, scientists are human | Leave a Comment »

Fog Facts: Side effects and re-positioning of drugs

Posted by Henry Bauer on 2017/11/23

Fog Facts: things that are known and yet not known —
[not known to the conventional wisdom, the general public, the media
but known to those (few) who are genuinely informed about the subject]

For that delightful term, Fog Facts, I’m grateful to Larry Beinhart who introduced me to it in his novel “The Librarian”. There it’s used in connection with political matters, but it’s entirely appropriate for the disconnect between “what everyone knows” about blood pressure, cholesterol, prescription drugs, and things of that ilk, and what the actual facts are in the technical literature.

For example, the popular shibboleth is that drug companies spend hundreds of millions of dollars in the development of a new drug, and that’s why they need to make such large profits to plough back into research. The truth of the matter is that most new drugs originate in academic research, conducted to a great extent at public expense; and drug companies spend more on advertising and marketing than they do on research. All that is known to anyone who cares to read material other than what the drug-company ads say and what the news media disseminate; and yet it’s not known because too few people read the right things, even books by former editors of medical journals and academic researchers at leading universities and published by mainstream publishers; see “What’s wrong with modern medicine”.

When it comes to drug “development”, the facts are all hidden in plain view. There’s even a whole journal about it, Nature Reviews Drug Discovery, which began publication in 2002. I came to learn about this because Josh Nicholson had alerted me to an article in that journal, “Drug repositioning: identifying and developing new uses for existing drugs” (by Ted T. Ashburn and Karl B. Thor, 3 [2004] 673–83). I had never heard of “drug repositioning”. What could it mean?

Well, it means finding new uses for old drugs. And the basic reason for doing so is that it’s much easier and more profitable than trying to design or discover a new drug, because old drugs have already been approved as safe, and it’s already known how to manufacture them.

What seems obvious, however — albeit only as a Fog Fact — is that the very success of repositioning drugs should be a red flag warning against the drug-based medicine or drug-first medicine or drug-besotted medicine that has become standard practice in the United States. The rationale for prescribing a drug is that it will fix what needs attending to without seriously and adversely affecting anything else, in other words that there are no serious “side” effects. But repositioning a drug shows that it has a comparably powerful effect on something other than its original target. In other words, “side” effects may be as powerful and significant as the originally intended effect. Ashburn and Thor give a number of examples:

Cymbalta was originally prescribed to treat depression, anxiety, diabetic peripheral neuropathy, and fibromyalgia (all at about the same dosage, which might cause one to wonder how many different mechanisms or systems are actually being affected besides the intended one). The listed side effects do not include anything about urination, yet the drug has been repositioned as Duloxetine SUI to treat “stress urinary incontinence (SUI), a condition characterized by episodic loss of urine associated with sharp increases in intra-abdominal pressure (for example, when a person laughs, coughs or sneezes)”; and “Lilly is currently anticipating worldwide sales of Duloxetine SUI to approach US $800 million within four years of launch”.

Dapoxetine was not a success for analgesia or against depression, but came into its own to treat premature ejaculation.

Thalidomide was originally marketed to treat morning sickness, but it produced limb defects in babies. Later it was found effective against “erythema nodosum leprosum (ENL), an agonizing inflammatory condition of leprosy”. Moreover, since the birth defects may have been associated with blocking development of blood vessels, thalidomide might work against cancer; and indeed “Celgene recorded 2002 sales of US $119 million for Thalomid, 92% of which came from off-label use of the drug in treating cancer, primarily multiple myeloma . . . . Sales reached US $224 million in 2003 . . . . The lesson from the thalidomide story is that no drug is ever understood completely, and repositioning, no matter how unlikely, often remains a possibility” [emphasis added: once the FDA has approved drug A to treat condition B, individual doctors are allowed to prescribe it for other conditions as well, although drug companies are not allowed to advertise it for those other uses. That legal restriction is far from always honored, as demonstrated by the dozens of settlements paid by drug companies for breaking the law.]

Perhaps the prize for repositioning (so far) goes to Pfizer, which turned sildenafil, an unsuccessful treatment for angina, into Viagra, a very successful treatment for “erectile dysfunction”: “By 2003, sildenafil had annual sales of US $1.88 billion and nearly 8 million men were taking sildenafil in the United States alone”.

At any rate, Ashburn and Thor could not be clearer: the whole principle behind repositioning is that it’s more profitable to see what existing drugs might do than to look for what might be, biologically speaking, the best treatment for a given ailment. So anti-depressants get approved and prescribed against smoking, premenstrual dysphoria, or obesity; a Parkinson’s drug and a hypertension drug are prescribed for ADHD; an anti-anxiety medication is prescribed for irritable bowel syndrome; Alzheimer’s, whose etiology is not understood, gets treated with Reminyl, which, as Nivalin (generic galantamine), is also supposed to treat polio and paralysis. Celebrex, a VIOXX-type anti-arthritic, can be prescribed against breast and colon cancer; treatment of enlarged prostate is by the same drug used to combat hair loss; the infamous “morning after” pill for pregnancy termination can treat “psychotic major depression”; Raloxifene to treat breast and prostate cancer is magically able also to treat osteoporosis.

And so on and so forth. This whole business of drug repositioning exposes the fallacy of the concept that it is possible to find “a silver bullet”, a chemical substance that can be introduced into the human body to accomplish just one desired thing. That concept ought to be recognized as absurd a priori, since we know that human physiology is an interlocking network of signals, feedback, attempted homeostasis, defenses against intruders.

It is one thing to use, for brief periods of time, toxins that can help the body clear infections — sulfa drugs, antibiotics. It is quite another, a conceit of ill-founded hubris, to administer powerful chemicals to decrease blood pressure, lower cholesterol, and the like: in other words, to attempt to alter interlocking self-regulating systems as though one single aspect of them could be changed without doing God-only-knows-what-else elsewhere.

The editorial in the first issue (January 2002) of Nature Reviews Drug Discovery was actually clear about this: “drugs need to work in whole, living systems”.

But that editorial also gave the reason for the present-day emphasis on medicine by drugs: “Even with vastly increased R & D spending, the top 20 pharmaceutical companies still churn out only around 20 drugs per year between them, far short of the 4-5 new drugs that analysts say they each need to produce to justify their discovery and development costs”. In other words, those 20 companies would need 80–100 new approvals a year between them to satisfy their own analysts, yet they manage only about a quarter to a fifth of that.

And the editorial also mentions one of the deleterious “side” effects of the rush to introduce new drugs: “off-target effects . . . have led to the vastly increased number of costly late-stage failures seen in recent years (approximately half the withdrawals in the past 20 years have occurred since 1997)” — “off-target effects” being a synonym for “side” effects.

It’s not only that new drugs are being rushed to market. As a number of people have pointed out, drug companies also create their own markets by inventing diseases like attention-deficit disorder, erectile dysfunction, generalized anxiety disorder, and so on and on. Any deviation of behavior from what might naively be described as “normal” offers the opportunity to discover a new disease and to re-position a drug.

The ability of drug companies to sell drugs for new diseases is helped by the common misconception about “risk factors”. Medication against hypertension or high cholesterol, for example, is based on the presumption that both raise the risk of heart attack, stroke, and other undesirable contingencies because both are “risk factors” for such contingencies. But “risk factor” describes only an observed association, a correlation, not an identified causation. Correlation never proves causation. “Treating” hypertension or high cholesterol makes sense only if those things are causes, and they have not been shown to be that. On the other hand, lifelong ingestion of drugs is certainly known to have potentially dangerous consequences.
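The logical gap here can be made concrete with a toy simulation; what follows is a minimal sketch in Python, with invented numbers that stand in for no real study. A hidden common cause, labeled “age” for the sake of the story, drives both “blood pressure” and “plaque”; neither of those affects the other in the model, yet the two come out strongly correlated, which is exactly the situation in which loose talk of “risk factors” invites unwarranted causal conclusions.

    import random

    random.seed(1)

    # Hypothetical model: "age" is the only cause. Blood pressure and
    # plaque each depend on age plus independent noise; neither one
    # affects the other.
    n = 10_000
    age = [random.uniform(20, 80) for _ in range(n)]
    bp = [100 + 0.5 * a + random.gauss(0, 5) for a in age]
    plaque = [2.0 + 0.1 * a + random.gauss(0, 1) for a in age]

    def pearson(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    print(f"correlation(bp, plaque) = {pearson(bp, plaque):.2f}")
    # Prints a correlation of about 0.75, although intervening on blood
    # pressure in this model would do nothing whatever to plaque: the
    # association comes entirely from the shared dependence on age.

Nothing in the correlation itself distinguishes this toy world from one in which blood pressure really does cause plaque; only intervention, or knowledge of mechanism, could tell the two apart.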

Modern drug-based, really drug-obsessed medical practice is as misguided as “Seeking Immortality”.

Posted in fraud in medicine, legal considerations, media flaws, medical practices, prescription drugs | Leave a Comment »

 