Skepticism about science and medicine

In search of disinterested science


Where to turn for disinterested scientific knowledge and insight?

Posted by Henry Bauer on 2018/02/11

The “vicious cycle of wrong knowledge” illustrates the dilemma we face nowadays: Where to turn for disinterested scientific knowledge and insight?

In centuries past in the intellectual West, religious authorities had offered unquestionable truth. In many parts of the world, religious or political authorities still do. But in relatively emancipated, socially and politically open societies, the dilemma is inescapable. We accept that religion doesn’t have final answers on everything about the natural world, even if we accept the value of religious teachings about how we should behave as human beings. Science, it seemed, knew what religion didn’t, about the age of the Earth, about the evolution of living things, about all sorts of physical, material things. So “science” became the place to turn for reliable knowledge. We entered the Age of Science (Knight, 1986). But most of us recognize that scientific knowledge cannot be absolutely and finally true because, ultimately, it rests on experience, on induction from observations, which can never be a complete reflection of the natural world; there always remain the known unknowns and the unknown unknowns.

Nevertheless, for practical purposes we want to be guided by the best current understanding that science can afford. The problem becomes, how to glean the best current understanding that science can offer?

Society’s knee-jerk response is to consult the scientific community: scientific associations, lauded scientists, government agencies, scientific literature. What society hears, however, is not a disinterested analysis or filtering of what those sources say, because all of them conform to whatever the contemporary “scientific consensus” happens to be. And, as earlier discussed (Dangerous knowledge II: Wrong knowledge about the history of science), that consensus is inevitably fallible; yet the conventional wisdom is not on guard against that, largely because of misconceptions stemming from a wholesale ignorance of the history of science.

The crux of the problem is that scientific knowledge and ideas that do not conform to the scientific consensus are essentially invisible in the public sphere. In any case, society has no mechanism for ensuring that what the scientific consensus holds at any given time is the most faithful, authoritative reflection of the available evidence and its logical interpretation. That represents clear and present danger as “science” is increasingly turned to for advice on public policies, in an environment replete with claims of truth from many sides, people claiming to speak for religion or for science, or organizations claiming to do so, including sophisticated advertisements by commercial and political groups.

In less politically partisan times, Congress and the administration had the benefit of the Office of Technology Assessment (OTA), founded in 1972 to provide policy makers with advice, as objective and up-to-date as possible, about technical issues; but OTA was disbanded in 1995 for reasons of partisan politics, and no substitute has been established. Society badly needs some authoritative, disinterested, non-partisan mechanism for analyzing, filtering, and interpreting scientific claims.

The only candidate so far on offer for that task is a Science Court, apparently first mooted half a century ago by Arthur Kantrowitz (1967) in the form of an “institute for scientific judgment”, soon named by others as a Science Court (Cavicchi 1993; Field 1993; Mazur 1993; Task Force 1993). Such a Court’s sole mission would be to assess the validity of conflicting contemporary scientific and technical claims and advice.

The need for such a Court is most obvious in the context of impassioned controversy in the public arena, where political and ideological interests confuse and obfuscate the purely technical points, as for instance nowadays over global warming (A politically liberal global-warming skeptic?). Accordingly, a Science Court would need complete independence, for which the most appropriate available model is the United States Supreme Court. Indeed, perhaps a Science Court could be managed and supervised by the Supreme Court.

Many knotty issues besides independence present themselves in considering how a Science Court might function: the choice of judges or panels or juries; the choice of issues to take on; possibilities for appealing findings. For an extended discussion of such matters, see chapter 12 of Science Is Not What You Think and the further sources given there. But the salient point is this:

Society needs but lacks an authoritative, disinterested, non-partisan mechanism for adjudicating conflicting scientific advice. A Science Court seems the only conceivable possibility.

———————————————————–

Jon R. Cavicchi, “The Science Court: A Bibliography”, RISK — Issues in Health and Safety, 4 [1993] 171–8.

Thomas G. Field, Jr., “The Science Court Is Dead; Long Live the Science Court!” RISK — Issues in Health and Safety, 4 [1993] 95–100.

Arthur Kantrowitz, “Proposal for an Institution for Scientific Judgment”, Science, 156 [1967] 763–4.

David Knight, The Age of Science, Basil Blackwell, 1986.

Allan Mazur, “The Science Court: Reminiscence and Retrospective”, RISK — Issues in Health and Safety, 4 [1993] 161–70.

Task Force of the Presidential Advisory Group on Anticipated Advances in Science and Technology, “The Science Court Experiment: An Interim Report”, RISK — Issues in Health and Safety, 4 [1993] 179–88.



Dangerous knowledge IV: The vicious cycle of wrong knowledge

Posted by Henry Bauer on 2018/02/03

Peter Duesberg, universally admired scientist, cancer researcher, and leading virologist, member of the National Academy of Sciences, recipient of a seven-year Outstanding Investigator Grant from the National Institutes of Health, was astounded when the world turned against him because he pointed to the clear fact that HIV had never been proven to cause AIDS and to the strong evidence that, indeed, no retrovirus could behave in the postulated manner.

Frederick Seitz, at one time President of the National Academy of Sciences and for some time President of Rockefeller University, became similarly non grata for pointing out that parts of an official report contradicted one another about whether human activities had been proven to be the prime cause of global warming (“A major deception on global warming”, Wall Street Journal, 12 June 1996).

A group of eminent astronomers and astrophysicists (among them Halton Arp, Hermann Bondi, Amitabha Ghosh, Thomas Gold, Jayant Narlikar) had their letter pointing to flaws in Big-Bang theory rejected by Nature.

These distinguished scientists illustrate (among many other instances involving less prominent scientists) that the scientific establishment routinely refuses to acknowledge evidence that contradicts contemporary theory, even evidence proffered by previously lauded fellow members of the elite establishment.

Society’s dangerous wrong knowledge about science includes the mistaken belief that science hews earnestly to evidence and that peer review — the behavior of scientists — includes considering new evidence as it comes in.

Not so. Refusal to consider disconfirming facts has been documented on a host of topics less prominent than AIDS or global warming: prescription drugs, Alzheimer’s disease, extinction of the dinosaurs, mechanism of smell, human settlement of the Americas, the provenance of Earth’s oil deposits, the nature of ball lightning, the evidence for cold nuclear fusion, the dangers from second-hand tobacco smoke, continental-drift theory, risks from adjuvants and preservatives in vaccines, and many more topics; see for instance Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland 2012. And of course society’s officialdom, the conventional wisdom, the mass media, all take their cue from the scientific establishment.

The virtually universal dismissal of contradictory evidence stems from the nature of contemporary science and its role in society as the supreme arbiter of knowledge, and from the fact of widespread ignorance about the history of science, as discussed in earlier posts in this series (Dangerous knowledge; Dangerous knowledge II: Wrong knowledge about the history of science; Dangerous knowledge III: Wrong knowledge about science).

The upshot is a vicious cycle. Ignorance of history makes it seem incredible that “science” would ignore evidence, so claims to that effect on any given topic are brushed aside, because it is not known that science has routinely ignored contrary evidence. But that routine can be recognized only by noting the accumulation of individual topics on which evidence has been ignored. That’s the vicious cycle.

Wrong knowledge about science and the history of science impedes recognizing that evidence is being ignored in any given actual case. Thereby radical progress is nowadays greatly hindered, and public policies are misled by flawed interpretations enshrined in the scientific consensus. Society has succumbed to what President Eisenhower warned against (Farewell speech, 17 January 1961):

in holding scientific research and discovery in respect, as we should,
we must also be alert to the equal and opposite danger
that public policy could itself become the captive
of a scientific-technological elite.

The vigorous defense of established theories and the refusal to consider contradictory evidence mean that once theories have been widely enough accepted, they soon become knowledge monopolies, and the system of research support establishes the contemporary theory as a research cartel (“Science in the 21st Century: Knowledge Monopolies and Research Cartels”).

The presently dysfunctional circumstances have been recognized only by two quite small groups of people:

  1. Observers and critics (historians, philosophers, sociologists of science, scholars of Science & Technology Studies)
  2. Researchers whose own experiences and interests happened to cause them to come across facts that disprove generally accepted ideas — for example Duesberg, Seitz, the astronomers cited above, etc. But these researchers only recognize the unwarranted dismissal of evidence in their own specialty, not that it is a general phenomenon (see my talk, “HIV/AIDS blunder is far from unique in the annals of science and medicine” at the 2009 Oakland Conference of Rethinking AIDS; mov file can be downloaded at http://ra2009.org/program.html, but streaming from there does not work).

Such dissenting researchers find themselves progressively excluded from mainstream discourse, and that exclusion makes it increasingly unlikely that their arguments and documentation will gain attention. Moreover, frustrated by the lack of attention from mainstream entities, dissenters from a scientific consensus find themselves listened to and appreciated increasingly only by people outside the mainstream scientific community, people to whom the conventional wisdom also pays no attention: parapsychologists, ufologists, cryptozoologists, for instance. Such associations, and the conventional wisdom’s consequent assigning of guilt by association, further entrench the vicious cycle of dangerous knowledge that rests on accepting contemporary scientific consensuses as not to be questioned; see chapter 2 in Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth and “Good Company and Bad Company”, pp. 118–19 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017).


Dangerous knowledge II: Wrong knowledge about the history of science

Posted by Henry Bauer on 2018/01/27

Knowledge of history among most people is rarely more than superficial; the history of science is even less known than general (political, social) history. Consequently, what many people believe they know about science is typically wrong and dangerously misleading.

General knowledge about history, the conventional wisdom about historical matters, depends on what society as a whole has gleaned from historians, the people who have devoted enormous time and effort to assemble and assess the available evidence about what happened in the past.

Society on the whole does not learn about history from the specialists, the primary research historians. Rather, teachers of general national and world histories in schools and colleges have assembled some sort of whole story from all the specialist bits, perforce taking on trust what the specialist cadres have concluded. The interpretations and conclusions of the primary specialists are filtered and modified by second-level scholars and teachers. So what society as a whole learns about history as a whole is a sort of third-hand impression of what the specialists have concluded.

History is a hugely demanding pursuit. Its mission is so vast that historians have increasingly had to specialize. There are specialist historians of economics, of mathematics, and of other aspects of human cultures; and there are historians who specialize in particular eras in particular places, say Victorian Britain. Written material still extant is an important resource, of course, but it cannot be taken literally; it has to be evaluated for the author’s identity and for clues as to bias and ignorance. Artefacts provide clues, and various techniques from chemistry and physics help to discover dates or to test putative dates. What further makes doing history so demanding is the need to capture the spirit of a different time and place, an holistic sense of it; on top of which the historian needs a deep, authentic understanding of the particular aspect of society under scrutiny. So doing economic history, for example, calls not only for a good sense of general political history but also for a good understanding of the whole subject of economics itself in its various stages of development.

The history of science is a sorely neglected specialty within history. There are History Departments in colleges and universities without a specialist in the history of science — which entails also that many of the people who — at both school and college levels — teach general history or political or social or economic history, or the history of particular eras or places, have never themselves learned much about the history of science, not even as to how it impinges on their own specialty. One reason for the incongruous place — or lack of a place — for the history of science with respect to the discipline of history as a whole is the need for historians to command an authentic understanding of the particular aspect of history that is their special concern. Few if any people whose career ambition was to become historians have the needed familiarity with any science; so a considerable proportion of historians of science are people whose careers began in a science and who later turned to history.

Most of the academic research in the history of science has been carried on in separate Departments of History of Science, or Departments of History and Philosophy of Science, or Departments of History and Sociology of Science, or in the relatively new (founded within the last half a century) Departments of Science & Technology Studies (STS).

Before there were historian specialists in the history of science, some historical aspects were typically mentioned within courses in the sciences. Physicists might hear bits about Galileo, Newton, Einstein. Chemists would be introduced to thought-bites about alchemy, Priestley and oxygen, Haber and nitrogen fixation, atomic theory and the Greeks. Such anecdotes were what filtered into general knowledge about the history of science; and the resulting impressions are grossly misleading. Within science courses, the chief interest is in the contemporary state of known facts and established theories, and historical aspects are mentioned only in so far as they illustrate progress toward ever better understanding, yielding an overall sense that science has been unswervingly progressive and increasingly trustworthy. In other words, science courses judge the past in terms of what the present knows, an approach that the discipline of history recognizes as unwarranted, since the purpose of history is to understand earlier periods fully, to know about the people and events in their own terms, under their own values.

*                   *                   *                  *                    *                   *

How to explain that science, unlike other human ventures, has managed to get better all the time? It must be that there is some “scientific method” that ensures faithful adherence to the realities of Nature. Hence the formulaic “scientific method” taught in schools, and in college courses in the behavioral and social sciences (though not in the natural sciences).

Specialist historians of science, philosophers and sociologists of science, and scholars of Science & Technology Studies all know that science is not done by any such formulaic scientific method, and that the development of modern science owes as much to the precursors and ground-preparers as to such individual geniuses as Newton and Galileo — Newton, by the way, being so fully aware of that as to have offered the modest acknowledgment “If I have seen further it is by standing on the shoulders of giants” mentioned in my previous post (Dangerous knowledge).

*                     *                   *                   *                   *                   *

Modern science cannot be understood, cannot be appreciated, without an authentic sense of the actual history of science. Unfortunately, for the reasons outlined above, contemporary culture is pervaded partly by ignorance and partly by wrong knowledge of the history of science. In elementary schools and in high schools, and in college textbooks in the social sciences, students are mis-taught that science is characterized, defined, by use of “the scientific method”. That is simply not so: see chapter 2 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017) and sources cited there. The so-called scientific method is an invention of philosophical speculation by would-be interpreters of the successes of science; working scientists never subscribed to this fallacy. See for instance Reflections of a Physicist (P. W. Bridgman, Philosophical Library, 1955), or the physicist David Goodstein writing in 1992: “I would strongly recommend this book to anyone who hasn’t yet heard that the scientific method is a myth. Apparently there are still lots of those folks around” (“this book” being my Scientific Literacy and the Myth of the Scientific Method).

The widespread misconception about the scientific method is compounded by the misconception that the progress of science has been owing to individual acts of genius by the people whose names are common currency — Galileo, Newton, Darwin, Einstein, etc. — whereas in reality those unquestionably outstanding individuals were not creating out of the blue but rather placing keystones, putting final touches, synthesizing; see for instance Tony Rothman’s Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). The same insight is expressed in Stigler’s Law, that discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II, 39 [1980] 147–58).

That misconception about science progressing by lauded leaps by applauded geniuses is highly damaging, since it hides the crucially important lesson that the acts of genius we praise in hindsight were vigorously, often even viciously, resisted by their contemporaries — their contemporary scientific establishment and scientific consensus; see “Resistance by scientists to scientific discovery” (Bernard Barber, Science, 134 [1961] 596–602); “Prematurity and uniqueness in scientific discovery” (Gunther Stent, Scientific American, December 1972, 84–93); and Prematurity in Scientific Discovery: On Resistance and Neglect (Ernest B. Hook (ed.), University of California Press, 2002).

What is perhaps most needed nowadays, as the authority of science is invoked in so many aspects of everyday affairs and official policies, is clarity that any contemporary scientific consensus is inherently and inevitably fallible; and that the scientific establishment will nevertheless defend it zealously, often unscrupulously, even when it is demonstrably wrong.

 

Recommended reading: The historiography of the history of science, its relation to general history, and related issues, as well as synopses of such special topics as evolution or relativity, are treated authoritatively in Companion to the History of Modern Science (eds.: Cantor, Christie, Hodge, Olby; Routledge, 1996) [not to be confused with the encyclopedia titled Oxford Companion to the History of Modern Science, ed. Heilbron, Oxford University Press, 2003].


Dangerous knowledge

Posted by Henry Bauer on 2018/01/24

It ain’t what you don’t know that gets you into trouble.
It’s what you know for sure that just ain’t so.

That’s very true.

In a mild way, the quote also illustrates itself since it is so often attributed wrongly; perhaps most often to Mark Twain but also to other humorists — Will Rogers, Artemus Ward, Kin Hubbard — as well as to inventor Charles Kettering, pianist Eubie Blake, baseball player Yogi Berra, and more (“Bloopers: Quote didn’t really originate with Will Rogers”).

Such mis-attributions of insightful sayings are perhaps the rule rather than any exception; sociologist Robert Merton even wrote a whole book (On the Shoulders of Giants, Free Press 1965 & several later editions) about mis-attributions over many centuries of the modest acknowledgment that “If I have seen further it is by standing on the shoulders of giants”.

No great harm comes from mis-attributing words of wisdom. Great harm is being done nowadays, however, by accepting much widely believed and supposedly scientific medical knowledge; for example about hypertension, cholesterol, prescription drugs, and more (see works listed in What’s Wrong with Present-Day Medicine).

The trouble is that “science” was so spectacularly successful in elucidating so much about the natural world and contributing to so many useful technologies that it has come to be regarded as virtually infallible.

Historians and other specialist observers of scientific activity — philosophers, sociologists, political scientists, various others — of course know that science, no less than all other human activities, is inherently and unavoidably fallible.

Until the middle of the 20th century, science was pretty much an academic vocation not venturing very much outside the ivory towers. Consequently and fortunately, the innumerable things on which science went wrong in past decades and centuries did no significant damage to society as a whole; the errors mattered only within science and were corrected as time went by. Nowadays, however, science has come to pervade much of everyday life through its influences on industry, medicine, and official policies on much of what governments are concerned with: agriculture, public health, environmental matters, technologies of transport and of warfare, and so on. Official regulations deal with what is permitted to be in water and in the air and in innumerable man-made products; propellants in spray cans and refrigerants in cooling machinery have been banned, globally, because science (primarily chemists) persuaded the world that those substances were reaching the upper atmosphere and destroying the natural “layer” of ozone that absorbs some of the ultraviolet radiation from the sun, thereby protecting us from damage to eyes and skin. For the last three decades, science (primarily physicists) has convinced the world that human generation of carbon dioxide is warming the planet and causing irreversible climate change.

So when science goes wrong nowadays, that can do untold harm to national economies, and to whole populations of people if the matter has to do with health.

Yet science remains as fallible as it ever was, because it continues to be done by human beings. The popular belief that science is objective and safeguarded from error by the scientific method is simply an illusion: the scientific method describes how science perhaps ought to be done, but how it is actually done depends on the human beings doing it, none of whom is immune from mistakes.

When I wrote that “science persuaded the world” or “convinced the world”, of course it was not science that did that, because science cannot speak for itself. Rather, the apparent “scientific consensus” at any given time is generally taken a priori as “what science says”. But it is rare that any scientific consensus represents what all pertinent experts think; and consensus is appealed to only when there is controversy, as Michael Crichton pointed out so cogently: “the claim of consensus has been the first refuge of scoundrels[,] … invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E=mc2. Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way”.

Yet the scientific consensus represents contemporary views incorporated in textbooks and disseminated by science writers and the mass media. Attempting to argue publicly against it on any particular topic encounters the pervasive acceptance of the scientific consensus as reliably trustworthy. What reason could there be to question “what science says”? There seems no incentive for anyone to undertake the formidable task of seeking out and evaluating the actual evidence for oneself.

Here is where real damage follows from what everyone knows that just happens not to be so. It is not so that a scientific consensus is the same as “what science says”, in other words what the available evidence is, let alone what it implies. On any number of issues, there are scientific experts who recognize flaws in the consensus and dissent from it. That dissent is not usually mentioned by the popular media, however; and if it should be mentioned then it is typically described as misguided, mistaken, “denialism”.

Examples are legion. Strong evidence and expert voices dissent from the scientific consensus on many matters that the popular media regard as settled: that the universe began with a Big Bang about 13 billion years ago; that anti-depressant drugs work specifically and selectively against depression; that human beings (the “Clovis” people) first settled the Americas about 13,000 years ago by crossing the Bering Strait; that the dinosaurs were brought to an end by the impact of a giant asteroid; that claims of nuclear fusion at ordinary temperatures (“cold fusion”) have been decisively disproved; that Alzheimer’s disease is caused by the build-up of plaques of amyloid protein; and more. Details are offered in my book, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (McFarland, 2012). That book also documents the widespread informed dissent from the views that human-generated carbon dioxide is the prime cause of global warming and climate change, and that HIV is the cause of AIDS (for which see the compendium of evidence and sources at The Case against HIV).

The popular knowledge that just isn’t so is, most directly, the belief that whatever the scientific consensus happens to be can safely be accepted as true for all practical purposes. That mistaken knowledge can be traced, however, to knowledge that isn’t so about the history of science, for that history is a very long story of the scientific consensus being wrong and later modified or replaced, quite often more than once.

Further posts will talk about why the real history of science is so little known.

 


Science is broken: Illustrations from Retraction Watch

Posted by Henry Bauer on 2017/12/21

I commented before about Science is broken: Perverse incentives and the misuse of quantitative metrics have undermined the integrity of scientific research. The magazine The Scientist published on 18 December “Top 10 Retractions of 2017 — Making the list: a journal breaks a retraction record, Nobel laureates Do the Right Thing, and Seinfeld characters write a paper”, compiled by Retraction Watch. It should be widely read and digested for an understanding of the jungle of unreliable stuff nowadays put out under the rubric of “science”.

See also “Has all academic publishing become predatory? Or just useless? Or just vanity publishing?”

 


Fog Facts: Side effects and re-positioning of drugs

Posted by Henry Bauer on 2017/11/23

Fog Facts: things that are known and yet not known —
[not known to the conventional wisdom, the general public, the media
but known to those (few) who are genuinely informed about the subject]

For that delightful term, Fog Facts, I’m grateful to Larry Beinhart who introduced me to it in his novel “The Librarian”. There it’s used in connection with political matters, but it’s entirely appropriate for the disconnect between “what everyone knows” about blood pressure, cholesterol, prescription drugs, and things of that ilk, and what the actual facts are in the technical literature.

For example, the popular shibboleth is that drug companies spend hundreds of millions of dollars in the development of a new drug, and that’s why they need to make such large profits to plough back into research. The truth of the matter is that most new drugs originate in academic research, conducted to a great extent at public expense; and drug companies spend more on advertising and marketing than they do on research. All that is known to anyone who cares to read material other than what the drug-company ads say and what the news media disseminate; and yet it’s not known because too few people read the right things, even books by former editors of medical journals and academic researchers at leading universities and published by mainstream publishers; see “What’s wrong with modern medicine”.

When it comes to drug “development”, the facts are all hidden in plain view. There’s even a whole journal about it, Nature Reviews — Drug Discovery, that began publication in 2002. I came to learn about this because Josh Nicholson had alerted me to an article in that journal, “Drug repositioning: identifying and developing new uses for existing drugs” (by Ted T. Ashburn and Karl B. Thor, 3 [2004] 673-82). I had never heard of “drug repositioning”. What could it mean?

Well, it means finding new uses for old drugs. And the basic reason for doing so is that it’s much easier and more profitable than trying to design or discover a new drug, because old drugs have already been approved as safe, and it’s already known how to manufacture them.

What seems obvious, however — albeit only as a Fog Fact — is that the very success of repositioning drugs should be a red flag warning against the drug-based medicine or drug-first medicine or drug-besotted medicine that has become standard practice in the United States. The rationale for prescribing a drug is that it will fix what needs attending to without seriously and adversely affecting anything else, in other words that there are no serious “side” effects. But repositioning a drug shows that it has a comparably powerful effect on something other than its original target. In other words, “side” effects may be as powerful and significant as the originally intended effect. Ashburn and Thor give a number of examples:

Cymbalta was originally prescribed to treat depression, anxiety, diabetic peripheral neuropathy, and fibromyalgia (all at about the same dosage, which might cause one to wonder how many different mechanisms or systems are actually being affected besides the intended one). The listed side effects do not include anything about urination, yet the drug has been repositioned as Duloxetine SUI to treat “stress urinary incontinence (SUI), a condition characterized by episodic loss of urine associated with sharp increases in intra-abdominal pressure (for example, when a person laughs, coughs or sneezes)”; and “Lilly is currently anticipating worldwide sales of Duloxetine SUI to approach US $800 million within four years of launch”.

Dapoxetine was not a success for analgesia or against depression, but came into its own to treat premature ejaculation.

Thalidomide was originally marketed to treat morning sickness, but it produced limb defects in babies. Later it was found effective against “erythema nodosum leprosum (ENL), an agonizing inflammatory condition of leprosy”. Moreover, since the birth defects may have been associated with blocking development of blood vessels, thalidomide might work against cancer; and indeed “Celgene recorded 2002 sales of US $119 million for Thalomid, 92% of which came from off-label use of the drug in treating cancer, primarily multiple myeloma . . . . Sales reached US $224 million in 2003 . . . . The lesson from the thalidomide story is that no drug is ever understood completely, and repositioning, no matter how unlikely, often remains a possibility” [emphasis added: once the FDA has approved drug A to treat condition B, individual doctors are allowed to prescribe it for other conditions as well, although drug companies are not allowed to advertise it for those other uses. That legal restriction is far from always honored, as demonstrated by the dozens of settlements paid by drug companies for breaking the law.]

Perhaps the prize for repositioning (so far) goes to Pfizer, which turned sildenafil, an unsuccessful treatment for angina, into Viagra, a very successful treatment for “erectile dysfunction”: “By 2003, sildenafil had annual sales of US $1.88 billion and nearly 8 million men were taking sildenafil in the United States alone”.

At any rate, Ashburn and Thor could not be more clear: The whole principle behind repositioning is that it’s more profitable to see what existing drugs might do than to look for what might be, biologically speaking, the best treatment for a given ailment. So anti-depressants get approved and prescribed against smoking, premenstrual dysphoria, or obesity; a Parkinson’s drug and a hypertension drug are prescribed for ADHD; an anti-anxiety medication is prescribed for irritable bowel syndrome; Alzheimer’s, whose etiology is not understood, gets treated with Reminyl, which, as Nivalin (generic galantamine), is also supposed to treat polio and paralysis. Celebrex, a VIOXX-type anti-arthritic, can be prescribed against breast and colon cancer; treatment of enlarged prostate is by the same drug used to combat hair loss; the infamous “morning after” pill for pregnancy termination can treat “psychotic major depression”; Raloxifene to treat breast and prostate cancer is magically able also to treat osteoporosis.

And so on and so forth. This whole business of drug repositioning exposes the fallacy of the concept that it is possible to find “a silver bullet”, a chemical substance that can be introduced into the human body to accomplish just one desired thing. That concept ought to be recognized as absurd a priori, since we know that human physiology is an interlocking network of signals, feedback, attempted homeostasis, and defenses against intruders.

It is one thing to use, for brief periods of time, toxins that can help the body clear infections — sulfa drugs, antibiotics. It is quite another conceit and ill-founded hubris to administer powerful chemicals to decrease blood pressure, lower cholesterol, and the like, in other words, to attempt to alter interlocking self-regulating systems as though one single aspect of them could be altered without doing God-only-knows-what-else elsewhere.

The editorial in the first issue (January 2002) of Nature Reviews Drug Discovery was actually clear about this: “drugs need to work in whole, living systems”.

But that editorial also gave the reason for the present-day emphasis on medicine by drugs: “Even with vastly increased R & D spending, the top 20 pharmaceutical companies still churn out only around 20 drugs per year between them, far short of the 4-5 new drugs that analysts say they each need to produce to justify their discovery and development costs”.

And the editorial also mentions one of the deleterious “side” effects of the rush to introduce new drugs: “off-target effects . . . have led to the vastly increased number of costly late-stage failures seen in recent years (approximately half the withdrawals in the past 20 years have occurred since 1997)” — “off-target effects” being a synonym for “side” effects.

It’s not only that new drugs are being rushed to market. As a number of people have pointed out, drug companies also create their own markets by inventing diseases like attention-deficit disorder, erectile dysfunction, generalized anxiety disorder, and so on and on. Any deviation of behavior from what might naively be described as “normal” offers the opportunity to discover a new disease and to re-position a drug.

The ability of drug companies to sell drugs for new diseases is helped by the common misconception about “risk factors”. Medication against hypertension or high cholesterol, for example, is based on the presumption that both of those raise the risk of heart attack, stroke, and other undesirable contingencies, because both are “risk factors” for such contingencies. But “risk factor” describes only an observed association, a correlation, not an identified causation. Correlation never proves causation. “Treating” hypertension or high cholesterol makes sense only if those things are causes, and they have not been shown to be that. On the other hand, lifelong ingestion of drugs is certainly known to have potentially dangerous consequences.
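The distinction between a risk factor and a cause can be made concrete with a small simulation. The sketch below is my own illustration, not data from any study: an unobserved common factor drives both a measured “risk factor” (here labeled blood pressure) and the bad outcome, while the measured quantity itself does nothing causally. The two are nonetheless strongly correlated, which is exactly the situation in which “treating” the marker accomplishes nothing.

```python
import random

random.seed(1)

# Illustrative model (assumed, not from any medical study): a hidden factor
# raises BOTH measured blood pressure and heart-attack risk; the pressure
# reading itself has no causal effect on the outcome.
n = 10_000
hidden = [random.gauss(0, 1) for _ in range(n)]
pressure = [h + random.gauss(0, 0.5) for h in hidden]
attack = [1 if h + random.gauss(0, 0.5) > 1.5 else 0 for h in hidden]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Pressure and heart attack are clearly correlated, yet in this model
# lowering pressure would change nothing: the correlation reflects the
# hidden common cause, not causation.
print(round(corr(pressure, attack), 2))
```

The correlation comes out well above zero every time, even though by construction the only way to reduce the outcome would be to act on the hidden factor, not on the measured one.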

Modern drug-based, really drug-obsessed medical practice is as misguided as “Seeking Immortality”.

Posted in fraud in medicine, legal considerations, media flaws, medical practices, prescription drugs | Tagged: , , | Leave a Comment »

Slowing of global warming officially confirmed — by reading between the lines

Posted by Henry Bauer on 2017/08/12

Climate-change skeptics, deniers, denialists, and also unbiased observers have pointed out that measured global temperatures seem to have risen at a slower rate, or perhaps ceased rising at all, since about 1998.
But the media have by and large reported the continuing official alarmist claims that each year has been the hottest on record, that a tipping point is nearly upon us, catastrophe is just around the corner — exemplified perhaps by Al Gore’s recent new film, An Inconvenient Sequel, with interviews of Gore in many media outlets.

However, that the pause in global warming is quite real is shown decisively by the way in which the mainstream consensus has tried to discount the facts, attempting to explain them away.
For example a pamphlet, published jointly by the Royal Society of London and the National Academy of Sciences of the USA, asserts that human-caused release of greenhouse gases is producing long-term warming and climate change, albeit there might be periods on the order of decades where there is no or little warming, as with the period since about 2000 when such natural causes as “lower solar activity and volcanic eruptions” have “masked” the rise in temperature (“Climate-Change Science or Climate-Change Propaganda?”).

That assertion misses the essential point: All the alarmist projections are based on computer models. The models failed to foresee the pause in temperature rise since 1998, demonstrating that the models are inadequate and therefore their projections are wrong. The models also fail to accommodate the period of global cooling rather than warming from the 1940s to the 1970s.

The crux is that the models do not incorporate important natural forces that affect the carbon cycle and the energy interactions. Instead, when the models are patently wrong, as from 1940s to 1970s and since 1998, the modelers and other researchers vested in the theory of human-caused climate change speculate about how one or other natural phenomenon somehow “masks” the asserted underlying temperature rise.

Above all, of course, the theorists neglect to mention that the Earth is still rebounding from the last Ice Age and will, if the last million years are any guide, continue to warm up for many tens of thousands of years (Climate-change facts: Temperature is not determined by carbon dioxide).
The various attempts to explain away the present pause in temperature rise were listed a few years ago at THE HOCKEY SCHTICK: “Updated list of 66 excuses for the 18-26 year ‘pause’ in global warming — ‘If you can’t explain the ‘pause’, you can’t explain the cause’”.
Here are a few of the dozens of excuses for the failure of global temperature to keep up with projections of the climate models:

1. Lower activity of the sun
That ought to raise eyebrows about this whole business. Essentially all the energy Earth receives from out there comes from the Sun. Apparently the computer models do not start by taking that into account?
(Peter Stauning, “Reduced solar activity disguises global temperature rise”, Atmospheric and Climate Sciences, 4 #1, January 2014: “Without the reduction in the solar activity-related contributions the global temperatures would have increased steadily from 1980 to present”)
And of course if the Sun stopped shining altogether…
Anyway, the models are wrong.

2. The heat is being hidden in the ocean depths (Cheng et al., “Improved estimates of ocean heat content from 1960 to 2015”, Science Advances, 10 March 2017, 3 #3, e1601545, DOI: 10.1126/sciadv.160154)
In other words, the models are wrong about the distribution of supposedly trapped heat.

3. Increased emission of aerosols especially in Asia (Kühn et al., “Climate impacts of changing aerosol emissions since 1996”, Geophysical Research Letters, 41 [14 July 2014] 4711–18, doi:10.1002/2014GL060349)
The climate models are wrong because they do not properly take aerosol emissions into account.

3a. “Volcanic aerosols, not pollutants, tamped down recent Earth warming, says CU study”
       In other words, the models are wrong because they cannot take into account the complexities of natural events that affect climate.

4. Reduced emission of greenhouse gases, following the Montreal Protocol eliminating ozone-depleting substances (Estrada et al., “Statistically derived contributions of diverse human influences to twentieth-century temperature changes”, Nature Geoscience, 6 (2013) 1050-55, doi:10.1038/ngeo1999)
       The climate models are wrong because they do not take into account all greenhouse-gas emissions.

5. “Contributions of stratospheric water vapor to decadal changes in the rate of global warming” (Solomon et al., Science, 327 [2010] 1219-23; DOI: 10.1126/science.1182488)
In other words, the models are wrong because they do not take account of variations in water vapor in the stratosphere.

6. Strengthened trade winds in the Pacific
Again, the models are wrong because they cannot take account of the innumerable natural phenomena that determine climate.

6a.     An amusing corollary is that “Seven years ago, we were told the opposite of what the new Matthew England paper says: slower (not faster) trade winds caused ‘the pause’”

And so on through another 50 or 60 different speculations. Although they are all different, there is a single commonality: the computer models used to represent Earth’s climate are woefully unable to do so. That might well be thought obvious a priori in view of the astronomical number of variables and interactions that determine climate. Moreover, a little less obviously perhaps, “global” climate is a human concept. The reality is that short- and long-term changes in climate by no means always occur in parallel in different regions.

Take-away points:

Mainstream climate science has demonstrated that
all the climate models are inadequate
and their projections have been wrong

Since the late 1990s, global temperatures have not risen
to the degree anticipated by climate models and climate alarmists
but that is not officially admitted
even as it is obvious from the excuses offered
for the failure of the models

Posted in consensus, denialism, global warming, media flaws, science is not truth, science policy, unwarranted dogmatism in science | Tagged: | 3 Comments »

Has all academic publishing become predatory? Or just useless? Or just vanity publishing?

Posted by Henry Bauer on 2017/06/14

A pingback to my post “Predatory publishers and fake rankings of journals” led me to “Where to publish and not to publish in bioethics – the 2017 list”.

That essay brings home just how pervasive for-profit publishing of purportedly scholarly material has become. The sheer volume of the supposedly scholarly literature raises the question: who looks at any part of it?

One of the essay’s links leads to a listing by the Kennedy Center for Ethics of 44 journals in the field of bioethics. Another link leads to a list of the “Top 100 Bioethics Journals in the World, 2015” by the author of the earlier “Top 50 Bioethics Journals and Top 250 Most Cited Bioethics Articles Published 2011-2015”.

What, I wonder, does any given bioethicist actually read? How many of these journals have even their Table of Contents scanned by most bioethicists?

Beyond that: Surely the potential value of scholarly work in bioethics is to improve the ethical practices of individuals and institutions in the real world. How does this spate of published material contribute to that potential value?

Those questions are purely rhetorical, of course. I suggest that the overwhelming mass of this stuff has no influence whatever on actual practices by doctors, researchers, clinics and other institutions.

This literature does, however, support the existence of a body of bioethicists whose careers are tied in some way to the publication of articles about bioethics.

The same sort of thing applies nowadays in every field of scholarship and science. The essay’s link to Key Journals in The Philosopher’s Index brings up a 79-page list, 10 items per page, of key [!] journals in philosophy.

This profusion of scholarly journals supports not only communities of publishing scholars in each field but also an expanding community of meta-scholars whose publications deal with the profusion of publication. The earliest work in this genre was the Science Citation Index, which capitalized on information technology to compile indexes through which all researchers could discover which of their published work had been cited and where.

That was unquestionably useful, including by making it possible to discover people working in one’s own specialty. But misuse became abuse, as administrators and bureaucrats began simply to count how often an individual’s work had been cited and to equate that number with quality.

No matter how often it has been pointed out that this equation is so wrong as to be beyond rescuing, the attraction of supposedly objective numbers, and the ease of obtaining them, have made citation-counting an apparently permanent part of scholarly practice.

Not only that. The practice has been extended to judging the influence a journal has by counting how often the articles in it have been cited, yielding a “journal impact factor” that, again, is typically conflated with quality, no matter how often or how learnedly the meta-scholars point out the fallacies in that equation — for example, different citing practices in different fields, different editorial practices that sometimes limit the number of permitted citations, the frequent citation of work that had been thought important but that turned out to be wrong.
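The impact-factor calculation itself is nothing more than a division, which makes the field-dependence fallacy easy to see in a sketch. The journal figures below are invented purely for illustration:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Journal impact factor: citations received this year to the journal's
    articles of the previous two years, divided by the count of those
    articles.  It is an arithmetic mean, typically dominated by a few
    heavily cited papers, and says nothing about any individual article."""
    return citations_this_year / citable_items_prev_two_years

# Two invented journals of identical size and, by assumption, identical
# quality; the only difference is the citing habit of their fields.
print(impact_factor(2400, 200))  # heavily citing field  -> 12.0
print(impact_factor(240, 200))   # sparsely citing field -> 1.2
```

A tenfold difference in the number, with no difference in anything a thoughtful reader would call quality: that is the conflation the meta-scholars keep pointing out.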

The scholarly literature had become absurdly voluminous even before the advent of on-line publishing. Meta-scholars had already learned several decades ago that most published articles are never cited by anyone other than the original author(s): see for instance J. R. Cole & S. Cole, Social Stratification in Science (University of Chicago Press, 1973); Henry W. Menard, Science: Growth and Change (Harvard University Press, 1971); Derek de Solla Price, Little Science, Big Science … And Beyond (Columbia University Press, 1986).

Derek Price (Science Since Babylon, Yale University Press, 1975) had also pointed out that the growth of science at an exponential rate since the 17th century had to cease in the latter half of the 20th century since science was by then consuming several percent of the GDP of developed countries. And indeed there has been cessation of growth in research funds; but the advent of the internet has made it possible for publication to continue to grow exponentially.
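Price’s argument is a matter of compound arithmetic. The sketch below uses assumed figures, a 15-year doubling time and an initial research share of 2% of GDP, chosen only to show the shape of the curve, not as historical data:

```python
def share_after(initial_share, doubling_years, years):
    """Share of GDP consumed after steady exponential growth
    at a fixed doubling time."""
    return initial_share * 2 ** (years / doubling_years)

# Starting from an assumed 2% of GDP and doubling every 15 years:
for years in (15, 30, 45, 60):
    print(years, round(share_after(0.02, 15, years), 2))
# 4%, 8%, 16%, 32%: growth at that rate must stop well within a century.
```

Whatever the exact starting figures, any activity doubling on a decadal scale collides with the size of the whole economy within a few doublings, which is Price’s point.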

Purely predatory publishing has added more useless material to what was already unmanageably voluminous, with only rare needles in these haystacks that could be of any actual practical use to the wider society.

Since almost all of this publication has to be paid for by the authors or their research grants or patrons, one could also characterize present-day scholarly and scientific publication as vanity publishing, serving only to the benefit of the author(s) — except that this glut of publishing now supports yet another publishing community, the scholars of citation indexes and journal impact factors, who concern themselves for example with “Google h5 vs Thomson Impact Factor” or who offer advice for potential authors and evaluators and administrators about “publishing or perishing”.
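The h-type indices in question (Google’s h5 is simply the h-index computed over a five-year window) are equally mechanical. A minimal sketch, with two invented authors chosen to show how little the number discriminates:

```python
def h_index(citation_counts):
    """h-index: the largest h such that h publications each have
    at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), 1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two invented authors, each with 60 total citations:
print(h_index([20, 20, 10, 5, 5]))  # 5
print(h_index([56, 1, 1, 1, 1]))    # 1
```

Identical totals, very different indices; which record is “better” is precisely the kind of judgment the number cannot make on its own.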

To my mind, the most damaging aspect of all this is not the waste of time and material resources on producing useless stuff; it is that judgment of quality by informed, thoughtful individuals is being steadily displaced by reliance on numbers generated by information technology, through procedures understood by all thinking people to be invalid substitutes for informed, thoughtful human judgment.


Posted in conflicts of interest, funding research, media flaws, scientific culture | Tagged: , , | 3 Comments »

How to interpret statistics; especially about drug efficacy

Posted by Henry Bauer on 2017/06/06

How (not) to measure the efficacy of drugs pointed out that the most meaningful data about a drug are the number of people who need to be treated for one person to reap benefit, NNT, and the number who need to be treated for one person to be harmed, NNH.

But this pertinent, useful information is rarely disseminated, and most particularly not by drug companies. Most commonly cited are statistics about drug performance relative to other drugs or relative to placebo. Just how misleading this can be is described in easily understood form in this discussion of the use of anti-psychotic drugs.
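The arithmetic behind NNT and NNH is simple: each is the reciprocal of an absolute risk difference. A minimal sketch in Python, with trial numbers that are hypothetical and chosen only for illustration:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1.0 / (control_event_rate - treated_event_rate)

def nnh(treated_harm_rate, control_harm_rate):
    """Number needed to harm: reciprocal of the absolute risk increase."""
    return 1.0 / (treated_harm_rate - control_harm_rate)

# Hypothetical trial: the bad outcome strikes 4% of controls but only 2% of
# the treated, while a serious side effect strikes 12% of the treated vs 2%
# of controls.  Marketing would trumpet a "50% relative risk reduction";
# the absolute numbers tell a different story.
print(round(nnt(0.04, 0.02)))   # 50 treated for one person to benefit
print(round(nnh(0.12, 0.02)))   # 10 treated for one person to be harmed
```

On these assumed numbers, among every 50 people treated one benefits while five are harmed, which is exactly the kind of information that relative-risk statistics obscure.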


That article (“Psychiatry defends its antipsychotics: a case study of institutional corruption” by Robert Whitaker) has many other points of interest. Most important, of course, the potent demonstration that official psychiatric practice is not evidence-based, rather, its aim is to defend the profession’s current approach.


In these ways, psychiatry differs only in degree from the whole of modern medicine — see WHAT’S WRONG WITH PRESENT-DAY MEDICINE  — and indeed from contemporary science on too many matters: Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland 2012.

Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, scientific culture, unwarranted dogmatism in science | Tagged: , | Leave a Comment »

Climate-change orthodoxy: alternative facts, uncertainty equals certainty, projections are not predictions, and other absurdities of the “scientific consensus”

Posted by Henry Bauer on 2017/05/10

G. K. Chesterton once suggested that the best argument for accepting the Christian faith lies in the reasons offered by atheists and skeptics against doing so. That interesting slant sprang to mind as I was trying to summarize the reasons for not believing the “scientific consensus” that blames carbon dioxide for climate change.

Of course the very best reason for not believing that CO2 causes climate change is the data, as summarized in an earlier post:

–>      Global temperatures have often been high while CO2 levels were low, and vice versa

–>     CO2 levels rise or fall after temperatures have risen or fallen

–>     Temperatures decreased between the 1940s and 1970s, and since about 1998 there has been a pause in warming, perhaps even cooling, while CO2 levels have risen steadily.

But disbelieving the official propaganda becomes much easier when one recognizes the sheer absurdities and illogicalities and self-contradictions committed unceasingly by defenders of the mainstream view.

1940s-1970s cooling
Mainstream official climate science is centered on models: computer programs that strive to simulate real-world phenomena. Any reasonably detailed description of such models soon reveals that there are far too many variables and interactions to make that feasible; and moreover that a host of assumptions are incorporated in all the models (1). In any case, the official models do not simulate the cooling trend of these three decades.
“Dr. James Hansen suspects the relatively sudden, massive output of aerosols from industries and power plants contributed to the global cooling trend from 1940-1970” (2).
But the models do not take aerosols into account; they are so flawed that they are unable to simulate a thirty-year period in which carbon emissions were increasing and temperatures decreasing. An obvious conclusion is that no forecast based on those models deserves to be given any credence.

One of the innumerable science-groupie web-sites expands on the aerosol speculation:
“40’s to 70’s cooling, CO2 rising?
This is a fascinating denialist argument. If CO2 is rising, as it was in the 40’s through the 70’s, why would there be cooling?
It’s important to understand that the climate has warmed and cooled naturally without human influence in the past. Natural cycle, or natural variability need to be understood if you wish to understand what modern climate forcing means. In other words modern or current forcing is caused by human industrial output to the atmosphere. This human-induced forcing is both positive (greenhouse gases) and negative (sulfates and aerosols).”

Fair enough; but the models fail to take account of natural cycles.

Rewriting history
The Soviet Union had an official encyclopedia that was revised as needed, for example by rewriting history to delete or insert people and events to correspond with a given day’s political correctness. Some climate-change enthusiasts also try to rewrite history: “There was no scientific consensus in the 1970s that the Earth was headed into an imminent ice age. Indeed, the possibility of anthropogenic warming dominated the peer-reviewed literature even then” (3). Compare that with a host of reproductions and citations of headlines from those cold times when media alarms were set off by what the “scientific consensus” indeed then was (4). And the cooling itself was, of course, real, as is universally acknowledged nowadays.

The media faithfully report what officialdom disseminates. Routinely, any “extreme” weather event is ascribed to climate change — anything worth featuring as “breaking news”, say tsunamis, hurricanes, bushfires in Australia and elsewhere. But the actual data reveal no increase in extreme events in recent decades: not Atlantic storms, nor Australian cyclones, nor US tornadoes, nor “global tropical cyclone accumulated energy”, nor extremely dry periods in the USA, in the last 150 years during which atmospheric carbon dioxide increased by 40% (pp. 46-51 in (1)). Nor have sea levels been rising in any unusual manner (Chapter 6 in (1)).

Defenders of climate-change dogma tie themselves in knots about whether carbon dioxide has already affected climate, whether its influence is to be seen in short-term changes or only over the long term. For instance, the attempt to explain 1940s-70s cooling presupposes that CO2 is only to be indicted for changes over much longer time-scales than mere decades. Perhaps the ultimate demonstration of wanting to have it both ways — only long-term, but also short-term — is illustrated by a pamphlet issued jointly by the Royal Society of London and the National Academy of Science of the USA (5, 6).

No warming since about 1998
Some official sources deny that there has been any cessation of warming in the new century or millennium. Others admit it indirectly by attempting to explain it away or dismiss it as irrelevant, for instance “slowdowns and accelerations in warming lasting a decade or more will continue to occur. However, long-term climate change over many decades will depend mainly on the total amount of CO2 and other greenhouse gases emitted as a result of human activities” (p. 2 in (5)); “shorter-term variations are mostly due to natural causes, and do not contradict our fundamental understanding that the long-term warming trend is primarily due to human-induced changes in the atmospheric levels of CO2 and other greenhouse gases” (p. 11 in (5)).

Obfuscating and misdirecting
The Met Office, the UK’s National Meteorological Service, is very deceptive about the recent lack of warming:

“Should climate models have predicted the pause?
Media coverage … of the launch of the 5th Assessment Report of the IPCC has again said that global warming is ‘unequivocal’ and that the pause in warming over the past 15 years is too short to reflect long-term trends.

[No one disputes the reality of long-term global warming — the issue is whether natural forces are responsible as opposed to human-generated carbon dioxide]

… some commentators have criticised climate models for not predicting the pause. …
We should not confuse climate prediction with climate change projection. Climate prediction is about saying what the state of the climate will be in the next few years, and it depends absolutely on knowing what the state of the climate is today. And that requires a vast number of high quality observations, of the atmosphere and especially of the ocean.
On the other hand, climate change projections are concerned with the long view; the impact of the large and powerful influences on our climate, such as greenhouse gases.

[Implying sneakily and without warrant that natural forces are not “large and powerful”. That is quite wrong and it is misdirection, the technique used by magicians to divert attention from what is really going on. By far the most powerful force affecting climate is the energy coming from the sun.]

Projections capture the role of these overwhelming influences on climate and its variability, rather than predict the current state of the variability itself.
The IPCC model simulations are projections and not predictions; in other words the models do not start from the state of the climate system today or even 10 years ago. There is no mileage in a story about models being ‘flawed’ because they did not predict the pause; it’s merely a misunderstanding of the science and the difference between a prediction and a projection.
[Misdirection again. The IPCC models failed to project or predict the lack of warming since 1998, and also the cooling of three decades after 1940. The point is that the models are inadequate, so neither predictions nor projections should be believed.]

… the deep ocean is likely a key player in the current pause, effectively ‘hiding’ heat from the surface. Climate model projections simulate such pauses, a few every hundred years lasting a decade or more; and they replicate the influence of the modes of natural climate variability, like the Pacific Decadal Oscillation (PDO) that we think is at the centre of the current pause.
[Here is perhaps the worst instance of misleading. The “Climate model projections” that are claimed to “simulate such pauses, a few every hundred years lasting a decade or more” are not made with the models that project alarming human-caused global warming, they are ad hoc models that explore the possible effects of variables not taken into account in the overall climate models.]”

The projections — which the media (as well as people familiar with the English language) fail to distinguish from predictions — that indict carbon dioxide as cause of climate change are based on models that do not incorporate possible effects of deep-ocean “hidden heat” or such natural cycles as the Pacific Decadal Oscillation. Those and other such factors as aerosols are considered only in trying to explain why the climate models are wrong, which is the crux of the matter. The climate models are wrong.

Asserting that uncertainty equals certainty
The popular media disseminated faithfully and uncritically the most recent official report’s claim that “Scientists are 95% certain that humans are responsible for the ‘unprecedented’ warming experienced by the Earth over the last few decades”.

Leave aside that the warming cannot be known to be “unprecedented” — global temperatures have been much higher in the past, and historical data are not fine-grained enough to compare rates of warming over such short time-spans as mere decades or centuries.

There is no such thing as “95% certainty”.
Certainty means 100%; anything else is a probability, not a certainty.
A probability of 95% may seem very impressive — until it is translated into its corollary: 5% probability of being wrong; and 5% is 1 in 20. I wouldn’t bet on anything that’s really important to me if there’s 1 chance in 20 of losing the bet.
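The corollary is worth making explicit with exact arithmetic (the 95% figure is the official one; the conversion to betting odds is just its reciprocal complement):

```python
from fractions import Fraction

claimed_certainty = Fraction(95, 100)     # the official "95% certain"
chance_of_being_wrong = 1 - claimed_certainty
print(chance_of_being_wrong)              # 1/20
print(1 / chance_of_being_wrong)          # 20, i.e. one chance in twenty
```

Stated as odds rather than as a percentage, the same number looks considerably less reassuring.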
So too with the frequent mantra that 97% or 98% of scientists, or some other superficially impressive percentage, support the “consensus” that global warming is owing to carbon dioxide (7):


“Depending on exactly how you measure the expert consensus, it’s somewhere between 90% and 100% that agree humans are responsible for climate change, with most of our studies finding 97% consensus among publishing climate scientists.”

In other words, 3% (“on average”) of “publishing climate scientists” disagree. And the history of science teaches unequivocally that even a 100% scientific consensus has in the past been wrong, most notably on the most consequential matters, those that advanced science spectacularly in what are often called “scientific revolutions” (8).
Furthermore, “publishing climate scientists” biases the scales a great deal, because peer review ensures that dissenting evidence and claims do not easily get published. In any case, those percentages are based on surveys incorporating inevitable flaws (sampling bias as with peer review, for instance). The central question is, “How convinced are you that most recent and near future climate change is, or will be, the result of anthropogenic causes”? On that, the “consensus” was only between 33% and 39%, showing that “the science is NOT settled” (9; emphasis in original).

Science groupies — unquestioning accepters of “the consensus”
The media and countless individuals treat the climate-change consensus dogma as Gospel Truth, leading to such extraordinary proposals as that of Philippe Sands QC, Professor of Law, that "False claims from climate sceptics that humans are not responsible for global warming and that sea level is not rising should be scotched by an international court ruling".

I would love to see any court take up the issue, because it would force defenders of the orthodox view to attempt to explain away all the data demonstrating that global warming and climate change are not driven primarily by carbon dioxide.

The central point

Official alarms and established scientific institutions rely not on empirical data, established facts about temperature and CO2, but on computer models that are demonstrably wrong.

Those of us who believe that science should be empirical, that it should follow the data and change theories accordingly, become speechless in the face of climate-change dogma defended in the manner described above. It would be screamingly funny, if only those who do it were not our own “experts” and official representatives (10). Even the Gods are helpless in the face of such determined ignoring of reality (11).

___________________________________

(1)    For example, chapter 10 in Howard Thomas Brady, Mirrors and Mazes, 2016; ISBN 978-1522814689. For a more general argument that models are incapable of accurately simulating complex natural processes, see O. H. Pilkey & L. Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can't Predict the Future, Columbia University Press, 2007
(2)    “40’s to 70’s cooling, CO2 rising?”
(3)    Thomas C. Peterson, William M. Connolley & John Fleck, “The myth of the 1970s global cooling scientific consensus”, Bulletin of the American Meteorological Society, September 2008, 1325-37
(4)    “History rewritten, Global Cooling from 1940 – 1970, an 83% consensus, 285 papers being ‘erased’”; 1970s Global Cooling Scare; 1970s Global Cooling Alarmism
(5)    Climate Change: Evidence & Causes—An Overview from the Royal Society and the U.S. National Academy of Sciences, National Academies Press; ISBN 978-0-309-30199-2
(6)    Relevant bits of (5) are cited in a review, Henry H. Bauer, "Climate-change science or climate-change propaganda?", Journal of Scientific Exploration, 29 (2015) 621-36
(7)    The 97% consensus on global warming
(7)    Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970; Bernard Barber, "Resistance by scientists to scientific discovery", Science, 134 (1961) 596-602; Gunther Stent, "Prematurity and uniqueness in scientific discovery", Scientific American, December 1972, pp. 84-93; Ernest B. Hook (ed), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002
(9)    Dennis Bray, "The scientific consensus of climate change revisited", Environmental Science & Policy, 13 (2010) 340-50; see also Joseph Bast & Roy Spencer, "The myth of the Climate Change '97%'", Wall Street Journal, 27 May 2014, p. A13
(10)  My mother's frequent repetitions engraved in my mind the German folk-saying, "Wenn der Narr nicht mein wär', lacht' ich mit". Google found it in the Deutsches Sprichwörter-Lexikon edited by Karl Friedrich Wilhelm Wander (#997, p. 922)
(11)  “Mit der Dummheit kämpfen Götter selbst vergebens”; Friedrich Schiller, Die Jungfrau von Orleans.

 
