Skepticism about science and medicine

In search of disinterested science

Archive for the ‘unwarranted dogmatism in science’ Category

Vaccines: The good, the bad, and the ugly

Posted by Henry Bauer on 2017/05/21

Only in recent years have I begun to wonder whether there are reasons not to follow official recommendations about vaccination. In the 1930s, I had the then-usual vaccinations, including (in Austria, perhaps elsewhere in Europe) against smallpox. I had a few others in later years when I traveled quite a bit.

But the Andrew Wakefield affair *, and the introduction of Gardasil **, showed me that official sources had become as untrustworthy about vaccines as they have become about prescription drugs.

It seems that Big Pharma had just about run out of new diseases to invent against which to create drugs and had turned to snake-oil marketing of vaccines. We are told, for example, that 1 in 3 people will experience shingles in their lifetime and should get vaccinated against it. Has one in three of your aged friends ever had shingles? Not among my family and friends. One of my buddies got himself vaccinated, and came down with shingles a couple of weeks later. His physician asserted that the attack would have been more severe if he hadn’t been vaccinated — no need for a control experiment, nor any need to doubt official claims.

So it’s remarkable that the Swedish Government has resisted attempts to make vaccinations compulsory (“Sweden bans mandatory vaccinations over ‘serious health concerns’” by Baxter Dmitry, 12 May 2017).

That article includes extracts from an interview of Robert F. Kennedy, Jr., on the Tucker Carlson Show, which included such tidbits as the continued presence of thimerosal (an organic mercury compound) in many vaccines, including the seasonal flu vaccines that everyone is urged to get, and the huge increase in the number of things against which vaccination is being recommended:

“I got three vaccines and I was fully compliant. I’m 63 years old. My children got 69 doses of 16 vaccines to be compliant. And a lot of these vaccines aren’t even for communicable diseases. Like Hepatitis B, which comes from unprotected sex, or using or sharing needles – why do we give that to a child on the first day of their life? And it was loaded with mercury.”

 

————————————————–

* See “Autism and Vaccines: Can there be a final unequivocal answer?” and “YES: Thimerosal CAN induce autism”

** See “Gardasil and Cervarix: Vaccination insanity” and many other posts recovered with SEARCH for “Gardasil” on my blogs: https://scimedskeptic.wordpress.com/?s=gardasil and https://hivskeptic.wordpress.com/?s=gardasil

Posted in fraud in medicine, legal considerations, medical practices, politics and science, prescription drugs, science is not truth, science policy, unwarranted dogmatism in science | Tagged: | Leave a Comment »

Superstitious belief in science

Posted by Henry Bauer on 2017/05/16

Most people have a mistaken, unrealistic view of “science”. A very damaging consequence is that scientific claims are given automatic respect even when that is unwarranted — as it always is with new claims, say about global warming. Dramatic changes in how science is done, especially since the mid-20th century, have made it less trustworthy than it used to be.

In 1987, historian John Burnham published How Superstition Won and Science Lost, arguing that modern science had not vanquished popular superstition by inculcating scientific, evidence-based thinking; rather, science had itself become the accepted authority on worldly matters, whose pronouncements are believed without question, in other words superstitiously, by society at large.

Burnham argued his case through a detailed analysis of how science is popularized, and especially of how that has changed over the decades. Some 30 years later, Burnham’s insight is perhaps even more important. Over those years, certain changes in scientific activity have also become evident that support Burnham’s conclusion from different directions: science has grown so much, and has become so specialized and bureaucratic and dependent on outside patronage, that it has lost any ability to self-correct. As with religion in medieval times, official pronouncements about science are usually accepted without further ado, and minority voices of dissent are dismissed and denigrated.

A full discussion with source references, far too long for a blog post, is available here.

Posted in conflicts of interest, consensus, denialism, politics and science, science is not truth, scientific culture, scientific literacy, scientism, scientists are human, unwarranted dogmatism in science | Tagged: | Leave a Comment »

Climate-change orthodoxy: alternative facts, uncertainty equals certainty, projections are not predictions, and other absurdities of the “scientific consensus”

Posted by Henry Bauer on 2017/05/10

G. K. Chesterton once suggested that the best argument for accepting the Christian faith lies in the reasons offered by atheists and skeptics against doing so. That interesting slant sprang to mind as I was trying to summarize the reasons for not believing the “scientific consensus” that blames carbon dioxide for climate change.

Of course the very best reason for not believing that CO2 causes climate change is the data, as summarized in an earlier post:

–> Global temperatures have often been high while CO2 levels were low, and vice versa
–> CO2 levels rise or fall after temperatures have risen or fallen
–> Temperatures decreased between the 1940s and 1970s, and since about 1998 there has been a pause in warming, perhaps even cooling, while CO2 levels have risen steadily

But disbelieving the official propaganda becomes much easier when one recognizes the sheer absurdities and illogicalities and self-contradictions committed unceasingly by defenders of the mainstream view.

1940s-1970s cooling
Mainstream official climate science is centered on models: computer programs that strive to simulate real-world phenomena. Any reasonably detailed description of such models soon reveals that there are far too many variables and interactions to make that feasible; and moreover that a host of assumptions are incorporated in all the models (1). In any case, the official models do not simulate the cooling trend of these three decades.
“Dr. James Hansen suspects the relatively sudden, massive output of aerosols from industries and power plants contributed to the global cooling trend from 1940-1970” (2).
But the models do not take aerosols into account; they are so flawed that they are unable to simulate a thirty-year period in which carbon emissions were increasing and temperatures decreasing. An obvious conclusion is that no forecast based on those models deserves to be given any credence.

One of the innumerable science-groupie web-sites expands on the aerosol speculation:
“40’s to 70’s cooling, CO2 rising?
This is a fascinating denialist argument. If CO2 is rising, as it was in the 40’s through the 70’s, why would there be cooling?
It’s important to understand that the climate has warmed and cooled naturally without human influence in the past. Natural cycle, or natural variability need to be understood if you wish to understand what modern climate forcing means. In other words modern or current forcing is caused by human industrial output to the atmosphere. This human-induced forcing is both positive (greenhouse gases) and negative (sulfates and aerosols).”

Fair enough; but the models fail to take account of natural cycles.

Rewriting history
The Soviet Union had an official encyclopedia that was revised as needed, for example by rewriting history to delete or insert people and events to correspond with a given day’s political correctness. Some climate-change enthusiasts also try to rewrite history: “There was no scientific consensus in the 1970s that the Earth was headed into an imminent ice age. Indeed, the possibility of anthropogenic warming dominated the peer-reviewed literature even then” (3). Compare that with the host of reproduced and cited headlines from those cold decades, when media alarms were set off by what the “scientific consensus” then actually was (4). And the cooling itself was, of course, real, as is universally acknowledged nowadays.

The media faithfully report what officialdom disseminates. Routinely, any “extreme” weather event is ascribed to climate change — anything worth featuring as “breaking news”, say tsunamis, hurricanes, bushfires in Australia and elsewhere. But the actual data reveal no increase in extreme events in recent decades: not Atlantic storms, nor Australian cyclones, nor US tornadoes, nor “global tropical cyclone accumulated energy”, nor extremely dry periods in the USA, in the last 150 years during which atmospheric carbon dioxide increased by 40% (pp. 46-51 in (1)). Nor have sea levels been rising in any unusual manner (Chapter 6 in (1)).

Defenders of climate-change dogma tie themselves in knots about whether carbon dioxide has already affected climate, whether its influence is to be seen in short-term changes or only over the long term. For instance, the attempt to explain the 1940s-70s cooling presupposes that CO2 is only to be indicted for changes over much longer time-scales than mere decades. Perhaps the ultimate demonstration of wanting to have it both ways — only long-term, but also short-term — is illustrated by a pamphlet issued jointly by the Royal Society of London and the National Academy of Sciences of the USA (5, 6).

No warming since about 1998
Some official sources deny that there has been any cessation of warming in the new century or millennium. Others admit it indirectly by attempting to explain it away or dismiss it as irrelevant, for instance “slowdowns and accelerations in warming lasting a decade or more will continue to occur. However, long-term climate change over many decades will depend mainly on the total amount of CO2 and other greenhouse gases emitted as a result of human activities” (p. 2 in (5)); “shorter-term variations are mostly due to natural causes, and do not contradict our fundamental understanding that the long-term warming trend is primarily due to human-induced changes in the atmospheric levels of CO2 and other greenhouse gases” (p. 11 in (5)).

Obfuscating and misdirecting
The Met Office, the UK’s National Meteorological Service, is very deceptive about the recent lack of warming:

“Should climate models have predicted the pause?
Media coverage … of the launch of the 5th Assessment Report of the IPCC has again said that global warming is ‘unequivocal’ and that the pause in warming over the past 15 years is too short to reflect long-term trends.

[No one disputes the reality of long-term global warming — the issue is whether natural forces are responsible as opposed to human-generated carbon dioxide]

… some commentators have criticised climate models for not predicting the pause. …
We should not confuse climate prediction with climate change projection. Climate prediction is about saying what the state of the climate will be in the next few years, and it depends absolutely on knowing what the state of the climate is today. And that requires a vast number of high quality observations, of the atmosphere and especially of the ocean.
On the other hand, climate change projections are concerned with the long view; the impact of the large and powerful influences on our climate, such as greenhouse gases.

[Implying sneakily and without warrant that natural forces are not “large and powerful”. That is quite wrong and it is misdirection, the technique used by magicians to divert attention from what is really going on. By far the most powerful force affecting climate is the energy coming from the sun.]

Projections capture the role of these overwhelming influences on climate and its variability, rather than predict the current state of the variability itself.
The IPCC model simulations are projections and not predictions; in other words the models do not start from the state of the climate system today or even 10 years ago. There is no mileage in a story about models being ‘flawed’ because they did not predict the pause; it’s merely a misunderstanding of the science and the difference between a prediction and a projection.
[Misdirection again. The IPCC models failed to project or predict the lack of warming since 1998, and also the cooling of three decades after 1940. The point is that the models are inadequate, so neither predictions nor projections should be believed.]

… the deep ocean is likely a key player in the current pause, effectively ‘hiding’ heat from the surface. Climate model projections simulate such pauses, a few every hundred years lasting a decade or more; and they replicate the influence of the modes of natural climate variability, like the Pacific Decadal Oscillation (PDO) that we think is at the centre of the current pause.
[Here is perhaps the worst instance of misleading. The “Climate model projections” that are claimed to “simulate such pauses, a few every hundred years lasting a decade or more” are not made with the models that project alarming human-caused global warming, they are ad hoc models that explore the possible effects of variables not taken into account in the overall climate models.]”

The projections — which the media (as well as people familiar with the English language) fail to distinguish from predictions — that indict carbon dioxide as cause of climate change are based on models that do not incorporate possible effects of deep-ocean “hidden heat” or such natural cycles as the Pacific Decadal Oscillation. Those and other such factors as aerosols are considered only in trying to explain why the climate models are wrong, which is the crux of the matter. The climate models are wrong.

Asserting that uncertainty equals certainty
The popular media faithfully and uncritically disseminated from the most recent official report the claim that “Scientists are 95% certain that humans are responsible for the ‘unprecedented’ warming experienced by the Earth over the last few decades”.

Leave aside that the warming cannot be known to be “unprecedented” — global temperatures have been much higher in the past, and historical data are not fine-grained enough to compare rates of warming over such short time-spans as mere decades or centuries.

There is no such thing as “95% certainty”.
Certainty means 100%; anything else is a probability, not a certainty.
A probability of 95% may seem very impressive — until it is translated into its corollary: 5% probability of being wrong; and 5% is 1 in 20. I wouldn’t bet on anything that’s really important to me if there’s 1 chance in 20 of losing the bet.
So too with the frequent mantra that 97% or 98% of scientists, or some other superficially impressive percentage, support the “consensus” that global warming is owing to carbon dioxide (7):

 

“Depending on exactly how you measure the expert consensus, it’s somewhere between 90% and 100% that agree humans are responsible for climate change, with most of our studies finding 97% consensus among publishing climate scientists.”

In other words, 3% (“on average”) of “publishing climate scientists” disagree. And the history of science teaches unequivocally that even a 100% scientific consensus has in the past been wrong, most notably on the most consequential matters, those that advanced science spectacularly in what are often called “scientific revolutions” (8).
Furthermore, counting only “publishing climate scientists” skews the sample a great deal, because peer review ensures that dissenting evidence and claims do not easily get published. In any case, those percentages are based on surveys with inevitable flaws (sampling bias, as with peer review, for instance). The central question is, “How convinced are you that most recent and near future climate change is, or will be, the result of anthropogenic causes?” On that, the “consensus” was only between 33% and 39%, showing that “the science is NOT settled” (9; emphasis in original).

Science groupies — unquestioning accepters of “the consensus”
The media and countless individuals treat the climate-change consensus dogma as Gospel Truth, leading to such extraordinary proposals as that by Professor of Law, Philippe Sands, QC, that “False claims from climate sceptics that humans are not responsible for global warming and that sea level is not rising should be scotched by an international court ruling”.

I would love to see any court take up the issue, which would allow us to make defenders of the orthodox view attempt to explain away all the data which demonstrate that global warming and climate change are not driven primarily by carbon dioxide.

The central point

Official alarms and established scientific institutions rely not on empirical data, established facts about temperature and CO2, but on computer models that are demonstrably wrong.

Those of us who believe that science should be empirical, that it should follow the data and change theories accordingly, become speechless in the face of climate-change dogma defended in the manner described above. It would be screamingly funny, if only those who do it were not our own “experts” and official representatives (10). Even the Gods are helpless in the face of such determined ignoring of reality (11).

___________________________________

(1)    For example, chapter 10 in Howard Thomas Brady, Mirrors and Mazes, 2016; ISBN 978-1522814689. For a more general argument that models are incapable of accurately simulating complex natural processes, see O. H. Pilkey & L. Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future, Columbia University Press, 2007
(2)    “40’s to 70’s cooling, CO2 rising?”
(3)    Thomas C. Peterson, William M. Connolley & John Fleck, “The myth of the 1970s global cooling scientific consensus”, Bulletin of the American Meteorological Society, September 2008, 1325-37
(4)    “History rewritten, Global Cooling from 1940 – 1970, an 83% consensus, 285 papers being ‘erased’”; 1970s Global Cooling Scare; 1970s Global Cooling Alarmism
(5)    Climate Change: Evidence & Causes—An Overview from the Royal Society and the U.S. National Academy of Sciences, National Academies Press; ISBN 978-0-309-30199-2
(6)    Relevant bits of (5) are cited in a review, Henry H. Bauer, “Climate-change science or climate-change propaganda?”, Journal of Scientific Exploration, 29 (2015) 621-36
(7)    The 97% consensus on global warming
(8) Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970; Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596–602; Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84-93; Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002
(9)    Dennis Bray, “The scientific consensus of climate change revisited”, Environmental Science & Policy, 13 (2010) 340 –50; see also “The myth of the Climate Change ‘97%’”, Wall Street Journal, 27 May 2014, p. A.13, by Joseph Bast & Roy Spencer
(10) My mother’s frequent repetitions engraved in my mind the German folk-saying, “Wenn der Narr nicht mein wär’, lacht’ ich mit” (roughly, “If the fool were not my own, I would laugh along too”). Google found it in the Deutsches Sprichwörter-Lexikon edited by Karl Friedrich Wilhelm Wander (#997, p. 922)
(11)  “Mit der Dummheit kämpfen Götter selbst vergebens” (“Against stupidity the gods themselves contend in vain”); Friedrich Schiller, Die Jungfrau von Orleans.

 

Posted in consensus, denialism, global warming, media flaws, peer review, resistance to discovery, science is not truth, science policy, scientism, unwarranted dogmatism in science | Tagged: , , | 6 Comments »

Climate-change facts: Temperature is not determined by carbon dioxide

Posted by Henry Bauer on 2017/05/02

The mainstream claims about carbon dioxide, global warming, and climate change, parroted by most media and accepted by most of the world’s governments, are rather straightforward: carbon dioxide released in the burning of “fossil fuels” (chiefly coal and oil) drives global warming because CO2 is a “greenhouse gas”, absorbing heat that would otherwise radiate harmlessly out into space. Since the mid-19th century, when the Industrial Revolution set off this promiscuous releasing of CO2, the Earth has been getting hotter at an unprecedented pace.

The trouble with these claims is that actual data demonstrate that global temperature is not determined by the amount of CO2 in the atmosphere.

For example, during the past 500 million years, CO2 levels have often been much higher than now, including times when global temperatures were lower (1):

“The gray bars at the top … correspond to the periods when the global climate was cool; the intervening white space corresponds to the warm modes … no correspondence between pCO2 and climate is evident …. Superficially, this observation would seem to imply that pCO2 does not exert dominant control on Earth’s climate …. A wealth of evidence, however, suggests that pCO2 exerts at least some control …. [but this Figure] … shows that the ‘null hypothesis’ that pCO2 and climate are unrelated cannot be rejected on the basis of this evidence alone.” [To clarify the convoluted double negative: all the evidence cited in support of mainstream claims is insufficient to overrule what the above Figure shows, namely that CO2 does not determine global temperatures (the “null hypothesis”).]

Again, with temperature levels in quantitative detail (2):

Towards the end of the Precambrian Era, CO2 levels (purple curve) were very much higher than now while temperatures (blue curve) were if anything lower. Over most of the more recent times, CO2 levels have been very much lower while temperatures most of the time were considerably higher.

Moreover, the historical range of temperature fluctuations makes a mockery of contemporary mainstream ambitions to prevent global temperatures rising by as much as 2°C; for most of Earth’s history, temperatures have been about 6°C higher than at present.

Cause precedes effect

The data just cited do not clearly demonstrate whether rising CO2 brings about subsequent rises in temperature — or vice versa. However, ice-core data back as far as 420,000 years do show which comes first: temperature changes are followed by CO2 changes (3):

On average, CO2 rises lag about 800 years behind temperature rises; and CO2 levels also decline slowly after temperatures have fallen.
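To see what “which comes first” means in practice, here is a minimal Python sketch using entirely synthetic data: a made-up “temperature” cycle, a “CO2” series built to follow it with an 800-year delay, and a simple cross-correlation scan to recover that lag. The series, noise level, and sampling interval are illustrative assumptions, not the ice-core data or the published analysis.

```python
import numpy as np

# Synthetic illustration only: a slow "temperature" cycle and a "CO2" series
# that follows it with a fixed delay, mimicking the lead-lag question for ice cores.
rng = np.random.default_rng(0)
step = 100                                   # one data point per century
years = np.arange(0, 400_000, step)
temp = np.sin(2 * np.pi * years / 100_000)   # idealized ~100,000-year glacial cycle
true_lag_years = 800
co2 = np.roll(temp, true_lag_years // step) + 0.1 * rng.standard_normal(len(years))

def best_lag(leader, follower, max_steps=50):
    """Return the lag (in years) at which shifting `leader` best matches `follower`."""
    lags = range(-max_steps, max_steps + 1)
    corrs = [np.corrcoef(np.roll(leader, k), follower)[0, 1] for k in lags]
    return lags[int(np.argmax(corrs))] * step

print("CO2 lags temperature by about", best_lag(temp, co2), "years")
# With these made-up inputs the recovered lag is ~800 years, the kind of
# lead-lag signature the ice-core analyses report.
```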

Since the Industrial Revolution

Over the last 150 years, global temperatures have risen, and levels of CO2 have risen. This period is minuscule by comparison with the historical data summarized above. Crucially, what has happened in this recent sliver of time cannot be compared directly to the past, because the historical data are not fine-grained enough to discern changes over such short periods of time. What is undisputed, however, is that CO2 and temperature have not increased in tandem in this recent era, just as they did not over geological time-spans. From the 1940s until the 1970s, global temperatures were falling, and mainstream experts were telling the mass media that an Ice Age was threatening (4) — at the same time as CO2 levels were continuing their merry rise with fossil fuels being burnt at an ever-increasing rate (5):

“1945 to 1977 cool period with soaring CO2 emissions. Global temperatures began to cool in the mid-1940s at the point when CO2 emissions began to soar … . Global temperatures in the Northern Hemisphere dropped about 0.5°C (0.9°F) from the mid-1940s until 1977 and temperatures globally cooled about 0.2°C (0.4°F) …. Many of the world’s glaciers advanced during this time and recovered a good deal of the ice lost during the 1915–1945 warm period”.

Furthermore (5):

“Global cooling from 1999 to 2009. No global warming has occurred above the 1998 level. In 1998, the PDO [Pacific Decadal Oscillation] was in its warm mode. In 1999, the PDO flipped from its warm mode into its cool mode and satellite imagery confirms that the cool mode has become firmly entrenched since then and global cooling has deepened significantly in the past few years.”

In short:
–> Global temperatures have often been high while CO2 levels were low, and vice versa
–> CO2 levels rise or fall after temperatures have risen or fallen
–> CO2 levels have risen steadily but temperatures decreased between the 1940s and 1970s, and since about 1998 there has been a pause in warming, perhaps even cooling

Quite clearly, CO2 is not the prime driver of global temperature. The data, the facts, about temperature and CO2 demonstrate that something else has far outweighed the influence of CO2 levels in determining temperatures throughout Earth’s history, including since the Industrial Revolution. “Something else” can only be natural forces. And indeed there are a number of known natural forces that affect Earth’s temperature, and many of those forces vary cyclically over time. The amount of energy radiated to Earth by the Sun varies in correlation with the 11-year cycle of sunspots, which is fairly widely known; but there are many other cycles known only to specialists, such as the 9-year Lunisolar Precession cycle. These natural forces have periodically warmed and cooled the Earth in cycles of glaciation and warmth at intervals of roughly 100,000–120,000 years (the Milankovitch Cycles), with a number of other cycles superposed on those (6).
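As a purely illustrative aside, the idea of several regular cycles superposed on one another is easy to visualize with a few lines of Python. The periods and amplitudes below are arbitrary stand-ins chosen for the sketch, not fitted values for the real orbital or solar components.

```python
import numpy as np

# Purely illustrative superposition of a few regular cycles (arbitrary periods
# and amplitudes, NOT fitted orbital parameters): the combined curve rises and
# falls irregularly even though each component on its own is perfectly regular.
years = np.linspace(0, 500_000, 5_001)
cycles = {          # period (years): amplitude (arbitrary units)
    100_000: 1.0,   # dominant glacial-scale cycle
    41_000: 0.4,    # a shorter cycle superposed on it
    23_000: 0.2,    # and a shorter one still
}
combined = sum(amp * np.sin(2 * np.pi * years / period)
               for period, amp in cycles.items())

print("combined signal ranges from", round(float(combined.min()), 2),
      "to", round(float(combined.max()), 2))
```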

So the contemporary mainstream view, the so-called “scientific consensus”, is at odds with the evidence, the facts.

That will seem incredible to many people, who might well ask how that could be possible. How could “science” be so wrong?

In brief: because of facts about science that are not much known outside the ranks of historians and philosophers and sociologists of science (7): that the scientific consensus at any given time on any given matter has been wrong quite often over the years and centuries (8); and that science nowadays has become quite different from our traditional view of it (9).

____________________________________

(1)    Daniel H. Rothman, Proceedings of the National Academy of Sciences of the United States of America, 99 (2002) 4167-71, doi: 10.1073/pnas.022055499
(2)    Nahle Nasif, “Cycles of Global Climate Change”, Biology Cabinet Journal Online, #295 (2007); primary sources of data are listed there
(3)    The 800 year lag in CO2 after temperature – graphed; primary sources are cited there
(4)    History rewritten, Global Cooling from 1940 – 1970, an 83% consensus, 285 papers being “erased”; 1970s Global Cooling Scare; 1970s Global Cooling Alarmism
(5)    Don Easterbrook, “Global warming and CO2 during the past century”
(6)    David Dilley, Natural Climate Pulse, January 2012;
(7)    For example:
What everyone knows is usually wrong (about science, say)
Scientific literacy in one easy lesson
The culture and the cult of science
(8)    For example:
Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596–602
Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84-93
Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002
Science: A Danger for Public Policy?!
(9)   For example:
How Science Has Changed — notably since World War II
The Science Bubble
The business of for-profit “science”
From Dawn to Decadence: The Three Ages of Modern Science

Posted in consensus, global warming, resistance to discovery, science is not truth, science policy, scientific culture, the scientific method, unwarranted dogmatism in science | Tagged: | 2 Comments »

The banality of evil — Psychiatry and ADHD

Posted by Henry Bauer on 2017/04/25

“The banality of evil” is a phrase coined by Hannah Arendt when writing about the trial of Adolf Eichmann who had supervised much of the Holocaust. The phrase has been much misinterpreted and misunderstood. Arendt was pointing to the banality of Eichmann, who “had no motives at all” other than “an extraordinary diligence in looking out for his personal advancement”; he “never realized what he was doing … sheer thoughtlessness … [which] can wreak more havoc than all the evil instincts” (1). There was nothing interesting about Eichmann. Applying Wolfgang Pauli’s phrase, Eichmann was “not even wrong”: one can learn nothing from him other than that evil can result from banality, from thoughtlessness. As Edmund Burke put it, “The only thing necessary for the triumph of evil is for good men to do nothing” — and not thinking is a way of doing nothing.

That train of thought becomes quite uncomfortable with the realization that sheer thoughtlessness nowadays pervades so much of the everyday practices of science, medicine, psychiatry. Research simply — thoughtlessly — accepts contemporary theory as true, and pundits, practitioners, teachers, policy makers all accept the results of research without stopping to think about fundamental issues, about whether the pertinent contemporary theories or paradigms make sense.

Psychiatrists, for example, prescribe Ritalin and other stimulants as treatment for ADHD — Attention-Deficit/Hyperactivity Disorder — without stopping to think about whether ADHD is even “a thing” that can be defined and diagnosed unambiguously (or even at all).

The official manual, which one presumes psychiatrists and psychologists consult when assigning diagnoses, is the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, now (since 2013) in its 5th edition (DSM-5). DSM-5 has been quite widely criticized, including by such prominent psychiatrists as Allen Frances who led the task force for the previous, fourth, edition (2).

Even casual acquaintance with the contents of this supposedly authoritative DSM-5 makes it obvious that criticism is more than called for. In DSM-5, the Diagnostic Criteria for ADHD are set down in five sections, A-E.

A: “A persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development, as characterized by (1) and/or (2):
     1.   Inattention: Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
           Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.     Often fails to give close attention to details or makes careless mistakes in schoolwork, at work, or during other activities (e.g., overlooks or misses details, work is inaccurate)
b.     Often has difficulty sustaining attention in tasks or play activities (e.g., has difficulty remaining focused during lectures, conversations, or lengthy reading).”
and so on through c-i, for a total of nine asserted characteristics of inattention.

Paying even cursory attention to these “criteria” makes plain that they are anything but definitive. Why, for example, are six symptoms required up to age 16 when five are sufficient at 17 years and older? There is nothing clear-cut about “inconsistent with developmental level”, which depends on personal judgment about both the consistency and the level of development. Different people, even different psychiatrists no matter how trained, are likely to judge inconsistently in any given case whether the attention paid (point “a”) is “close” or not. So too with “careless”, “often”, “difficulty”; and so on.

It is if anything even worse with Criteria A(2):

“2.    Hyperactivity and Impulsivity:
Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
       Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.    Often fidgets with or taps hands or feet or squirms in seat.”
and so on through b-i, for again a total of nine supposed characteristics, this time of hyperactivity and impulsivity. There is no need to cite any of those since “a” amply reveals the absurdity of designating as the symptom of a mental disorder a type of behavior that is perfectly normal for the majority of young boys. This “criterion” makes self-explanatory the reported finding that boys are three times more likely than girls to be diagnosed with ADHD, though experts make heavier weather of it by suggesting that sex hormones may be among the unknown causes of ADHD (3).

A(1) and (2) are followed by
“B. Several inattentive or hyperactivity-impulsivity symptoms were present prior to age 12 years.
C. Several inattentive or hyperactivity-impulsivity symptoms are present in two or more settings (e.g., at home, school, or work; with friends or relatives; in other activities).
D. There is clear evidence that the symptoms interfere with, or reduce the quality of, social, academic, or occupational functioning.
E. The symptoms do not occur exclusively during the course of schizophrenia or another psychotic disorder and are not better explained by another mental disorder (e.g., mood disorder, anxiety disorder, dissociative disorder, personality disorder, substance intoxication or withdrawal).”

It should be plain enough that this set of so-called criteria is not based on any definitive empirical data, as a simple thought experiment shows: What clinical (or any other sort of) trial could establish by observation that six symptoms are diagnostic up to age 17 whereas five can be decisive from that age on? What if the decisive symptoms were apparent for only five months rather than six, or for five-and-three-quarters months? How remarkable, too, that “inattention” and “hyperactivity and impulsivity” are both characterized by exactly nine possible symptoms.

Leaving aside the deplorable thoughtlessness of the substantive content of DSM-5, it is also saddening that something published by an authoritative medical society should reflect such carelessness or thoughtlessness in presentation. Competent copy-editing would have helped, for example by eliminating the many instances of “and/or”: “this ungraceful phrase … has no right to intrude in ordinary prose” (4) since just “or” would do nicely; if, for instance, I tell you that I’ll be happy with A or with B, obviously I’ll be perfectly happy also if I get both.
Good writing and proper syntax are not mere niceties; their absence indicates a lack of clear substantive thought about what is being written, as Richard Mitchell (“The Underground Grammarian”) liked to illustrate by quoting Ben Jonson: “Neither can his Mind be thought to be in Tune, whose words do jarre; nor his reason in frame, whose sentence is preposterous”.

At any rate, ADHD is obviously an invented condition that has no clearly measurable characteristics. Assigning that diagnosis to any given individual is an entirely subjective, personal judgment. That this has been done for some large number of individuals strikes me as an illustration of the banality of evil. Countless parents have been told that their children have a mental illness when they are behaving just as children naturally do. Countless children have been fed mind-altering drugs as a consequence of such a diagnosis. Some number have been sent to special schools like Eagle Hill, where annual tuition and fees can add up to $80,000 or more.

Websites offer information that is patently unfounded or wrong, for example:

“Researchers still don’t know the exact cause, but they do know that genes, differences in brain development and some outside factors like prenatal exposure to smoking might play a role. … Researchers looking into the role of genetics in ADHD say it can run in families. If your biological child has ADHD, there’s a one in four chance you have ADHD too, whether it’s been diagnosed or not. … Some external factors affecting brain development have also been linked to ADHD. Prenatal exposure to smoke may increase your child’s risk of developing ADHD. Exposure to high levels of lead as a toddler and preschooler is another possible contributor. … . It’s a brain-based biological condition”.

Those who establish such websites simply follow thoughtlessly, banally, what the professional literature says; and some number of academics strive assiduously to ensure the persistence of this misguided parent-scaring and children-harming. For example, by claiming that certain portions of the brains of ADHD individuals are characteristically smaller:

“Subcortical brain volume differences in participants with attention deficit hyperactivity disorder in children and adults: a cross-sectional mega-analysis” by Martine Hoogman et al., published in Lancet Psychiatry (2017, vol. 4, pp. 310–19). The “et al.” stands for 81 co-authors, 11 of whom declared conflicts of interest with pharmaceutical companies. The conclusions are stated dogmatically: “The data from our highly powered analysis confirm that patients with ADHD do have altered brains and therefore that ADHD is a disorder of the brain. This message is clear for clinicians to convey to parents and patients, which can help to reduce the stigma that ADHD is just a label for difficult children and caused by incompetent parenting. We hope this work will contribute to a better understanding of ADHD in the general public”.

An extensive, detailed critique of this article has been submitted to the journal as a basis for retracting it: “Lancet Psychiatry Needs to Retract the ADHD-Enigma Study” by Michael Corrigan & Robert Whitaker. The critique points to a large number of failings in methodology, including that the data were accumulated from a variety of other studies with no evidence that diagnoses of ADHD were consistent or that controls were properly chosen or available — which ought in itself to have been sufficient reason to refuse publication.

Perhaps worst of all: Nowhere in the article is IQ mentioned; yet the Supplementary Material contains a table revealing that the “ADHD” subjects had on average higher IQ scores than the “normal” controls. “Now the usual assumption is that ADHD children, suffering from a ‘brain disorder,’ are less able to concentrate and focus in school, and thus are cognitively impaired in some way. …. But if the mean IQ score of the ADHD cohort is higher than the mean score for the controls, doesn’t this basic assumption need to be reassessed? If the participants with ADHD have smaller brains that are riddled with ‘altered structures,’ then how come they are just as smart as, or even smarter than, the participants in the control group?”

[The Hoogman et al. article in many places refers to “(appendix)” for details, but the article — which costs $31.50 — does not include an appendix; one must get it separately from the author or the journal.]

As usual, the popular media simply parroted the study’s claims, as illustrated by the headlines cited in the critique.

And so the thoughtless acceptance by the media of anything published in an established, peer-reviewed journal contributes to making this particular evil a banality. The public, including parents of children, are further confirmed in the misguided, unproven, notion that something is wrong with the brains of children who have been designated with a diagnosis that is no more than a highly subjective opinion.

The deficiencies of this article also illustrate why those of us who have published in peer-reviewed journals know how absurd it is to regard “peer review” as any sort of guarantee of quality, or even of minimal standards of competence and honesty. As Richard Horton, himself editor of The Lancet, has noted, “Peer review . . . is simply a way to collect opinions from experts in the field. Peer review tells us about the acceptability, not the credibility, of a new finding” (5).

The critique of the Hoogman article is just one of the valuable pieces at the Mad in America website. I also recommend highly Robert Whitaker’s books, Anatomy of an Epidemic and Mad in America.


(1)  Hannah Arendt, Eichmann in Jerusalem — A Report on the Banality of Evil, Viking Press, 1964 (rev. & enlarged ed.). Quotes are at p. 134 of the PDF available at https://platypus1917.org/wp-content/uploads/2014/01/arendt_eichmanninjerusalem.pdf
(2)  Henry H. Bauer, “The Troubles With Psychiatry — essay review of Saving Normal by Allen Frances and The Book Of Woe by Gary Greenberg”, Journal of Scientific Exploration, 29 (2015) 124-30
(3)  Donald W. Pfaff, Man and Woman: An Inside Story, Oxford University Press, 2010: p. 147
(4)  Modern American Usage (edited & completed by Jacques Barzun et al. from the work of Wilson Follett), Hill & Wang, 1966
(5)  Richard Horton, Health Wars: On the Global Front Lines of Modern Medicine, New York Review Books, 2003, p. 306

 

Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, science is not truth, unwarranted dogmatism in science | Tagged: , , , , | Leave a Comment »

We are being routinely misled about health and diet

Posted by Henry Bauer on 2017/03/24

Most of what the media make a fuss about over health or diet should not be believed.

It should not be believed even when it cites peer-reviewed articles or official guidelines. All too often the claims made are based on misuse of statistics and on an abuse of common sense.

That little rant was set off by a piece in the august New York Times: “Pollution leads to greater risk of dementia among older women, study says”.

Alarms were triggered:
“Older women”: Only among older and not younger? Women but not men?

The original article did not improve my mood:
The pollution actually studied was “fine particulate matter, P.M. 2.5, 2.5 micrometers or smaller in diameter”: What about 2.5 to 3, say? Or 3 to 4? And so on.
“Women with the genetic variant APOE4, which increases the risk of Alzheimer’s disease, were more likely to be affected by high levels of air pollution”:
Is this asserting that there’s synergy? That the combined effect is not just the added effects of the two factors? That pollution is not just an independent risk factor but somehow is more effective with APOE4 carriers? So what about APOE3 or APOE2 carriers?

The New York Times piece mentioned some other studies as well:
“[P]renatal exposure to air pollution could result in children with greater anxiety, depression and attention-span disorders”.
“[A]ir pollution caused more than 5.5 million premature deaths in 2013”.

With those sort of assertions, my mind asks, “How on earth could that be known?”
What sort of study could possibly show that? What sort of data, and how much of it, would be required to justify those claims?

So, with the older women and dementia, how were the observational or experimental subjects (those exposed to the pollution) distinguished from the necessary controls that were not exposed to pollution? Controls need to be just like the experimental subjects (in age, state of health, economic circumstances, etc.) with the sole exception that the latter were exposed to pollution and the controls were not.
For the controls not to be exposed to the pollution, obviously the two groups must be geographically separate. Then what other possibly pertinent factors differed between those geographic regions? How was each of those factors controlled for?

In other words, what’s involved is not some “simple” comparison of polluted and not polluted; there is a whole set of possibly influential factors that need somehow to be controlled for.

The more factors, the larger the needed number of experimental subjects and controls; and the required number of data points increases much more than linearly with the number of variables. Even just that realization should stimulate much skepticism about many of the media-hyped stories about diet or health. Still more skepticism is called for when the claim has to do with lifestyle, since the data then depend on how the subjects recall and describe how they have behaved.
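To make “much more than linearly” concrete, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that each confounder is split into three categories and that every combination of categories needs at least 30 comparable subjects; the function name and numbers are hypothetical, not drawn from the studies discussed here.

```python
# Back-of-the-envelope sketch of why the subjects needed grow much faster than
# linearly with the number of factors. Assumptions (purely illustrative): each
# possible confounder (age band, region, income bracket, ...) is split into
# `levels` categories, and every combination of categories (stratum) needs at
# least `min_per_stratum` comparable people in it.
def subjects_needed(num_confounders: int, levels: int = 3,
                    min_per_stratum: int = 30) -> int:
    strata = levels ** num_confounders      # combinations multiply, not add
    return strata * min_per_stratum

for k in (1, 2, 4, 6, 8):
    print(f"{k} confounders -> at least {subjects_needed(k):,} subjects")
# 1 -> 90, 2 -> 270, 4 -> 2,430, 6 -> 21,870, 8 -> 196,830: quickly far beyond
# the few thousand participants typical of such studies.
```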

The dementia article was published in Translational Psychiatry, an open-access journal from the Nature publishing group. The study had enrolled 3647 women aged between 65 and 79. That is clearly too small a number for all possibly relevant factors to have been controlled for. Many details make that more than a suspicion, for example, “Women in the highest PM2.5 quartile (14.34–22.55 μg m−3) were older (aged ≥75 years); more likely to reside in the South/Midwest and use hormonal treatment; but engage less in physical activities and consume less alcohol, relative to counterparts (all P-values <0.05 …)” — in other words, the highest exposure to pollution was experienced by subjects who differed from controls and from other subjects in several ways besides pollution exposure.

At about the same time as the media were hyping the dementia study, there was also “breaking news” about how eating enough fruit and vegetables protects against death and disease, based on the peer-reviewed article “Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality — a systematic review and dose-response meta-analysis of prospective studies”.

Meta-analysis means combining different studies, the assumption being that the larger amount of primary data can make conclusions stronger and firmer. However, that requires that each of the individual studies being drawn on is sound and that the subjects and circumstances are reasonably comparable in all the different studies. In this case, 95 studies reported in 142 publications were analyzed. Innumerable factors need to be considered — the specific fruit or vegetable (one cannot presume that apples and pears have the same effect, nor cauliflower and carrots); and the effects of different amounts of what is eaten must somehow be taken into account. There are innumerable variables, in other words, permitting considerable skepticism about the claims that “An estimated 5.6 and 7.8 million premature deaths worldwide in 2013 may be attributable to a fruit and vegetable intake below 500 and 800 g/day, respectively, if the observed associations are causal” and that “Fruit and vegetable intakes were associated with reduced risk of cardiovascular disease, cancer and all-cause mortality. These results support public health recommendations to increase fruit and vegetable intake for the prevention of cardiovascular disease, cancer, and premature mortality.” Skepticism is yet more called for since health and mortality are influenced to a great extent by genetics and geography, which were not controlled for.
The authors deserve credit, though, for the clause, “if the observed associations are causal”. What everyone should know about statistics is that correlations, associations, never prove causation. That law is almost universally ignored as the media disseminate press releases and other spin from researchers and their institutions, implying that associations are meaningful about what causes what.
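A tiny simulation makes the correlation-is-not-causation point vivid. Everything in it is invented for illustration: a hidden “health-consciousness” trait drives both vegetable intake and lifespan, so the two end up strongly correlated even though, by construction, one has no effect on the other.

```python
import numpy as np

# Invented data: a hidden "health-consciousness" trait drives BOTH vegetable
# intake and lifespan, so the two correlate strongly even though, in this
# made-up world, vegetable intake has no effect on lifespan at all.
rng = np.random.default_rng(1)
n = 10_000
health_consciousness = rng.standard_normal(n)                 # hidden common cause
veg_grams_per_day = 300 + 80 * health_consciousness + 40 * rng.standard_normal(n)
lifespan_years = 78 + 3 * health_consciousness + 4 * rng.standard_normal(n)

r = np.corrcoef(veg_grams_per_day, lifespan_years)[0, 1]
print(f"correlation between vegetable intake and lifespan: {r:.2f}")
# Prints a sizeable positive correlation (~0.5) purely from the shared cause;
# the association alone cannot distinguish this situation from genuine causation.
```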

It is easy enough to understand why considerable skepticism should be exercised with claims like those about mortality and diet or about dementia and pollution, simply because studies to test these claims properly would need to include much larger numbers of subjects. But an even greater reason to doubt such claims, as well as claims about newly approved drugs and treatments, is that the statistical analyses commonly used are inherently flawed, most particularly by a quite inadequate criterion for statistical significance.

Almost universally in social science and in medical science, statistical significance is defined as p≤0.05: the probability that the results are mere coincidence, owing just to random chance, is less than 5%, in other words less than 1 in 20.

Several things are wrong with that. Among the most serious are:

  1. That something is not a coincidence, not owing to random chance, does not tell us what it is owing to, what the cause is. It is not necessarily the experimenter’s hypothesis, yet that is the assumption made universally with this type of statistical analysis.
  2. 1 in 20 is a very weak criterion. It means that 1 in every 20 “statistically significant” conclusions is wrong. Do 20 studies, and on average one of them will be “statistically significant” even though it is wrong.
  3. That something is statistically significant does not mean that the effect is meaningful.
    For example, after I had a TIA (transient ischemic attack, minor stroke), the neurologist automatically prescribed the “blood thinner” Plavix, clopidogrel, as lessening the risk of further strokes. I am wary of all drugs since they all have “side” effects, so later I searched the literature and found that Plavix is statistically significantly better at decreasing risk than is aspirin, p = 0.043, better than p≤0.05. However, the event rates found were just 5.83% (aspirin) compared to 5.32% (clopidogrel); to my mind, not at all a significant difference, not enough to compensate for the greater risk of “side” effects from clopidogrel than from aspirin, which has been in use for far longer by far more people without discovery of seriously dangerous “side” effects. (Chemicals don’t have two types of effect, main and side, those we want and those we don’t want. “Side” effects are just as real as the intended effects.)
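The gap between “statistically significant” and “practically meaningful” is easy to see with numbers of the kind quoted above. The sketch below takes event rates of 5.83% and 5.32% and an assumed round sample size per arm purely as an illustration; it is not a re-analysis of the actual trial.

```python
import math

# Illustrative numbers only: event rates like those quoted above, and an assumed
# round sample size per arm; this is not the actual trial's data or analysis.
rate_aspirin, rate_clopidogrel = 0.0583, 0.0532
n_per_arm = 18_000                       # assumed; chosen so the toy test lands near p ~ 0.04

arr = rate_aspirin - rate_clopidogrel    # absolute risk reduction
nnt = 1 / arr                            # number needed to treat to prevent one event
print(f"absolute risk reduction: {arr:.2%}   (treat ~{nnt:.0f} patients to prevent one event)")

# Two-proportion z-test: with samples this large, even a half-percentage-point
# difference comes out "statistically significant".
pooled = (rate_aspirin + rate_clopidogrel) / 2
se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
z = arr / se
p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}")
```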

Many statisticians have pointed out for many years what is wrong with the p-value approach to statistics and with its use in social science and in medical science. More than two decades ago, an editorial in the British Medical Journal pointed to “The scandal of poor medical research” [i], with incompetent statistical analysis one of the prime culprits. Matthews [ii] has clearly explained point 1 above. Colquhoun [iii] explains that p ≤ 0.05 leads to wrong conclusions even more often than 1 in 20 times: “If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time”. Gigerenzer [iv] has set out in clear detail the troubles with the commonly used p-value analysis.
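Colquhoun’s point can be reproduced in a few lines. The sketch below assumes, for illustration only, that 10% of the hypotheses being tested are actually true and that studies have 80% power; under those assumptions it counts how many “p < 0.05 discoveries” turn out to be false alarms.

```python
import numpy as np

# Monte Carlo sketch of the false-discovery-rate argument. Assumptions (purely
# illustrative, not from any particular study): 10% of tested hypotheses are
# actually true, studies have 80% power, and the significance cut-off is 0.05.
rng = np.random.default_rng(2)
n_tests = 100_000
prior_true, power, alpha = 0.10, 0.80, 0.05

is_true = rng.random(n_tests) < prior_true
# A real effect is detected with probability `power`; a null effect still
# produces a false alarm with probability `alpha`.
significant = np.where(is_true,
                       rng.random(n_tests) < power,
                       rng.random(n_tests) < alpha)

false_share = (significant & ~is_true).sum() / significant.sum()
print(f"share of 'p < 0.05 discoveries' that are false: {false_share:.0%}")
# Comes out around 36% under these assumptions, in line with Colquhoun's
# "wrong at least 30% of the time".
```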
Nevertheless, this misleading approach continues to be routine, standard, because it is so simple that many researchers who have no real understanding of statistics can use it. Among the consequences are that most published research findings are false [v] and that newly approved drugs have had to be withdrawn sooner and sooner after their initial approval [vi].
Slowly the situation improves as systemic inertia is penetrated by a few initiatives. A newly appointed editor of the journal Basic and Applied Social Psychology (BASP) announced that p-value analyses would no longer be required [vii], and soon after that they were actually banned [viii].

In the meantime, however, tangible damage is being done by continued use of the p-value approach in the testing and approval of prescription drugs, which adds to a variety of deceptive practices routinely employed by the pharmaceutical industry in clinical trials, see for example Ben Goldacre, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (Faber & Faber, 2013); Peter C. Gøtzsche, Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare (Radcliffe, 2013); David Healy, Pharmageddon (University of California Press, 2012). Gøtzsche and Healy report that prescription drugs, even though “properly” used, are the 3rd or 4th leading cause of death in developed countries.

***************************************************************************

[i] D. G. Altman, “The scandal of poor medical research”, BMJ, 308 (1994) 283

[ii] Matthews, R. A. J. 1998. “Facts versus Factions: The use and abuse of subjectivity in scientific research.” European Science and Environment Forum Working Paper; pp. 247-82 in J. Morris (ed.), Rethinking Risk and the Precautionary Principle, Oxford: Butterworth (2000).

[iii] David Colquhoun, “An investigation of the false discovery rate and the misinterpretation of p-values”, Royal Society Open Science, 1 (2014) 140216; http://dx.doi.org/10.1098/rsos.140216

[iv] Gerd Gigerenzer, “Mindless statistics”, Journal of Socio-Economics, 33 (2004) 587-606

[v] John P. A. Ioannidis, “Why most published research findings are false”, PLoS Medicine, 2 (#8, 2005) 696-701; e124

[vi] Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012, Table 5 (p. 240) and text pp. 238-42

[vii] David Trafimow, Editorial, Basic and Applied Social Psychology, 36 (2014) 1-2

[viii] David Trafimow & Michael Marks, Editorial, BASP, 37 (2015) 1-2; comments by the Royal Statistical Society and at https://www.reddit.com/r/statistics/comments/2wy414/social_psychology_journal_bans_null_hypothesis/

Posted in media flaws, medical practices, peer review, prescription drugs, unwarranted dogmatism in science | Tagged: , , , , | Leave a Comment »

Political Correctness in Science

Posted by Henry Bauer on 2017/03/06

Supposedly, science investigates via the scientific method: testing the validity of hunches (hypotheses) against reality and allowing reality to establish beliefs, thereby discarding disproved pre-judgments, hunches, prejudices, biases. Scientific theories are determined by facts, by evidence. Science is empirical, pragmatic; it does not accept beliefs on authority or from tradition.

Historians, philosophers, sociologists, scholars of Science & Technology Studies have long recognized that this view of science is mythical (i), but it continues to be taught in schools and in social-science texts and it is the conventional wisdom found in the media and in public discourse generally. A corollary of the misconception that scientific theories have been successfully tested against reality is the widespread belief that what science says, what the contemporary scientific consensus is, can safely be accepted as truth for all practical purposes.

So it seems incongruous, paradoxical, that large numbers of scientists should disagree violently, on any given issue, over what science really says. Yet that is the case on a seemingly increasing range of topics (ii), some of them of great public import, for instance whether HIV causes AIDS (iii) or whether human-generated carbon dioxide is the prime cause of global warming and climate change. On those latter matters as well as some others, the difference of opinion within the scientific community parallels political views: left-leaning (“liberal”) opinion regards it as unquestionably true that HIV causes AIDS and that human-generated carbon dioxide is the prime cause of global warming and climate change, whereas right-leaning (“conservative”) opinion denies that those assertions constitute “settled science” or have been proved beyond doubt. Those who harbor these “conservative” views are often labeled “denialists”; it is not to be countenanced that politically liberal individuals should be global warming skeptics (iv).

In other words, it is politically incorrect to doubt that HIV causes AIDS or that human-generated carbon dioxide is the prime cause of global warming. It requires no more than cursory observation of public discourse to recognize this pervasive phenomenon. Governments and Nobel-Prize committees illustrate that those beliefs are officially acted on as though they were established truths. One cadre of mainstream scientists even wants criminal charges laid (v) against those who question that global warming is caused primarily by human-generated carbon dioxide. So political correctness is present within the scientific community in the USA.

I’m of a sufficient age to be able to testify that half a century ago it would not have occurred to any researchers in a democratic society to urge the government to prosecute for criminal conspiracy other researchers who disagreed with them. Declaring certain scientific research programs as politically incorrect and therefore substantively without merit, and persecuting those who perpetrated such research, characterized totalitarian regimes, not free societies. Stalin’s Soviet Union declared wrong the rest of the world’s understanding of genetics and imprisoned exponents of it; it also declared wrong the rest of the world’s understanding of chemical bonding and quantum mechanics. Nazism’s Deutsche Physik banned relativity and other “Jewish” science.

**************************************************************

Political correctness holds that HIV causes AIDS and that human-generated carbon dioxide is the prime cause of global warming. Those beliefs also characterize left-leaning opinion. Why is political correctness a left-wing phenomenon?

In contemporary usage, political correctness means “marked by or adhering to a typically progressive orthodoxy on issues involving especially ethnicity, gender, sexual orientation, or ecology” (vi) or “conforming to a belief that language and practices which could offend political sensibilities (as in matters of sex or race) should be eliminated” (vii), evidently “progressive” or “liberal” or Left-ish views. But those descriptions fail to capture the degree of fanatical dogmatism that can lead practicing scientists to urge that those of differing views be criminally prosecuted; political correctness includes the wish to control what everyone believes.

Thus political correctness has been appropriately called “liberal fascism”, which also reveals why it is a phenomenon of the ultra-extreme Left. Attempted control of beliefs and corresponding behavior is openly proclaimed, unashamedly, by the extreme Right; it is called, and calls itself, fascism, Nazism, and needs no other name. But the Left, the “liberals”, claim to stand for and to support individual freedom of belief and speech; so a name is needed for the phenomenon by which proclamations of liberal ideals are coupled with attempts to enforce adherence to particular beliefs and social norms. Political correctness is the hypocrisy of self-proclaimed liberals functioning as authoritarian fascists.

That hypocrisy pervades political correctness, I was able to observe at first hand during my years in academic administration. People say things they don’t mean, and that they know everyone knows they don’t mean, and no one dares point to the absence of the Emperor’s clothes. For instance, the Pooh-Bahs assert that affirmative action means goals and not quotas, even as hiring practices and incentives demonstrate that they are quotas. For innumerable examples gathered over the years, see the newsletter I edited from 1993 until my retirement at the end of 1999 (viii).

********************************************

Science had represented for a long time the virtues associated with honest study of reality. Around the 1930s and 1940s, sociologist Robert Merton could describe the norms evidently governing scientific activity as communal sharing of universally valid observations and conclusions obtained by disinterested people deploying organized skepticism. That description does not accommodate researchers urging criminal prosecution of peers who disagree with them about evidence or conclusions. It does not accommodate researchers lobbying publishers to withdraw articles accepted for publication following normal review; and those norms do not describe the now prevalent circumstances in which one viewpoint suppresses others through refusal to allow publication or participation in scientific meetings (ix).

Science, in other words, is not at all what it used to be, and it is not what the popular view of it is, that common view having been based on what scientific activity used to be. It has not yet been widely recognized how drastically science has changed since about the middle of the 20th century (x). Among the clues indicative of those changes are the spate of books since the 1980s that describe intense self-interested competition in science (xi), and the increasing frequency of fraud, again beginning in about the 1980s, that led to establishment of the federal Office of Research Integrity. That political correctness has surfaced within the scientific community is another illustration of how radically different the circumstances of scientific activity are now, compared to a century ago and by contrast to the outdated conventional wisdom about science.

Political correctness began to pervade society as a whole during the same years as science was undergoing drastic change. The roots of political correctness in society at large may be traceable to the rebellious students of the 1960s, but the hegemony of their ideals in the form of political correctness became obvious only in the 1980s, when the term “political correctness” came into common usage:

The origin of the phrase in modern times is generally credited to gallows humor among Communists in the Stalin era (xii):

“Comrade, your statement is factually incorrect.”
“Yes, it is. But it is politically correct.”

That political correctness is in contemporary times a Left-ish phenomenon is therefore true to its modern origin.

How seriously political correctness corrupts science should be obvious, since it more than breaks all the traditional norms. Those norms are often summarized as universalism, communalism, disinterestedness, skepticism — taking for granted as well simple honesty and absence of hypocrisy. Nowadays what was taken for granted no longer applies. It is simply dishonest to assert that something has been proven beyond doubt when strong contrary evidence exists that is taken seriously by competent researchers. One cannot, of course, look into the minds of those who assert certainty where there is none (xiii), but among possible explanations, hypocrisy may be the least culpable.

Science cannot be isolated from the rest of society, so the incursion of political correctness into science is understandable. Moreover, what used to be the supposedly isolated ivory tower of academe is nowadays the very epicenter where political correctness breeds and from where it spreads. Whatever the causes may be, however, it is important to recognize how science has changed and that it can be corrupted by the same influences as the rest of society.

***************************************************************************

i        Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press 1992; http://www.press.uillinois.edu/books/catalog/77xzw7sp9780252064364.html.

ii       Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland 2012.

iii      Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland 2007.

iv      Henry H. Bauer, “A politically liberal global-warming skeptic?”, 2012/11/25; https://scimedskeptic.wordpress.com/2012/11/25/a-politically-liberal-global-warming-skeptic.

v       Letter to President Obama, Attorney General Lynch, and OSTP Director Holdren, 1 September 2015; http://scienceblogs.com/gregladen/2015/09/19/letter-to-president-obama-investigate-deniers-under-rico.
The original pdf posted in 2015 at http://www.iges.org/letter/LetterPresidentAG.pdf is no longer there. The Wayback Machine says, “The letter that was inadvertently posted on this web site has been removed. It was decided more than two years ago that the Institute of Global Environment and Society (IGES) would be dissolved when the projects then undertaken by IGES would be completed. All research projects by IGES were completed in July 2015, and the IGES web site is in the process of being decommissioned”.
As of March 2017, however, a Google search for “Institute of Global Environment and Society” led to a website with that header, albeit augmented by “COLA”: http://www.m.monsoondata.org/home.html accessed 4 March 2017. Right-leaning Internet sources offer insight into this seeming mystery: http://www.breitbart.com/big-government/2015/09/22/lead-climate-scientist-behind-obamarico-letter-serious-questions-answer/ and http://leftexposed.org/2015/10/institute-of-global-environment-and-society, both accessed 4 March 2017.

vi      http://www.dictionary.com/browse/politically-correct?s=t (accessed 4 March 2017).

vii     https://www.merriam-webster.com/dictionary/politically%20correct (accessed 4 March 2017).

viii    https://web.archive.org/web/20131030115950/http://fbox.vt.edu/faculty/aaup/index4.html.

ix      Ref. ii, especially chapter 3.

x       Henry H. Bauer, “Three stages of modern science”, Journal of Scientific Exploration, 27 (2013) 505-13; https://www.dropbox.com/s/xl6jaldtx3uuz8b/JSE273-3stages.pdf?dl=0.

xi      Natalie Angier, Natural Obsessions: The Search for the Oncogene, Houghton Mifflin 1987; David H. Clark, The Quest for SS433, Viking 1985; Sheldon Glashow with Ben Bova, Interactions: A Journey through the Mind of a Particle Physicist and the Matter of the World, Warner 1988; Jeff Goldberg, Anatomy of a Scientific Discovery, Bantam 1988; Stephen S. Hall, Invisible Frontiers: The Race to Synthesize a Human Gene, Atlantic Monthly Press 1987; Robert M. Hazen, The Breakthrough: The Race for the Superconductor, Summit 1988; David L. Hull, Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science, University of Chicago Press 1988; Robert Kanigel, Apprentice to Genius: The Making of a Scientific Dynasty, Macmillan 1986; Charles E. Levinthal, Messengers of Paradise: Opiates and the Brain, Anchor/Doubleday 1988; Roger Lewin, Bones of Contention: Controversies in the Search for Human Origins, Simon and Schuster 1987; Ed Regis, Who Got Einstein’s Office?: Eccentricity and Genius at the Institute for Advanced Study, Addison-Wesley 1987; Bruce Schechter, The Path of No Resistance: The Story of the Revolution in Superconductivity, Touchstone (Simon and Schuster) 1990; Solomon H. Snyder, Brainstorming: The Science and Politics of Opiate Research, Harvard University Press 1989; Gary Taubes, Nobel Dreams: Power, Deceit, and the Ultimate Experiment, Random House 1986; Robert Teitelman, Gene Dreams: Wall Street, Academia, and the Rise of Biotechnology, Basic Books 1989; Nicholas Wade, The Nobel Duel: Two Scientists’ 21-Year Race to Win the World’s Most Coveted Research Prize, Doubleday 1981.

xii     Jon Miltimore, “The historical origin of ‘political correctness’”, 5 December 2016, http://www.intellectualtakeout.org/blog/historical-origin-political-correctness; Angelo M. Codevilla, “The rise of political correctness”, Claremont Review of Books, Fall 2016, pp. 37-43; http://www.claremont.org/download_pdf.php?file_name=1106Codevilla.pdf.

xiii    Henry H. Bauer, “Shamans of Scientism: Conjuring certainty where there is none”, Journal of Scientific Exploration, 28 (2014) 491-504.

 

Posted in legal considerations, media flaws, politics and science, science is not truth, scientific culture, scientists are human, the scientific method, unwarranted dogmatism in science | Tagged: | Leave a Comment »

Science: A Danger for Public Policy?!

Posted by Henry Bauer on 2017/02/08

Public policies rely on advice and consent from science about an ever wider range of issues (environmental challenges, individual and public health, infrastructure and its safety, military systems). Surely this is unquestionably good, that public policies are increasingly pragmatic through respecting the facts delivered by science?

No. Not necessarily, not always.

The central problem is that science — humankind’s understanding of nature, of the world — doesn’t just deliver facts. Science is perpetually incomplete. On any given question it may not be unequivocal.

The media, the public, policy makers, the legal system all presume that a contemporary consensus in the scientific community can be safely accepted as true for all practical purposes. The trouble is that any contemporary scientific consensus may later prove to have been wrong.

If this assertion seems outlandish — theoretically possible but so unlikely as to be ignorable in practice — it is because the actual history and nature of science are not widely enough understood.

The contemporary scientific consensus has in fact been wrong about many, perhaps even most, of the greatest advances in science: Planck and quanta, Wegener and drifting continents, Mendel and quantitative genetic heredity. The scientific consensus, and the 1976 Nobel Prize, for discovering the viral cause of mad-cow diseases was wrong; and the claim that stomach ulcers are caused by bacteria had been pooh-poohed by the mainstream consensus for some two decades before adherents of the consensus were willing to examine the evidence and then award a Nobel Prize in 2005.

Historical instances of a mistaken scientific consensus have seemingly not affected major public policies in catastrophic ways, although one possible precedent for such unhappy influence may be the consensus that supported the eugenics movement around the 1920s, resulting in enforced sterilization of tens of thousands of people in the USA as recently as the latter half of the 20th century.

Nowadays, though, the influence of science is so pervasive that the danger has become quite tangible that major public policies might be based on a scientific consensus that is at best doubtfully valid and at worst demonstrably wrong.

The possibility that significant public actions might be dictated by an unproven scientific consensus was explicitly articulated by President Eisenhower. His warning against the potential influence of the military-industrial complex is quite often cited, but little cited is another warning he gave in the same speech:

“in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”

That can happen when a contemporary scientific consensus is accepted as practical truth, as settled science. The crucial distinction could hardly be explained more clearly than Michael Crichton did in an invited lecture at CalTech:

“Consensus is invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E=mc2. . . . It would never occur to anyone to speak that way.”

Crichton had in mind the present-day scientific consensus that human-caused generation of carbon dioxide is chiefly responsible for rising global temperatures and associated major climate-change. The fact that there are highly competent public dissenters — including such winners of Nobel Prizes as Ivar Giaever (Physics 1973), Robert Laughlin (Physics 1998), Kary Mullis (Chemistry 1993) — demonstrates that human-caused global warming is a consensus, not the unanimity associated with such “settled science” as the Periodic Table of the chemical elements or that E=mc2.

The proponents of human-caused global warming constitute an effective elite. Since they represent the contemporary consensus, they largely control peer review, research funding, and which research gets published; and they hold important positions in the halls of power of individual nations as well as in such international organizations as the Intergovernmental Panel on Climate Change.

The history of science is unequivocal: Contemporary scientific consensuses have been wrong on some of the most significant issues. Those who determine public policies would do well to seek an impartial comparison and analysis of the substantive claims made both by proponents of a mainstream consensus and by those who claim that the evidence does not prove that consensus to be unquestionably correct.

In absence of an impartial comparative analysis, public discourse and public actions are determined by ideology and not by evidence. “Liberals” assert that the mainstream consensus on global warming equals “science” and anyone who properly respects the environment is supposed to accept this scientific consensus. On the other side, many “conservatives” beg to differ, as when Senator Inhofe flourishes a snowball. One doubts that most proponents of either side could give an accurate summary of the pertinent evidence. That is not a very good way to discuss or to make public policy.

******************************************************************************

This little essay had been offered as an Op-Ed to the Wall Street Journal, the New York Times, the Washington Post, the Los Angeles Times, the Financial Times (London), and USA Today. That it appears here confirms that none of those media stalwarts wanted to use it.

Posted in consensus, global warming, media flaws, politics and science, science is not truth, science policy, scientism, unwarranted dogmatism in science | Tagged: , , , , , , , | 1 Comment »

What to believe? Science is a red herring and a wild-goose chase

Posted by Henry Bauer on 2016/07/24

To be certain about things is reassuring. It allows feelings of safety, security.

For knowledge, for understanding the world, humankind seems to have turned at first to what could be inferred from the spirits of things — the spirits associated with or inherent in everything: in mountains, in trees, in bodies of water. The spirits could be understood, at least partly, because they were similar to people in having emotions and desires.

Eventually — quite recently, only a few thousand years ago — the plurality and hierarchies of spirits and gods yielded to monotheistic religions in most parts of the world. Even more recently, and only in the most powerfully developed countries, religion yielded to science.

That is to say, traditional religion yielded to scientism, the religion of science. Even the monotheistic gods have emotions and desires, but science doesn’t. So knowledge became entirely impersonal, at least in principle.

Nowadays, then, for real certainty we look to science. “Scientific” stands for unquestionably true. Science is the gatekeeper of truth. “Science” and “scientific” are mediators of being certain, being sure about something.

Consequently, a great deal of arguing to-and-fro has to do with whether something is scientific:
Does it emerge from use of the scientific method?
Is it reproducible?
Is it falsifiable?

And if a claim doesn’t satisfy those criteria or equivalent ones then it’s dismissed as not scientific, or as pseudo-science, or as just plain not to be believed.

That’s an indirect way of judging believability, and arguments about whether something is scientific can be and have been highly abstract, complicated, and sophisticated as technical philosophical discourse tends to be.

Instead, why not go directly at the issues of certainty and truth and just ask, what does it take to be justifiably and reliably certain about something?

In any case, although we use science as mediator of certain truth, we’ve also learned that contemporary scientific knowledge and understanding really isn’t always reliably true. Even when an explanation has been based on tangible evidence, and withstood challenges and tests — if it’s properly scientific, in other words — we’ve learned that it may be misleading. Scientific progress with periodic scientific revolutions has continually revealed flaws, deficiencies, errors, in what were for a time the most widely and fully accepted scientific theories.

If something has always happened in the past, can we be certain that it always will happen in the future? We’ve learned that we cannot be quite certain.

When an explanation has always worked in the past, can we be certain that it always will work in the future? We’ve learned that we cannot be quite certain.

When tangible things are sub-divided into their ultimate components, those turned out to be nothing like objects accessible to direct human observation. They do not fit our concepts of particles or energy, although many of their reactions can be calculated using sometimes particle equations and sometimes wave equations. They behave sometimes as though they were locatable, delimited in space-time, and at other times appear to be “non-local”, not so delimited.

In other words, we’ve learned that we cannot get certain and humanly comprehensible understanding of everything about the whole of the natural world. It’s surely time to accept that, that human beings will never attain complete certainty.

That could be liberating. It would make more feasible pragmatic, non-ideological communication and cooperative action — if only we could be rid of the ideologues: the true believers in a religion, including the true believers in scientism, the religion of science. Anyone who claims complete certainty has insufficient warrant for that claim. The world and its behaviors can be known only within degrees of probability. Instead of arguing about whether something is scientific or whether it is true, we ought to be discussing plausibility, likelihood, utility, risk.

Instead of dismissing as pseudo-science the claims that Loch Ness Monsters are real animals, we should be content to say, “Feel free to believe that if the evidence seems to you sufficiently convincing. For my part, I’ll wait until someone shows me an actual specimen or an indubitable bit of one”. And similarly with yetis and other cryptids, and with UFOs, and with all other anomalous or Fortean reports or claims.

Instead of arguing over being for or against vaccination, we should ask for the statistical data of harm possibly caused by each specific vaccine. For instance, since in many countries the chance of becoming infected by polio is less than the risk of contracting polio from the oral vaccine, perhaps official sources might be less dogmatic about enforcing use of that particular vaccine (“Polio vaccines now the #1 cause of polio paralysis”; “Oral polio vaccine-associated paralysis in a child despite previous immunization with inactivated virus”; “Bill Gates’ polio vaccine program caused 47,500 cases of paralysis death“).

And so on. For every drug and every treatment, we should demand that the Food and Drug Administration require data on NNT and NNH — NNT: the number of patients needed to be treated in order that 1 patient benefit, compared with NNH: the number of patients who must receive a drug in order to have 1 patient experience harm [How (not) to measure the efficacy of drugs]. That would go a long way to decreasing the number of people nowadays being killed by prescription drugs, which are the 3rd or 4th leading cause of death in First-World countries (Peter C. Gøtzsche, Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare [Radcliffe, 2013]; David Healy, Pharmageddon [University of California Press, 2012]).
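For concreteness, here is a minimal sketch of the arithmetic behind NNT and NNH. The event rates in it are made up purely for illustration, not taken from any actual trial, and the sketch assumes the simplest textbook definitions: NNT as the reciprocal of the absolute risk reduction, NNH as the reciprocal of the absolute risk increase.

# Minimal sketch: NNT and NNH from trial event rates.
# All rates below are hypothetical, chosen only to show the arithmetic.

def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """NNT = 1 / absolute risk reduction (control rate minus treated rate)."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no measurable benefit: treated rate is not below control rate")
    return 1.0 / arr

def number_needed_to_harm(treated_harm_rate: float, control_harm_rate: float) -> float:
    """NNH = 1 / absolute risk increase (treated harm rate minus control harm rate)."""
    ari = treated_harm_rate - control_harm_rate
    if ari <= 0:
        raise ValueError("no measurable excess harm in the treated group")
    return 1.0 / ari

# Hypothetical trial: 8% of untreated patients have the bad outcome vs 6% of treated;
# 5% of treated patients suffer a side effect vs 1% of untreated.
nnt = number_needed_to_treat(control_event_rate=0.08, treated_event_rate=0.06)
nnh = number_needed_to_harm(treated_harm_rate=0.05, control_harm_rate=0.01)
print(f"NNT = {nnt:.0f} (about {nnt:.0f} patients treated for 1 to benefit)")
print(f"NNH = {nnh:.0f} (about {nnh:.0f} patients treated for 1 to be harmed)")

On those made-up numbers, about 50 patients must be treated for one to benefit while only about 25 need be treated for one to be harmed; publishing exactly that kind of side-by-side comparison is what we could reasonably demand of the FDA.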

We need more data and less dogmatism.

 

 

Posted in medical practices, prescription drugs, science is not truth, unwarranted dogmatism in science | Tagged: , , , , , , | Leave a Comment »

“Dark matter” and dinosaur extinction

Posted by Henry Bauer on 2016/01/06

“Everyone” knows that the collision of an asteroid with Earth damaged the environment so much that the dinosaurs died out and only much smaller creatures survived. Many also know that the impact crater, the Chicxulub crater, has been found beneath the surface near the Yucatan peninsula. Just consult Wikipedia, or Google for more sources.

Except: Google also turns up some reservations, for instance “What really killed the dinosaurs? New challenges to the impact theory” (BBC program).

Several decades ago already, paleontologist Dewey McLean (as well as some other geologists and paleontologists) had made the case that the dinosaur extinction was brought about by climate changes owing largely to the enormous volcanic activity associated with the Deccan Traps (a region in India) —
see Dewey M. McLean, “Impact winter in the global K/T extinctions: no definitive evidence”, pp. 493-503 in Global Biomass Burning: Atmospheric, Climatic, and Biospheric Implications, ed. J. S. Levine, MIT Press, 1991.
(McLean’s somewhat lonely public dissidence is mentioned in my book, Dogmatism in Science and Medicine [McFarland 2012, pp. 97-8]. I knew McLean; we worked at the same university.)

Donald Prothero is also a paleontologist. Recently he posted the following in a book review on amazon.com:
“that the impact at the end of the Cretaceous is the primary cause of the extinction of dinosaurs has been discredited in recent years. . . . the consensus has now swung to the idea that the massive Deccan eruptions in India and Pakistan were far more important to the end-Cretaceous extinctions.”

Prothero’s review is of the book by Lisa Randall, Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe, which postulates the presence in the Milky Way (our galaxy) of a disc of “dark matter” that somehow periodically liberates comets or asteroids that go on to cause periodic extinction events on Earth.
In his amazon.com review, Prothero also debunks the notion that extinctions follow an identifiably periodic pattern.

My own trouble with Randall’s speculation is that “dark matter” is no more than a fudge factor necessary to make Big-Bang cosmology fit the observed facts. There is no shred of direct empirical evidence that “dark matter” exists.
Things just don’t add up in Big-Bang cosmology. Actual observations of quasars and galaxies do not jibe with calculations based on the known force of gravity and on the presumption that redshifts reflect speed relative to Earth (Doppler effect).
There isn’t enough gravity. So “dark matter” was invented to yield that needed extra gravity. “Dark matter” is associated with “dark energy”, for which we have no evidence either.
All this “dark” stuff is supposed to make up more than 90% of the universe, at the same time as “dark” is the euphemism for “we know nothing about it, we just need it to make the equations balance”.
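To make concrete the “not enough gravity” reasoning that dark matter is invoked to rescue, here is a minimal sketch of one standard version of the mainstream argument, the galaxy rotation-curve calculation. The galaxy mass and the “observed” orbital speed in it are made-up round numbers, used only to show the shape of the mismatch, and the sketch treats the visible mass as a single central point, a crude simplification.

# Minimal sketch of the "not enough gravity" argument behind dark matter.
# The visible mass and the observed speed are made-up round numbers;
# the visible mass is treated as a central point mass for simplicity.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # one kiloparsec, m

visible_mass = 1e11 * M_SUN   # hypothetical luminous mass of a galaxy
observed_speed = 220e3        # hypothetical roughly flat measured orbital speed, m/s

def keplerian_speed(radius_m: float, mass_kg: float) -> float:
    """Circular orbital speed expected from Newtonian gravity alone: v = sqrt(G*M/r)."""
    return math.sqrt(G * mass_kg / radius_m)

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    v_predicted = keplerian_speed(r, visible_mass)
    # Mass that would have to lie inside r to sustain the flat observed speed:
    mass_needed = observed_speed ** 2 * r / G
    print(f"r = {r_kpc:2d} kpc: predicted {v_predicted / 1e3:6.1f} km/s, "
          f"observed ~{observed_speed / 1e3:.0f} km/s, "
          f"mass needed = {mass_needed / M_SUN:.1e} solar masses")

On these toy numbers the visible mass can account for speeds in the inner few kiloparsecs but falls farther and farther short at larger radii; that shortfall is the extra gravity for which unseen “dark matter” is postulated, and whether the fix lies in invisible matter or in the underlying assumptions is exactly the point in dispute.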

This collection of science fiction is treated respectfully by the media.

But there is a much simpler explanation for the failure of Big-Bang cosmology to fit the observed facts. There is strong evidence that redshifts of quasars do not always result purely from Doppler effects, that quasars are associated with the creation of new matter which has an inherent redshift:
— see Halton Arp, Quasars, Redshifts and Controversies (Interstellar Media 1987) and Seeing Red: Redshifts, Cosmology and Academic Science (Apeiron 1998); for a summary, see pp. 113-18 in Dogmatism in Science and Medicine.

Which all goes to show, as many others besides me have often remarked, that “What everyone knows is usually wrong (about science, say)”.  On all but the most non-controversial issues, TED talks and Wikipedia entries are among the sources most likely to be wrong, moreover wrong dogmatically, insistently, aggressively, uncompromisingly, as they treat every contemporary (and thereby temporary) mainstream consensus as Gospel truth.

A pervasive problem is that mainstream dogmas are taken as truth by people outside the particular field of knowledge:
Randall is a physicist, so she is not familiar with the range of views among paleontologists and geologists.
On the matter of HIV/AIDS, one finds economists like South African Nicoli Nattrass (The AIDS Conspiracy: Science fights back) and political scientists like Courtney Jung (Lactivism: How feminists and fundamentalists, hippies and yuppies, and physicians and politicians made breastfeeding big business and bad policy) getting the facts totally wrong, even citing mainstream sources incorrectly.
Many social scientists get a whole lot wrong about science, as when Steven Shapin asserted that scientists don’t value their technicians appropriately (p. 142 in Fatal Attractions: The Troubles with Science, Paraview Press 2001).
No one is immune, because we cannot look at the primary evidence on every topic of interest, so we have to decide, more or less by instinct, which mainstream beliefs to accept, at least provisionally, and which to doubt enough that further digging is called for. I went wrong by accepting mainstream views about UFOs and about homosexuality, for example, and I’m probably wrong on some other issues where I haven’t yet woken up to it. But at least I’m aware of the problem. The media, though, apparently are not aware of it, nor are the publishers who put out books like Nattrass’s or Jung’s or Randall’s.

 

Posted in consensus, media flaws, science is not truth, scientism, unwarranted dogmatism in science | Tagged: , , , , , , | 7 Comments »