Skepticism about science and medicine

In search of disinterested science

Archive for March, 2014

Experts, pundits, and the search for trustworthy knowledge

Posted by Henry Bauer on 2014/03/16

Long ago on a family trip by car, after we had gone for some miles in a wrong direction, my 9-year-old daughter noted that “No one’s perfect, not even Daddy”.
That’s worth recalling whenever you hear the word “expert”.

“According to experts” is a favorite media phrase. The citing of unnamed experts is an outright scam. Maybe the reporter had just asked a trusted friend? Or consulted Wikipedia or some other Googled source? “Experts” differ in what they know, in how competent they are, how honest, scrupulous, or publicity-seeking they are — they differ in all the ways that humans can differ from one another, including trustworthiness.

The whole point of citing experts is to justify a claim. But to be able to judge whether an expert is trustworthy, one needs to know who that expert is.

The media’s choices of experts are anything but reassuring. When named experts on subject X appear in the media, in print or in person, it becomes evident that any assistant professor of subject X at any college qualifies for the media designation “expert”, as does any journalist or pundit who has devoted time to subject X. For my part, if I am to take someone’s opinion as worth attending to, I need a lot more information and assurance than that.

Even experts with ostensibly unimpeachable credentials may not inspire trust. On matters of public policy, experts (like others) may express views on technical matters that reflect their political inclinations more than any unbiased view of the technicalities. Thus on atomic energy and nuclear warfare, the indubitably qualified Edward Teller and the equally qualified Robert Oppenheimer delivered directly opposite advice with comparable force and expertise.

Indeed, when it comes to informing the general public or giving advice to policy makers, the most technically expert experts are not typically the best sources. The most lauded experts are those who have gained prominence by achieving something highly significant and original. Almost always, such individuals are headstrong, fiercely driven, egotistical personalities whose unwillingness to listen to others was an important asset in their technical achievements, for the most significant and original advances almost always encounter initial resistance from the mainstream, and it requires great self-confidence, persistence, and not listening to critics to bring novel science into being (Barber, 1961). Though Barber’s analysis is half a century in the past, it remains pertinent: if Townes (1999) had listened to the pleas of his Department Head and other senior physicists at Columbia University, his invention of the maser and laser might well have had to wait for later work by others.

Pundits, journalists, science writers and others may well have focused virtually all their time and effort on subject X and yet get it significantly or even entirely wrong on central points (Bauer, 2012a). Most commonly, they do so because they misguidedly lend unquestioned trust to the most prominent technical experts. And like those experts and everyone else, pundits and journalists and science writers may be so at the mercy of their ideology that they cannot evaluate the evidence in reasonably unbiased fashion. Writings and statements on the subject of human-caused global warming or climate change illustrate that fact as pro and con “experts” wax furious at those with differing views; and the great majority of pundits, journalists and others only consult those experts whose views are congenial to them: Fox News and MSNBC manage to find experts on opposing sides of almost any issue.

Anyone who wants to form a trustworthy opinion on any issue over which “experts” differ must delve into the substantive issues for themselves. As I’ve pointed out elsewhere, one does not need much technical expertise to be able to evaluate the trustworthiness of opposing experts. One can judge appropriately by how well or badly the experts try to explain and justify their opinions, whether they respond substantively to queries or whether they brush them aside haughtily or evade them sneakily. When Robert Gallo is queried about the HIV=AIDS hypothesis, for example, sometimes he hangs up the phone, or perhaps says that everyone agrees with him so he must be right; what he has never done in response to a direct request is to cite publications that supposedly prove that HIV causes AIDS (Bauer, 2007).

So the most prominent, publicly acclaimed “experts” on a given topic cannot be relied on to deliver the most judicious and unbiased advice. They can be useful about the most intricate technical details, the structure and properties of leaves or arteries or retroviruses, but they are not usually willing or able to see the forest for the trees.

Nor of course can anyone else be relied on if you really want to reach an informed and evidence-respecting opinion. No matter how much time and effort anyone may have devoted to an issue over which there is less than 100% consensus, everyone remains fallible for a variety of reasons: conflicts of interest, ideology, lapses of mental acuity, bad luck in not locating important evidence. Not that 100% consensus guarantees trustworthy information either. The whole history of the progress of scientific understanding is marked by milestones that are also gravestones of earlier 100% unquestioned mainstream consensuses. Scientific understanding has progressed via major revolutions in which earlier 100% consensuses were acknowledged to be wanting and were ditched in favor of something different (Kuhn, 1970).

Encyclopedias and other compendia are particularly to be treated with great caution, for they too are drawn up by fallible people. Most commonly, the authors suffer from faith in scientism, namely, “a too uncritically deferential attitude toward science” (Haack, 2013/14): using “science” or “scientific” as an honorific, signifying “to be trusted”; insisting on a clear difference between real science and pseudo-scientific imposters; asserting the existence of an unimpeachable “scientific method”; regarding science as the only source of reliable answers to all possible questions, and denigrating the legitimacy of such non-scientific endeavors as the humanities, the arts, theology.

Nowadays, it is quite difficult to locate individuals who are not to some degree addicted to and misled by scientistic belief — and those most immune to it most often suffer addiction to some comparably intellectually disabling dogmatic faith, say, an erroneously literalist and fundamentalist Islam or Christianity.

The unquestioned benefits that the Internet has brought have been accompanied by a wholesale lack of reliable sourcing. I’ve described from personal experience the lack of fact-checking at Wikipedia and the lack of useful means to correct misinformation — for example, misdating by half a dozen years two events in my career and thereby drawing an absurdly unwarranted conclusion (Bauer, 2008, 2009). Not that this happens only to ordinary folks like me: the Wikipedia entry for Francine Prose, a first-rate writer and novelist, contains a plain error of fact that has not been corrected because she “cannot face the byzantine process apparently required”. An even more famous writer, Philip Roth, had to publish a 2600-word open letter at the New Yorker blog before the self-appointed, anonymous officials at Wikipedia corrected an unwarranted inference about an alleged real-life model for one of Roth’s fictional characters (Prose, 2014).

Anyway: with the Internet, Wikipedia, the Encyclopedia Britannica, the National Academy of Sciences, the World Health Organization, or any other established authority, you believe them unreservedly at your peril. To discover what is most likely to be trustworthy, you need to find and evaluate primary sources for yourself.

Over the years I’ve found the conventional wisdom and the mainstream consensus to be significantly lacking — or just plain wrong — about the Loch Ness Monster (Bauer, 1986); about statins and many other prescription drugs (Bauer 2012b, 2014); about HIV/AIDS (Bauer, 2007) and about the Big Bang and about global warming (Bauer, 2012a); about what gets labeled pseudo-science (Bauer, 2001).

But please don’t take my word for it.
And don’t take anyone else’s word for the opposite, either.
Look at the evidence I cite,
look for other sources of evidence and other interpretations,
and eventually make up your own mind . . .
and don’t hesitate to leave a question open,
awaiting more and better evidence.

——————————————————–

Barber, Bernard, 1961: Resistance by scientists to scientific discovery, Science, 134: 596-602
Bauer, Henry H., 1986: The Enigma of Loch Ness: Making Sense of a Mystery, University of Illinois Press; also Genuine facts about “Nessie”, The Loch Ness “Monster”
Bauer, Henry H., 2001: Science or Pseudoscience: Magnetic Healing, Psychic Phenomena and Other Heterodoxies, University of Illinois Press
Bauer, Henry H., 2007: The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland
Bauer, Henry H., 2008: Defenders of the HIV/AIDS Faith: Why Anonymous?
Bauer, Henry H., 2009: Beware the Internet: Amazon.com “reviews”, Wikipedia, and other sources of misinformation
Bauer, Henry H., 2012a: Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland
Bauer, Henry H., 2012b: Seeking Immortality? Challenging the drug-based medical paradigm, Journal of Scientific Exploration, 26: 867-80
Bauer, Henry H., 2014: Statins: Scandalous new guidelines
Haack, Susan, 2013/14: Six signs of scientism, Skeptical Inquirer; Part 1, 37 #6: 40-5; Part 2, 38 #1: 43-7
Kuhn, Thomas S., 1970: The Structure of Scientific Revolutions, University of Chicago Press
Prose, Francine, 2014: New York Times Book Review, 19 January, p. 27
Townes, Charles H., 1999: How the Laser Happened: Adventures of a Scientist, Oxford University Press

Posted in consensus, media flaws, resistance to discovery, science is not truth, scientism, unwarranted dogmatism in science

Crime pays — if you are a drug company

Posted by Henry Bauer on 2014/03/13

In Crimes of the Drug Industry I listed 11 evils perpetrated routinely by the mainstream pharmaceutical industry (“Big Pharma”).

In the text of that blog post, I also pointed out that “companies regard the fines as a small part of the costs of doing business and they continue their illegal tactics”. Peter Gøtzsche (Deadly Medicines and Organised Crime, Radcliffe, 2013) suggests a solution like the Danish treatment of tax evasion:  penalties three times what the miscreant illegally got away with.

The profits that all the big drug companies make, predominantly through illegal marketing for off-label use, are so huge that fines of billions of dollars are trivial compared to the profits from the illegality. During just the last 5 years, companies have paid fines of up to $3 billion — $3,000,000,000 — without admitting guilt or changing their behavior; see Big Pharma’s Big Fines.

Posted in legal considerations, medical practices, prescription drugs

Statins: Scandalous new guidelines

Posted by Henry Bauer on 2014/03/13

The evidence in the medical-science literature is quite clear. Previous entries on this blog about statins have pointed out that there is no evidence that “high cholesterol” increases mortality [1]; indeed the opposite: low cholesterol increases mortality. Thus there is no evidence that statins are beneficial, but much evidence that statins are harmful.

Contrary to the evidence, official statements and everyday medical practice recommend and prescribe statins to bring cholesterol levels ever lower. The most recent recommendation would have 33 million more Americans taking statins. Dissenting voices will no doubt be ignored just as they have been in the past.

——————————

In November 2013, the American College of Cardiology and the American Heart Association published “2013 ACC/AHA Guideline on the Treatment of Blood Cholesterol to Reduce Atherosclerotic Cardiovascular Risk in Adults”.

The document illustrates quite a few of the things that are so fatally wrong with present-day medicine and science [2]:

Who actually wrote this?
The Guideline bears the stamp of approval of eminent organizations, but it is not revealed who actually developed it. Sixteen “Expert Panel Members” are listed by name as well as five “Methodology Members”, fourteen “Task Force Members”, and a couple of people forming a Subcommittee on Prevention Guidelines.
But who actually did the literature search, deciding what to include and what to leave out of the mass of published material about statins and their benefits and their side effects? In this respect the Guideline is typical of Official Reports, which are all too often self-important and self-serving emanations from august bodies: the individuals actually responsible, the staff who did the work to which the “experts” then lent their names, are not named. There is no independent “peer review” of these documents. There is no obvious substantive reason why they should be regarded as authoritative, and often they are remarkably incompetent on elementary points [3].
Such documents are comparable in their lack of authenticity to the many articles in medical journals that are written by staff of drug companies but published under the names of MDs. (Such “ghost-writing” is described and documented in many of the books on my list of critiques of practices in medicine and science.)

Conflicts of interest
This Guideline is fatally flawed by conflicts of interest [4] — as is almost all the information that comes to practicing physicians, media, public, and policy makers about prescription drugs and medical devices, because it emanates either from the drug and device industries or from sources that are paid by those industries in one way or another. That very much includes the Food and Drug Administration and its advisory committees.

Risk Factors
The literature is chock-full of misrepresentations of data, because mere correlations are called “risk factors” and then treated as though they were actual risks, which commits the elementary blunder of confusing correlation with causation.
In the case of heart disease, these risk factors include C-reactive protein, troponin, overall or LDL cholesterol. These are biomarkers, measurable quantities that are taken to be measures of cardiovascular disease although they are not [5]. Yet these measures guide the prescribing of statins and antihypertensive drugs.
The new Guideline relies on estimates of the 10-year risk of cardiovascular disease and reduces the criterion for treatment from a risk of 20% to 7.5%. That would increase by 33 million the number of Americans on statin treatment [6].

Weasel-Worded Critiques
Competent insiders are fully aware that drugs are prescribed without adequate evidence of efficacy or safety [2]. But it is rare for any of them to do more than publish academic articles or technical books about it, when what is needed is for members of the profession to take responsibility for actions to make things better — as the Hippocratic Oath would demand of them.
One small illustration: John Ioannidis has published devastating analyses of the unreliability of clinical trials and other aspects of the medical-science literature, but he stops short of stating the obvious conclusions about needed action. About the new Guideline he wrote a piece [7] that makes quite clear to any discerning reader that the Guideline is a catastrophe waiting to be put into practice, yet he soft-pedals his own text by writing, “It is uncertain whether this would be one of the greatest achievements or one of the worst disasters of medical history” instead of stating plainly what his analysis clearly shows, namely, that “this would be one of the worst disasters of medical history”.
The Guideline promotes itself as an improvement. In explaining what the improvement is it reveals, no doubt unwittingly, that the practices officially and unreservedly recommended up to now have been quite unjustified: “Treat to target — This strategy has been the most widely used the past 15 years but there are 3 problems with this approach. First, current clinical trial data do not indicate what the target should be. Second, we do not know the magnitude of additional ASCVD risk reduction that would be achieved with one target lower than another. Third, it does not take into account potential adverse effects from multidrug therapy that might be needed to achieve a specific goal” (p. 17). In other words:

It isn’t known what level of cholesterol might be desirable,
and the severity of possible side effects is also not known —
yet despite that, for decades doctors have been prescribing statins
to bring cholesterol to increasingly lower levels.
This might reasonably be described as non-evidence-based medicine
or perhaps systematic malpractice.

Swamping substance with detail
The published Guideline runs to 84 pages. The mind-numbing details serve to distract from the most significant points.
Estimates of risk and benefit are based on invalid measures, namely, biomarkers instead of patient morbidity and mortality, and important assumptions are not stated. For example, “Some worry that a person aged 70 years without other risk factors will receive statin treatment on the basis of age alone. The estimated 10-year risk is still ≥7.5%, a risk threshold for which a reduction in ASCVD risk events has been demonstrated in RCTs [randomized clinical trials]. Most ASCVD events occur after age 70 years, giving individuals >70 years of age the greatest potential for absolute risk reduction” (p. 18).
But there are no actual data about the end results of treatment of people aged >70 because clinical trials do not enroll individuals of that age. As a number of critics have noted, older people are typically prescribed half-a-dozen or more medications, yet there are no data on interactions of those medications or possible synergy of their “side” effects.
Observational studies, however, indicate that cholesterol lower than 180 (mg/dL = 4.65 mmol/L) is associated with significantly higher mortality [8]. Kauffman [9] cites other sources that report similar observations.

Mutually contradicting statements are not acknowledged or explained
“By more accurately identifying higher risk individuals for statin therapy, the Guideline focuses statin therapy on those most likely to benefit” (p. 18) makes it seem as though prescribing is to be more focused, implying more restricted prescribing — whereas the effect would be the opposite: statin prescribing would increase very significantly. By decreasing the risk criterion from 20% to 7.5% (i.e., less risk of heart disease), the Guideline quite clearly makes prescribing less focused, not more.
“The statin RCTs provide the most extensive evidence” (p. 16) — whereas “only 1 approach has been evaluated in multiple RCTs — the use of fixed doses of cholesterol-lowering drugs” (p. 9).
In fact, there is a “lack of data on the long-term follow-up of RCTs >15 years, the safety and ASCVD event reduction when statins are used for periods >10 years” (p. 17): yet most people are prescribed statins for life, which often means significantly longer periods than 10 or 15 years.
It is not even known what lowering cholesterol does in people who actually have atherosclerotic disease: “no data were identified regarding treatment or titration to a specific LDL–C goal in adults with clinical ASCVD” (p. 20).
As cited above, the Guideline acknowledges that it isn’t known what levels of cholesterol might be desirable; yet the Risk Calculator concludes with recommendations for prescribing statins to lower cholesterol.
However, for me (age 82), the recommendation read:

“Not In Statin Benefit Group Due To Age > 75 Years
Before initiating statin therapy, it is reasonable for clinicians and patients
to engage in a discussion which considers the potential
for ASCVD risk reduction benefits and for adverse effects,
for drug-drug interactions, and patient preferences for treatment.”

In other words, there is no evidence that statins are of benefit to people aged >75, yet we are encouraged to discuss with our doctors whether or not to take these drugs of no proven benefit that also carry significant risks of harm. What about “First, do no harm”?

On the other hand, Googling “cardiac risk calculator” might take you to the Pooled Cohort Risk Assessment Equations (also said to be based on the new Guideline), which do not hesitate to assign me a 44.2% 10-year risk, compared to only 24% for “a similar patient with optimal risk factors”, which are said to include total cholesterol of 170 or less (mine is 131), HDL of 50 (mine is 35), not diabetic (I’m not), not a smoker (I haven’t been for 22 years, and smoked <5 cigarettes a day for a decade before that), and not taking medications for hypertension (I don’t), with systolic blood pressure (BP) of 110 (mine is typically 165 or less when not too active or stressed). BP increases normally with age, and published data give an average of 153 for my age of 82 [10], so claiming 110 as “optimal” is something of a flight of fancy. Admittedly, the calculator warns that the result of 44.2% may be “less accurate” because it substituted its maximum age of 79 for the actual 82.
My jaundiced view of all this is not lessened by the absurdity of “44.2” for this sort of estimate: no competent analyst would allow a number like that to be presented instead of “about 45” or, better, “a bit less than 50”, given all the assumptions and uncertainties built into the calculation.
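The point about spurious precision can be put in a few lines of Python. This is a hypothetical sketch; the 5-percentage-point rounding step is my own assumption about what a sensible reporting convention might look like, not anything the calculator offers:

```python
# Hypothetical sketch: given the uncertainties in a risk calculation,
# report a coarse figure rather than a spuriously precise one.
def report_risk(raw_percent: float, step: float = 5.0) -> str:
    """Round a risk estimate to the nearest `step` percentage points."""
    rounded = step * round(raw_percent / step)
    return f"about {rounded:.0f}%"

print(report_risk(44.2))  # "about 45%" rather than the pseudo-precise "44.2%"
```

The coarse step is the honest signal: it tells the reader how much (or how little) the inputs can actually support.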
Given that the Risk Calculator’s whole purpose is to guide administration of cholesterol-lowering statins, it seems incongruous to read that “The panel makes no recommendations for or against specific LDL–C or non-HDL–C targets for the primary or secondary prevention of ASCVD” (p. 22, Table 4).

Experts, who they are, and what their role should be in making policy
Almost all the signatories to the new Guideline are MDs. But the significant expertise here is the understanding of published results whose validity depends on proper experimental or observational protocols and proper application of statistics. The Guideline would deserve more respect if it had been developed by biostatisticians with no connection to drug companies and without other conflicts of interest.
It is worth remembering that, as George Bernard Shaw wrote, “all professions are a conspiracy against the laity”; and just as war is too important to be left to the generals, so policies about medicine and science are too important to be left to the practitioners in those fields: they should be listened to, cross-examined, disbarred for conflicts of interest, but they should not be allowed to decide on public policies and issue recommendations.

A few sane voices
The Guideline was immediately and properly criticized by a few insiders and observers: “statins have no overall health benefit in this population [risk criterion of 7.5%] . . . . [because] Lifestyle factors — including lack of exercise, tobacco use, and unhealthy diet — account for 80% of cardiovascular disease . . . . [and] side effects of statins — including muscle symptoms, increased risk of diabetes (especially in women), liver inflammation, cataracts, decreased energy, sexual dysfunction, and exertional fatigue — occur in about 20% of people” [11]; “2% of individuals treated with statins will develop diabetes and 10% will have muscle damage”, and that harm is not balanced by the estimated benefits — “98% will see no benefit; 1.6% will be spared a heart attack and 0.4% a stroke — and importantly, there will be no difference in overall mortality” [4; emphasis added].
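The quoted figures can be put on a common footing per 100 people treated; a back-of-envelope sketch using only the percentages cited in [4]:

```python
# Back-of-envelope arithmetic per 100 people treated with statins
# at the 7.5% risk threshold, using the percentages quoted above [4].
treated = 100
spared_heart_attack = 0.016 * treated  # 1.6%
spared_stroke       = 0.004 * treated  # 0.4%
new_diabetes        = 0.02  * treated  # 2%
muscle_damage       = 0.10  * treated  # 10%

benefited = spared_heart_attack + spared_stroke  # about 2 per 100
harmed    = new_diabetes + muscle_damage         # about 12 per 100
print(f"per 100 treated: {benefited:.0f} benefited, {harmed:.0f} harmed")
```

Roughly 2 benefited against 12 harmed, with (per the quotation above) no difference in overall mortality.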
The Risk Calculator overestimates by 75-150%. Barbara Roberts, a cardiologist, said that the new guidelines are “a big kiss to big Pharma. . . . According to the new risk calculator all African American men aged 65 and up with normal blood pressure and normal cholesterol levels should be on statins. That’s an outrage and is unsupported by clinical evidence” [4].

Unfortunately, if history is any guide, the voices of evidence and sanity will be ignored.

——————————————————————

[1] In addition to the entries on my blog: “plasma total cholesterol levels poorly discriminate risk for coronary heart disease: 35 percent of CHD occurs among individuals with below-average levels of total cholesterol” — p. 143 in reference [3]
[2] Critiques of contemporary science and medicine
[3] “Official reports are not scientific publications”: chapter 8, pp. 196-213 in Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland 2012
[4] Jeanne Lenzer, “Majority of panelists on controversial new cholesterol guideline have current or recent ties to drug manufacturers”, British Medical Journal, 347 (2013) f6989 doi: 10.1136/bmj.f6989
[5] Evaluation of Biomarkers and Surrogate Endpoints in Chronic Disease, Institute of Medicine, 2010
[6] “Statins: new US guideline sparks controversy”, The Lancet, 382 (2013) 1680
[7] John P. A. Ioannidis, More than a billion people taking statins? Potential implications of the new cardiovascular guidelines, JAMA, 311 (2014) 463-4
[8] Schatz et al., “Cholesterol and all-cause mortality in elderly people . . .”, Lancet, 358 (2001) 351-5
[9] Joel M. Kauffman, Malignant Medical Myths: Why medical treatment causes 200,000 deaths in the USA each year, and how to protect yourself, Infinity Publishing, 2006; ISBN 0-7414-2909-8
[10] “Hypertension”: An illness that isn’t illness
[11] Abramson et al., “Should people at low risk of cardiovascular disease take a statin?”, British Medical Journal, 347 (2013) f6123 doi: 10.1136/bmj.f6123

Posted in medical practices, peer review, prescription drugs

How many does it take?

Posted by Henry Bauer on 2014/03/02

To the question, “How many …. does it take to change a light bulb?”, there are innumerable answers, a few of them even good ones.

Recent events prompt me to paraphrase:

“How many climate models does it take to get it right?”

After all, the last 15-20 years have seen almost no global warming despite the continuing increase in what is supposed to be the primary influence, the concentration of carbon dioxide in the atmosphere; see for instance “Climate scientist: 73 UN climate models wrong, no global warming in 17 years”.

Unsurprisingly, gurus and groupies of the hypothesis of human-caused global warming (AGW, anthropogenic global warming) have come up with all sorts of reasons why this recent lack of warming doesn’t disprove their hypothesis, for example [1]:
“The biggest mystery in climate science today may have begun, unbeknownst to anybody at the time, with a subtle weakening of the tropical trade winds blowing across the Pacific Ocean in late 1997. …. average atmospheric temperatures have risen little since 1998, in seeming defiance of projections of climate models and the ever-increasing emissions of greenhouse gases. . . . Climate sceptics have seized on the temperature trends as evidence that global warming has ground to a halt. . . . Climate scientists, meanwhile, know that heat must still be building up somewhere in the climate system, but they have struggled to explain where it is going, if not into the atmosphere. Some have begun to wonder whether there is something amiss in their models…. That has led sceptics — and some scientists — to the controversial conclusion that the models might be overestimating the effect of greenhouse gases” .

The only correct answer, of course, to “How many climate models does it take to get it right?”, is that it takes either none or an infinite number of climate models to get it right, because it is impossible for any number or array of computers to take into account all the variables and their interactions including feedbacks both positive and negative.
Models are research tools. Modelers try to find variables that combine to deliver results that mimic what is actually observed. The only test is against the real world. The only available data are what has happened up to the present. But it is elementary that the past is no guarantee of the future when it comes to human knowledge: not when it comes to believing that all swans are white, or that any given mutual fund outperforms all others, or anything else, including climate models — there is no guarantee that unknown or neglected variables will not become significant in the future. The past can be a fairly reliable guide only empirically, extrapolating actual real-world events, not merely human interpretation of or theories about those events: we can be fairly confident that the sun will (appear to) rise regularly in the east every 24 hours (or so), and that the succession of ice ages and warm periods experienced by the Earth will continue their cycles at about the same intervals (~150,000 years during the most recent million years).

Climate models are no more than research tools. They are inherently, inevitably, incapable of making reliable forecasts (recall always Michael Crichton’s wise words on consensus and prophecy [2]).

Most of the arguing over the significance of the lack of atmospheric warming in the last couple of decades has been beside the point: arguments over whether it shows that long-term human-caused global warming (AGW) is actually occurring or not. A few moments of thought suffice to conclude that a couple of decades is insufficient to decide that. A mere smidgen of knowledge of uncontroversial historical data suffices to recognize that global warming of about 5-6°C over the next 75,000 years or so is predictable, since the Earth is just emerging from the last Ice Age, of which there have been 7 or 8 in the last million years. One of the obvious points against all current climate models is that the causes of these cycles are not understood and are therefore missing from the models.

The real point is that, since all the models have been wrong for the last couple of decades, the models are faulty: “the most important point: the climate models that governments base policy decisions on have failed miserably” [3].
All official climate models have been definitively discredited. It follows that their predictions are not worth attending to.

How many Internet pundits does it take before one finds a reliable opinion?

I’ve remarked before on the pervasive unreliability of Internet stuff like Wikipedia [4] or Facebook [5]. There are only a few useful sources among the mass of rants by people who don’t know what they’re talking about but who parrot mainstream views as though they were Gospel Truth. So, for example, “Skeptical Science” had this to say [6]:
“Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened. If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future”.
Utterly, fundamentally, indubitably wrong on one of the most elementary points about models and what past performance cannot with assurance say about the future. Models are constructed by using past data, so of course they “predict” what happened in the past.
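That elementary point, that a model tuned to past data will of course reproduce the past, can be sketched in a few lines. This is a toy curve-fit on invented numbers, not a climate model:

```python
import numpy as np

# Six "past" observations of an invented wiggly series.
years = np.arange(6.0)
observed = 0.1 * years + np.sin(years)

# A degree-5 polynomial has enough parameters to pass through all six
# past points, so the hindcast is essentially perfect by construction.
coeffs = np.polyfit(years, observed, deg=5)
hindcast_error = np.max(np.abs(np.polyval(coeffs, years) - observed))

# The same model extrapolated to "year 10" misses the true value badly.
truth_at_10 = 0.1 * 10 + np.sin(10)
forecast_error = abs(np.polyval(coeffs, 10.0) - truth_at_10)

print(hindcast_error)  # tiny: fitting the past proves nothing
print(forecast_error)  # large: forecasting is another matter entirely
```

Reproducing the calibration data is a property of the fitting procedure, not evidence of forecasting skill; only genuinely out-of-sample predictions test a model.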
This particular pundit flaunts apparent expert status — “Skeptical Science is maintained by . . . the Climate Communication Fellow for the Global Change Institute at the University of Queensland” — while also disclaiming any authoritative standing: “There is no funding to maintain Skeptical Science other than Paypal donations — it’s run at personal expense. . . . [and] has no affiliations with any organisations or political groups. Skeptical Science is strictly a labour of love”.

The correct answer, of course, to “How many Internet pundits does it take before one finds a reliable opinion?”, is that unless one already knows a lot about a subject, the Internet is far more likely to mislead than to give reliable guidance.

How many Science Advisers does it take to deliver reliable advice?

It takes only one, provided it’s someone who understands what they’re doing.

Unfortunately, Presidential Science Advisers have so far always been scientists [7] with an inadequate understanding of the history of science and of the nature of scientific activity, and who therefore characteristically overestimate the trustworthiness of whatever the prevailing mainstream consensus happens to be. History is perfectly clear that science has always progressed by finding flaws in the mainstream consensus, modifying it or even overturning it completely [8]. The greatest achievements that we honor in retrospect were contrary to their contemporary mainstream consensus and were resisted, often fiercely and sometimes viciously, when they were first proposed [9]. To be potentially effective, science policy would need to be deeply informed by the maturing body of scholarship in Science & Technology Studies (see The progress of science and implications for Science Studies and for science policy; and A consumer’s guide to Science Studies [large file, takes a minute or more to download]).

Roger Pielke, Jr., has written soundly and sensibly of the proper role of scientists toward policy making: they should be honest brokers [10], delivering to decision makers the most unbiased, well informed, judicious summary of all the understanding and insight reflected in the various and often differing views of competent researchers.

The current Presidential Science Advisor is scandalously lacking in those desiderata: John Holdren’s epic fail.

How much contradictory data does it take to change a mainstream consensus?

It takes even more now than it used to in the past. Max Planck (Nobel Prize in Physics, 1918, for quantum theory) is inevitably cited in this connection for the insight that new theories become accepted not by convincing the mainstream but only as the old-timers pass away and a new generation takes over; science advances, in other words, one mainstreamer funeral at a time. Nowadays, outside interests have become so vested in scientific issues that it will take something like a social or political revolution to displace hypotheses like human-caused global warming [11].

———————————————————————–
[1] Jeff Tollefson, Climate change: The case of the missing heat — Sixteen years into the mysterious ‘global-warming hiatus’, scientists are piecing together an explanation, 15 January 2014; Nature 505: 276-8 ; doi:10.1038/505276a
[2] Michael Crichton, Aliens cause global warming, Caltech Michelin Lecture, 17 January 2003; also in Three speeches by Michael Crichton
[3] 95% of Climate models agree: The observations must be wrong 
[4] Beware the Internet: Amazon.com “reviews”, Wikipedia, and other sources of misinformation;  The Fairy-Tale Cult of Wikipedia;  Another horror story about Wikipedia;  The unqualified (= without qualifications) gurus of Wikipedia;  Lowest common denominator — Wikipedia and its ilk
[5] Facebook: As bad as Wikipedia, or worse?
[6] Getting Skeptical about global warming skepticism — How reliable are climate models?
[7] Pp. 37-8 in Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method, 1992
[8] Thomas S. Kuhn, The Structure of Scientific Revolutions, 1970
[9] Bernard Barber, Resistance by scientists to scientific discovery, Science, 134 (1961) 596-602
[10] Roger A. Pielke, Jr., The Honest Broker: Making Sense of Science in Policy and Politics, Cambridge : Cambridge University Press, 2007
[11] Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, 2012

Posted in global warming, politics and science, resistance to discovery, science is not truth, science policy | Tagged: , , | 3 Comments »