Skepticism about science and medicine

In search of disinterested science

All vaccines are not the same; some are worse than useless

Posted by Henry Bauer on 2015/07/02

I am not among those who question the value of all vaccines on principle. I don’t doubt the value of vaccines in controlling smallpox, measles, polio. I do question the use of adjuvants and preservatives in vaccines, and I do think it makes sense to vaccinate babies against measles and the rest in single shots administered over a period of time instead of all at once in multiple vaccines.

But it becomes difficult not to over-react as Big Pharma concentrates on generating vaccines that do more harm than any good that has ever been demonstrated for them.

It seems that Big Pharma has been running out of new diseases to invent (see Moynihan & Cassels, Selling Sickness: How the World’s Biggest Pharmaceutical Companies Are Turning Us All Into Patients and other works listed in “What’s Wrong with Present-Day Medicine”) and has been turning increasingly to inventing vaccines supposed to guard against old or new infections.

The expected but not forthcoming “swine flu” epidemic led to the rapid invention and marketing of a vaccine that turned out to have nasty “side” effects; see, for example, “How a swine flu shot led to narcolepsy”.

Gardasil and Cervarix, anti-HPV vaccines claimed to prevent cervical cancer, are a scandalous illustration; see for example “Merck Dr. Exposes Gardasil as Ineffective, Deadly, Very Profitable” and related links. The only suggestion that HPV causes cervical cancer — or rather, that 4 strains, out of four or five times that number of implicated strains of HPV, cause cervical cancer — comes from a correlation: those strains have often been found in women who have cervical cancer.

But correlations never, never, never prove causation, no matter that too many medical “experts” ignore this well-established, long-established fact.

I’ve become all too cynical about Big Pharma, lack of regulation, conflicts of interest, and the like. Yet I was taken aback to find that the National Institutes of Health profit from royalties from sales of Gardasil, and that there are exemptions to the Freedom of Information Act that enable them to hide that fact and the amounts involved.


Who looks at evidence? Almost no one

Posted by Henry Bauer on 2015/06/28

 

I’ve been a crank for a long time about Loch Ness Monsters, frustrated because I can’t get people to look at Tim Dinsdale’s 1960 film which shows quite clearly a huge animal swimming in Loch Ness, submerging while still throwing up a massive wake.

For more than a decade, I’ve been a crank about HIV not causing AIDS, frustrated because I can’t get people to look at the clear evidence that HIV tests don’t track something infectious, and that the numbers in plain sight on the website of the Centers for Disease Control & Prevention, rates of sexual transmission at less than 1 per 1000 acts of unprotected intercourse, mean that HIV cannot cause an epidemic.

Now I’ve become a crank about human-caused climate change, frustrated because people won’t look at the clear evidence that carbon dioxide has been increasing steadily even as the global temperature was level or dropping from the 1940s into the 1970s, when the experts were predicting an Ice Age; and as the global temperature has not increased since the end of the 1990s.

Why don’t people look at evidence?

Because, I’ve finally realized, they don’t want to risk having to change their mind. There is no positive incentive and plenty of negative incentive. It’s beyond cognitive dissonance, which is to evade the significance of evidence after having come across it. It’s obviously even better not to have come across the evidence at all.

On human-caused climate change (HCCC), disbelief is expressed loudly and publicly by “conservatives” (in my view more accurately described as reactionaries) who hold that opinion for the wrong reasons, namely the belief that economic free markets are the most important thing and that regulating anything is bad. They don’t look at the evidence because they don’t need to; it is of no interest to them; they would take this stance no matter what. And by never looking at the evidence you maintain perfect deniability: you are blameless, you were just accepting what the authorities, the experts, have been saying loudly and incessantly.

Most of my family and friends treat my “reactionary” stance on HCCC as a minor flaw, allowing me space because I tend to get caught up in Quixotic stuff all the time. They have no interest in looking at the evidence because they are completely comfortable with the notion of HCCC: it fits their anti-reactionary political views — which I happen to share. If it turned out that HCCC is mistaken, there would be all sorts of undesirable consequences, in particular that reactionary views might appear to have been vindicated.

I was distressed when Stephen Colbert took HCCC as proven. I am not happy when all the MSNBC crowd does so, but they’ve become too extreme for me anyway and I rarely watch. But I was very unhappy when Jon Stewart took HCCC as proven. And Pope Francis may have been the last straw (or a straw in the wind, as far as ever changing public opinion goes). Though I did get a sort of sardonic enjoyment from the pundits who pointed out that the Pope knew what he was talking about because he had been a chemist. And I am getting continuing Schadenfreude over the contortions of the Republican presidential candidates as they are forced to comment on the Pope’s encyclical.

Evidence-seeking, I realize, is an obsession of perhaps the tiniest minority there is. On the dangers of modern medical practice, there are just a few dozen voices crying out publicly in the wilderness. On HIV/AIDS, there is our Rethinking AIDS group of some dozens of people, with a few thousand more quietly agreeing. On HCCC, there are a few academic types like myself who got here because of the evidence, and who subsist uncomfortably in association with people whose political and social views we do not share, to put it mildly.

I’m beginning to accept that none of the items in my bucket list will see the light of an enlightened day within my lifetime: Nessie discovery, rejection of HIV=AIDS, rejection of carbon-dioxide-is-hurting-us.

But I do remain curious about how the “authorities” will adjust when reality eventually catches up with them irrevocably.

 


Freeman Dyson on climate change

Posted by Henry Bauer on 2015/05/13

Freeman Dyson is an eminent, widely honored and respected physicist. In the New York Times Book Review of 19 April 2015, he says this:

On climate science, I recommend “Cool It: The Skeptical Environmentalist’s Guide to Global Warming,” by Bjørn Lomborg. . . . Lomborg is an economist . . . [and skeptic] with understanding and respect for the beliefs . . . [he is] questioning. The reason why climate science is controversial is that it is both a science and a religion. Belief is strong, even when scientific evidence is weak.


Climate-change beliefs are politically and not scientifically determined

Posted by Henry Bauer on 2015/05/09

I had inadvertently posted this on my HIV/AIDS blog:

It’s nice when elaborately technical academic discourse supports what one already knew.

I had pointed out (A politically liberal global-warming skeptic?) that Fox News and its devotees (Republicans, conservatives, political right-leaners) deny that human-generated carbon dioxide has been proven to cause global warming (later morphed to unfalsifiable “climate change”) whereas MSNBC and its devotees (Democrats, liberals, political left-leaners) take as settled science that human-generated carbon dioxide has caused climate change including an increased rate of exceptional events.

That observational fact has now been scientifically re-proven by experts in cognition, decision-making, law, psychology: “The polarizing impact of science literacy and numeracy on perceived climate change risks”, Nature Climate Change, 2 (2012) 732-5. The estimation of risk from climate change correlated positively with cultural or political world-views but negatively with scientific literacy and numeracy.

The experts concluded with another finding that everyone should already have known, namely, that facts don’t persuade people: “One aim of science communication, we submit, should be to dispel this tragedy of the risk-perception commons . . . . A communication strategy that focuses only on transmission of sound scientific information, our results suggest, is unlikely to do that. As worthwhile as it would be, simply improving the clarity of scientific information will not dispel public conflict so long as the climate-change debate continues to feature cultural meanings that divide citizens of opposing world-views.” “Members of the public with the highest degrees of science literacy and technical reasoning capacity were not the most concerned about climate change”.

The authors of this study accepted as given that human-generated carbon dioxide has been proven to cause global warming, climate change, and disastrous corollaries for our way of life. They presume that the risk is real AND MAJOR, and that estimates of it differ only as a result of perceptions, which of course are influenced by world-views. But if this is not accepted as axiomatic, then their findings can be interpreted in a much more straightforward way:

The more one knows and understands about science, and the greater one’s numeracy and scientific literacy, the more one is able to recognize that human-generated carbon dioxide has NOT been proven to cause global warming.

Thus even a political liberal like myself becomes unwilling to accept that the risk from human activities is significant once he looks into the actual evidence. The prevailing belief in human-caused climate change comes from cherry-picking and misinterpreting historical data, in particular the time periods being considered, together with besotted infatuation with and obeisance to computer models — forgetting Computer Science 101: GIGO, Garbage In, Garbage Out.


“Cold fusion” never disproved, lives on under other names

Posted by Henry Bauer on 2015/03/29

“Cold fusion” began in 1989 as a claim that fusion of deuterium could be accomplished at room temperatures in electrochemical cells using palladium electrodes. The claim was quickly dismissed after quick and dirty attempts at replication, but hundreds of researchers have continued to look into that and similar systems, including activation by sound energy or lasers. Further claims of nuclear transformations followed, and the field is now being pursued under other names: ‘condensed matter nuclear science (CMNS)’;  ‘low energy nuclear reactions (LENR)’; ‘chemically assisted nuclear reactions (CANR)’; ‘lattice assisted nuclear reactions (LANR)’.

There is a dedicated professional society, the International Society for Condensed Matter Nuclear Science (www.iscmns.org), and a journal, the Electronic Journal of Condensed Matter Nuclear Science (http://iscmns.org/CMNS/publications.htm).

For an up-to-date review of the field, see Current Science 108 #4, pp. 491-659, freely available at http://www.currentscience.ac.in/php/toc.php?vol=108&issue=04.

 


Loch Ness Monsters

Posted by Henry Bauer on 2015/03/13

A book about “the Loch Ness Monster” by a man (Tim Dinsdale) who had filmed the back of a large creature swimming in Loch Ness had aroused my interest in 1961: Could the Loch Ness Monster be a real animal after all?

I was disappointed that I could find no authoritative discussion of the possibility in the popular or scientific literature. Encyclopedias had no more than a paragraph or two. On the other hand, Dinsdale’s book cited several earlier works, by Rupert Gould and by Constance Whyte, both of whom had quite impressive credentials. Why would science have nothing to say about a topic of such wide public interest?

That curiosity led me eventually to change my academic field from chemistry to science studies, with interest especially in scientific unorthodoxies. But I’ve kept my interest in Loch Ness, which remains an unexplained mystery. I’ve detailed elsewhere what my “belief” about Nessies actually is (Henry Bauer and the Loch Ness monsters).

Some of the most objective and compelling evidence for the existence of these creatures comes from sonar (“The Case for the Loch Ness Monster: The Scientific Evidence”, Journal of Scientific Exploration, 16(2): 225–246 [2002]) and a few underwater photos taken simultaneously with sonar echoes, but such technical stuff is less subjectively convincing than “seeing with one’s own eyes”. For the latter, there is no substitute for the film taken by Tim Dinsdale in 1960. Recently Tim’s son Angus published a book, The Man Who Filmed Ness: Tim Dinsdale and the Enigma of Loch Ness, whose website contains a link that enables anyone to see the film itself on-line. Grainy as the film is, small as the Nessie’s back may seem at the range of a mile, you need to know only one thing to judge its significance:

The most determined debunkers, of whom there have been quite a few, have been able to suggest only one alternative to this being a film of a large unidentified creature, of a species far larger than anything known to be in Loch Ness: that what seems to be a black hump, curved in cross-section and length, which submerges but continues to throw up a massive wake, is actually a boat with an outboard motor. Several magnified and computer-enhanced frames of the massive wake on my website show quite clearly that nothing material is visible above the wake after the hump has submerged.

If the most dedicated “skeptics” can offer no better explanation than this, then I feel justified in believing that Dinsdale filmed a genuine Nessie.
It reminds me of the Christian apologist, I think probably G. K. Chesterton or Malcolm Muggeridge, who remarked that the best argument for the truth of Christianity is the attempts by disbelievers to discredit it.
If there is one thing that the hump filmed by Dinsdale is certainly NOT, it’s a boat with an outboard motor.


How (not) to measure the efficacy of drugs

Posted by Henry Bauer on 2015/02/19

Innumerable books and articles have described the flaws of contemporary drug-based medicine, notably the way drugs are approved: the Food and Drug Administration requires only 2 successful trials of 6 months’ duration — even if there have been many unsuccessful trials as well. Accordingly, drugs have had to be withdrawn from the market because of their toxicity sooner and sooner after their initial approval (p. 238 ff. in Dogmatism in Science and Medicine, McFarland 2012). It is becoming quite common to see a drug being advertised by its manufacturer at the same time as a law firm is canvassing for patients harmed by the drug to join a class-action suit (today, for example, with Xarelto, approved in 2008 and for extended uses in 2011).

Not widely noted or understood is that the statistical criterion for efficacy of a drug is inappropriate. What concerns patients (and ought to concern doctors) is how big an effect a drug has; but the approval process requires only that it be better than placebo, or than a competing drug, at a “statistical significance” of p ≤ 0.05. The latter is already a very weak criterion, allowing roughly a 1-in-20 chance that a result this strong arises by pure chance when the drug has no real effect. But even more inappropriate is that the effect size need not be large. If one uses a large enough number of guinea pigs, even a tiny difference can become “statistically significant”. For instance, clopidogrel (Plavix) is prescribed for prevention of stroke, and a study found it better at 75 mg/day, at a statistical significance of p = 0.043, than aspirin at 325 mg/day. But it took nearly 20,000 trial subjects to reach this conclusion, because the reduction in risk of an adverse event was only from 5.83% (per year) to 5.32%*. One might judge this as trivial and not worth the extra cost and extra danger of side effects compared to aspirin, one of the safest drugs as demonstrated by decades of use.
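
To make concrete how sample size, rather than effect size, drives “statistical significance”, here is a minimal sketch in Python. It applies a standard pooled two-proportion z-test to the same half-a-percentage-point gap quoted above (5.83% versus 5.32%) at several hypothetical trial sizes; the helper function and the sample sizes are mine, for illustration only, and this is not a re-analysis of the actual clopidogrel trial, which was analyzed in terms of event rates per year of follow-up.

```python
import math

def two_proportion_p_value(p1, p2, n_per_arm):
    """Two-sided p-value of a pooled two-proportion z-test (equal arm sizes)."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))   # = 2 * (1 - Phi(|z|))

# The same 0.51-percentage-point difference at different (hypothetical) trial sizes:
for n in (1_000, 5_000, 10_000, 50_000):
    p = two_proportion_p_value(0.0583, 0.0532, n)
    print(f"patients per arm: {n:6d}   p-value: {p:.3f}")
```

The gap is identical in every row; only the number of patients changes, yet the p-value moves from nowhere near “significant” to overwhelmingly so.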

Moreover, meaningful for patients is the change in absolute risk brought about by an intervention, not the relative reduction in risk compared to something else. The occurrence of an adverse (stroke) event is about 5% per year in older people; the absolute reduction brings it to perhaps 4.5%, about 1 in 22 instead of 1 in 20. Trivial, especially considering that such small differences, even from large trials, may actually be artefacts of some flaw or other in the trial protocol or practice.

The easiest measure of efficacy to understand, but almost never shared with patients or doctors, is NNT: the number of patients who need to be treated in order to achieve the desired result in 1 patient. These numbers reveal an aspect of drug treatment that is not much emphasized: no drug is 100% effective in every patient.
Even less commonly shared is NNH: the number of patients who must receive a drug in order for 1 patient to be harmed by that drug. This reveals an aspect of drug treatment that is not at all emphasized, indeed deliberately avoided: every drug has adverse effects to some degree.
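
As a minimal arithmetic sketch, using the clopidogrel-versus-aspirin figures quoted above, here is how absolute risk reduction, relative risk reduction and NNT are related (the variable names are mine; NNH is obtained in the same way, from the absolute increase in the risk of a given harm):

```python
risk_aspirin     = 0.0583   # adverse events per year on aspirin, as quoted above
risk_clopidogrel = 0.0532   # adverse events per year on clopidogrel

arr = risk_aspirin - risk_clopidogrel   # absolute risk reduction
rrr = arr / risk_aspirin                # relative risk reduction
nnt = 1 / arr                           # number needed to treat for one patient to benefit

print(f"absolute risk reduction: {arr:.4f} (about half a percentage point per year)")
print(f"relative risk reduction: {rrr:.1%}")
print(f"NNT: about {nnt:.0f} patients treated for a year to prevent one event")
```

A relative reduction of nearly 9% sounds impressive; “treat about 200 patients for a year to spare one of them an event” conveys how small the absolute benefit is.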

A fine exposition of this appeared in the New York Times: “How to measure a medical treatment’s potential for harm”: to prevent 1 heart attack over a 2-year period, 2000 patients need to be treated (NNT = 2000 — the chance of benefit is 1 in 2000); but aspirin can also cause bleeding, with NNH = 3333. So the chance of benefit — very small to start with — is less than twice the chance of harm. In other cases discussed there (mammograms are mentioned, as are antibiotics to treat ear infections in children), NNH is small compared to NNT, meaning that harm is more likely than benefit; yet current medical practice goes against this evidence.
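
A minimal sketch of that benefit-versus-harm comparison, using only the NNT and NNH figures just quoted (nothing here beyond their reciprocals):

```python
nnt = 2000   # patients treated over 2 years for one heart attack prevented
nnh = 3333   # patients treated over the same period for one harmed by bleeding

chance_of_benefit = 1 / nnt
chance_of_harm    = 1 / nnh

print(f"chance of benefit: {chance_of_benefit:.3%}")   # 0.050%
print(f"chance of harm:    {chance_of_harm:.3%}")      # 0.030%
print(f"benefit is {chance_of_benefit / chance_of_harm:.2f} times as likely as harm")
```

Both chances are tiny, and the benefit is only about 1.7 times as likely as the harm.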

More examples are given by Peter Elias.

Statins show up very badly indeed when evaluated in this manner:

[Chart: NNT figures for statins]

 

For other critiques of using statins, see “STATINS are VERY BAD for you, especially FOR YOUR MUSCLES”;  “Statins weaken muscles by design”;  “Statins are very bad also for your brain”;  “Statins: Scandalous new guidelines”.

——————————————————————
* Melody Ryan, Greta Combs, & Laroy P. Penix, “Preventing stroke in patients with Transient Ischemic Attacks”, American Family Physician, 60 (1999): 2329-36.


R. I. P., Ivory Tower

Posted by Henry Bauer on 2015/02/15

There was a time, well within living memory, when academic institutions expected their faculty to teach conscientiously and to do research with the resources provided by the institution. Freedom to follow one’s hunches was aided by tenure.

Then governments started to support research through separate agencies, and faculty could obtain support from them; whereupon academic institutions increasingly came to view their faculty as geese bringing in golden financial eggs from those government agencies. At my first job in the USA, the Research Director at my university tripled the budget I had estimated in a grant application, in order to increase what the university could rake off the top for “overhead”, “indirect costs”, and even reimbursement of part of my salary.

For a decade or so, everyone loved this arrangement, because the funding sources had enough goodies to distribute to satisfy almost everyone asking for them. But then more and more people wanted to feed at that same trough, and things became competitive and then cutthroat. For instance, if you were an engineer at my university 30 years ago and wanted tenure, you needed to bring in about $100,000 annually, and if you wanted to be a full professor your target was $300,000 annually.

I’ve described how The Science Bubble has continued to bloat and become increasingly dysfunctional in EdgeScience #17.

Treating faculty as milch cows for their institutions was invented in the USA, but the innovation has gone viral. Here is a description of one of the consequences in England.

As I was beginning my career in Australia more than half a century ago, academe seemed and largely was an ivory tower in which one could pursue scholarly and scientific interests sheltered from the hurly-burly rat-race of industry with its single-minded pursuit of commercial profit. So I was surprised in the mid-1950s in the USA when a newly minted chemistry PhD told me that he was planning to enter industry in order to get out of the academic rat-race. How prescient he was.


Probabilistic causation, misinterpreted probabilities, and misdiagnosing mental illness

Posted by Henry Bauer on 2015/01/25

Some people want everyone to accept what “science” says, even when they cannot really justify that from the actual evidence and facts.

For instance, Donald Prothero in Reality Check (Indiana University Press, 2013), spends countless words saying things like “nothing in real science is 100% proven” (italic emphasis in original) mixed in with “if something has a 99% likelihood of occurring, or being true, then this level of confidence is so overwhelming that it would be foolish to ignore it” (p. 32). He illustrates this by the high likelihood of injury or death if one jumps off a building.
Then comes a typical piece of misdirection about the likelihood of getting cancer if one smokes, because “the link between cancer and smoking is about 99%”.
In the first place, the evidence about jumping off a building and the evidence for smoking causing cancer are of entirely different orders. In the second place, no source is given for the claim of “about 99%” for the cancer-smoking link.
The observable evidence about jumping off buildings is quite direct, no inferences needed. On the other hand, the link between cancer and smoking is based on inferences from data that are probabilistic: analyzing records from people who have smoked varying amounts for varying lengths of time and applying statistical tests of significance.
But most subtly misleading or deceitful is that “about 99%” assertion. A similar point crops up in a number of quite different matters. Probabilities cannot simply be turned around; one might say they are not “commutative”. (Addition is commutative because A + B equals B + A; many operations in mathematics are not commutative.)
If someone dies of lung cancer, there is a high likelihood that smoking may have been a causative factor; but that is not the same as saying that smoking is highly likely to cause death by lung cancer, and the second statement does not follow from the first. The commutated probability that a smoker will die of lung cancer is not very high:
“Smoking accounts for 30 percent of all cancer deaths and 87 percent of lung cancer deaths” but “fewer than 10 percent of lifelong smokers will get lung cancer”
(Christopher Wanjek, “Smoking’s many myths examined”).
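
The two statements can be connected only through Bayes’ theorem, which needs the base rates that a bare “99% link” leaves out. Here is a minimal sketch; the 87% figure is the one quoted above, while the share of lifelong smokers in the population and the overall lung-cancer death rate are purely illustrative assumptions of mine, not figures from Prothero or Wanjek:

```python
p_smoker_given_death = 0.87   # P(smoker | lung-cancer death), as quoted above

# Purely illustrative assumptions, not figures from the text:
p_smoker = 0.20               # assumed share of lifelong smokers in the population
p_death  = 0.02               # assumed lifetime probability of dying of lung cancer

# Bayes' theorem: P(death | smoker) = P(smoker | death) * P(death) / P(smoker)
p_death_given_smoker = p_smoker_given_death * p_death / p_smoker

print(f"P(lung-cancer death | smoker) = {p_death_given_smoker:.1%}")   # about 9%
```

With these assumed base rates the answer comes out below 10%, in line with the figure quoted above; the point is simply that the two conditional probabilities are different quantities, and neither can be obtained from the other without the base rates.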

I. J. Good discussed this general issue in relation to the trial of O. J. Simpson for the murder of his wife, given the acknowledged circumstance that Simpson was an habitual wife-batterer. Alan Dershowitz, assisting the defense, had pointed out that only about 0.1% of wife-batterers go on to actually kill their wives. But this was misleading. The pertinent probability must be calculated as follows: Given that a wife is murdered, and given that the husband is an habitual wife-batterer, what is the probability that the husband did it? Good showed that it was greater than about 1 in 3 (Nature 375 [1995] 541). In a later piece, Good reported that Dershowitz’s 0.1% was itself misleading, and the correction raised the pertinent probability from >1/3 to about 90% (Nature 381 [1996] 481).
The probability that the murdered wife of a battering husband was killed by the husband is high. The commutated probability that a wife-batterer will actually kill his wife is very small.

It is quite damaging to public and personal health that such basic issues concerning probabilities are so little understood among doctors. For example, what is the probability that a woman between 40 and 50 years of age, with no manifest symptoms or family history of breast cancer, actually has breast cancer if her mammogram is “positive”? A survey of doctors yielded estimated probabilities of >50%, many of them at about 90%; but the actual probability is only 9% (Steven Strogatz, “Chances are”).
A fundamental point is that no test is 100% specific and 100% accurate. All tests have some probability, even if only small, of yielding a false positive. If a particular condition is rare, then the likelihood of a positive test being false can be quite high: in low-risk populations, a high proportion of “positives” are actually false positives (Jane M. Orient, Art & Science of Bedside Diagnosis, 2005).
The probability that a woman with breast cancer will have a positive mammogram is very high. The commutated probability that a woman with a positive mammogram has breast cancer is not high.
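
A minimal sketch of that mammogram calculation; the prevalence, sensitivity and false-positive rate below are typical textbook values for this example (of the kind used in Strogatz’s column), entered here as illustrative assumptions rather than exact clinical figures:

```python
# Illustrative assumptions, not exact clinical figures:
prevalence     = 0.008   # women aged 40-50, no symptoms or family history, who have breast cancer
sensitivity    = 0.90    # P(positive mammogram | cancer)
false_positive = 0.07    # P(positive mammogram | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_cancer_given_positive = prevalence * sensitivity / p_positive

print(f"P(cancer | positive mammogram) = {p_cancer_given_positive:.1%}")   # about 9%
```

The test catches 90% of cancers, yet because cancer is rare in this group, roughly nine out of ten positives are false positives.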

This sort of issue is very damaging when it comes to diagnosing mental illness, discussed at length in Saving Normal by Allen Frances and The Book of Woe by Gary Greenberg (both 2013; see my essay review in Journal of Scientific Exploration, 29 [2015] 142-8). The critical problem is that there exists no objective diagnostic test for a mental illness; diagnosis has to be gauged on the basis of observable symptoms. One classic procedure for diagnosing depression is the Hamilton Depression Rating Scale (HAM-D). It was evolved in the 1950s by the British doctor Max Hamilton, who was seeking a way to measure the efficacy of anti-depressants, using his depressed patients as guinea pigs (see for example Gary Greenberg, The Noble Lie, 2008, p. 55 ff.). Hamilton came up with 17 items — for instance insomnia, feelings of guilt, sleep, appetite — each rated on a scale of 0 to 4 or 0 to 2, with a possible maximum total of 52. There is nothing objective here, since the assigned points depend on what the patient says and what the tester concludes; and the diagnosis also uses arbitrary cut-off points: 0-7 = normal, 8-13 = mild depression, 14-18 = moderate depression, 19-22 = severe depression, ≥23 = very severe depression. But the point here is not the subjectivity or arbitrariness of the diagnosis; it is the fact that HAM-D was evolved by looking at patients who had already been diagnosed as depressed severely enough to require treatment, even hospitalization. The fact that depressed patients frequently accumulate high scores on this questionnaire does not entail the commutated reverse: that anyone who scores more than 7 is to some extent “depressed”, or anyone scoring ≥19 severely depressed.
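
For concreteness, the cut-offs just listed amount to nothing more than the following; this is a minimal sketch of my own (a real HAM-D administration involves a clinician scoring each of the 17 items), and it only maps a total score onto the conventional bands:

```python
def ham_d_band(total_score: int) -> str:
    """Map a HAM-D total (0-52) onto the conventional severity bands."""
    if not 0 <= total_score <= 52:
        raise ValueError("HAM-D totals range from 0 to 52")
    if total_score <= 7:
        return "normal"
    if total_score <= 13:
        return "mild depression"
    if total_score <= 18:
        return "moderate depression"
    if total_score <= 22:
        return "severe depression"
    return "very severe depression"

print(ham_d_band(7), ham_d_band(8), ham_d_band(20), sep=" | ")
# normal | mild depression | severe depression
```

A single point separates “normal” from “mild depression”, and nothing in this arithmetic establishes that a scale derived from already-hospitalized patients is valid in the reverse direction.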

Confusion about what statistics and probability mean, about interpreting such data with their seemingly accurate numbers, is a hazard in public discourse on a host of matters in science and in medicine. Misinterpretation is common and damaging.


Sea-Level-Rise Hysteria

Posted by Henry Bauer on 2015/01/15

Human-caused global warming, and even more terrifyingly human-caused climate change, is blamed for all sorts of things: more frequent and more extreme tornados and hurricanes and tsunamis, proliferation of viruses, shortage of fresh water, and of course rising sea levels that will submerge coastal cities. For example (from Reuters via Sydney Morning Herald):

“Sea level rise in the past two decades has accelerated faster than previously thought in a sign of climate change threatening coasts from Florida to Bangladesh, a study said on Wednesday. . . .

IPCC scenarios . . . range from a sea level rise of 28 to 98 cm this century . . . .

the rise has accelerated, with the most recent rates being the highest on record . . . .

Sea level rise is gnawing away at shores from Miami to Shanghai”.

All this is based on a letter, just published on-line by Nature, that used some mathematical techniques to re-evaluate, from admittedly incomplete data, what the sea-level rise actually was from 1901 to 1990; it concludes that the rise was a bit less than formerly thought, about 1.2 mm per year rather than 1.5 mm, and that it leapt to 3 mm per year in the last two decades. A terrifying acceleration!

Human activities apparently threaten us with a rise of between 28 cm (11 inches) and 98 cm (39 inches) this century. That wide range should make obvious how uncertain these projections are; but what is altogether missing is an assessment of how this expectation compares with what one might expect from purely natural causes.

During the last Ice Age, which ended about 15,000 years ago, sea level was 400 feet lower than now. Quite naturally, sea level must rise as things warm up after the Ice Age and glaciers melt. But how rapidly?

On average, from purely natural causes, sea level has risen in the past, following an Ice Age, by about 5 inches per century. However, the rate varied tremendously during different eras; for example, pulses of as much as 100 inches per century for 5 centuries (Vivien Gornitz, “Sea level rise, after the ice melted and today”).
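
To compare the rates quoted in this post on a common scale, here is a minimal unit-conversion sketch; every figure is one cited above, merely converted to millimetres per year (the labels are mine):

```python
MM_PER_INCH = 25.4
MM_PER_CM = 10.0

rates_mm_per_year = {
    "1901-1990 re-estimate (Nature letter)":        1.2,
    "last two decades (Nature letter)":             3.0,
    "IPCC low scenario (28 cm this century)":       28 * MM_PER_CM / 100,
    "IPCC high scenario (98 cm this century)":      98 * MM_PER_CM / 100,
    "post-Ice-Age average (5 inches per century)":  5 * MM_PER_INCH / 100,
    "post-Ice-Age pulses (100 inches per century)": 100 * MM_PER_INCH / 100,
}

for label, rate in rates_mm_per_year.items():
    print(f"{label:47s} {rate:5.1f} mm/year")
```

On this common scale the natural “pulses” come out at about 25 mm per year, well above the top of the projected range, which is the comparison drawn in the next paragraph.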

In other words, purely natural causes have in the past produced rates of sea-level rise considerably greater than the merchants of doom and gloom are now projecting as being caused by human actions, projecting on the basis of computer models that failed to predict the present decade-and-a-half lull in “global warming”.


 