Skepticism about science and medicine

In search of disinterested science

Archive for July, 2014

Magical statistics: Hearing loss causes dementia

Posted by Henry Bauer on 2014/07/27

Magical thinking sees a meaningful, causal link between two things that happen to occur together or to look alike in some way. On this view, there are no actual coincidences, no links owing purely to random chance: what might appear to be coincidences are actually connected in some manner that we do not understand; Carl Jung described them as “synchronistic” and meaningful, not coincidental.

The Skeptic’s Dictionary gives many examples, as does Psychology Today.

What needs to be said is that much, most, or perhaps all of medical statistics is pervaded by magical thinking, the confusion of correlation with causation. For example, the increasingly fashionable (or faddish?) emphasis on prevention is replete with references to “risk factors”, things that are “associated = correlated” with some condition. In short order, “risk factor” becomes confused with actual risk, and drug companies capitalize on this to sell drugs claimed to lower risks when they actually affect only risk factors: symptoms and not illnesses are being “treated”.

This deception inaugurated the era of “blockbuster” drugs, enormously profitable because they are taken lifelong: drugs to lower cholesterol, blood pressure, and blood sugar, and to increase bone density. But data on morbidity and mortality fail to detect any actual benefit from “statins, antihypertensives, and bisphosphonates” *, and anti-diabetes pills continue to be marketed even as law firms carry on class-action suits over the toxicities of those drugs, which have highly unpleasant and sometimes deadly “side” effects including allergic reactions, bloating, diarrhea, flatulence, hypoglycemia, cardiovascular troubles, cholestatic jaundice, lactic acidosis, nausea, urinary tract infections, and weight gain.

Blockbuster drugs rely on the confusion
of symptoms (risk factors)
with actual risks (causes),
exemplifying magical thinking
whereby actual harm is caused
to those who take the drugs

Another instance of magical thinking is the increasingly prominent insinuation that hearing loss leads to (causes) dementia.

The charge on this seems to be led by Dr. Frank Lin, MD, PhD, at Johns Hopkins University:
Hearing Loss and Dementia Linked in Study
Release Date: February 14, 2011
Seniors with hearing loss are significantly more likely to develop dementia over time than those who retain their hearing, a study by Johns Hopkins and National Institute on Aging researchers suggests. The findings, the researchers say, could lead to new ways to combat dementia, a condition that affects millions of people worldwide and carries heavy societal burdens. Although the reason for the link between the two conditions is unknown, the investigators suggest that a common pathology may underlie both or that the strain of decoding sounds over the years may overwhelm the brains of people with hearing loss, leaving them more vulnerable to dementia. They also speculate that hearing loss could lead to dementia by making individuals more socially isolated, a known risk factor for dementia and other cognitive disorders.
Whatever the cause, the scientists report, their finding may offer a starting point for interventions — even as simple as hearing aids — that could delay or prevent dementia by improving patients’ hearing.

This press release from Johns Hopkins gives the clear impression that hearing loss is a cause of dementia. The last sentence also delivers the astonishingly nonsensical assertion that even if hearing loss is not the cause, treating it could still have a beneficial effect on the risk of dementia!

Public media of course parrot this pseudo-scientific stuff. Most of the headlines as well as the texts of these pieces support the idea that hearing loss can lead to dementia:
A 2011 study found that hearing loss may increase your chances of developing dementia
Johns Hopkins: Hearing problems lead to dementia
Hearing loss linked to dementia — Can getting a hearing aid help prevent memory loss?
Hearing loss speeds up brain shrinkage and could lead to dementia, researchers claim
The link between hearing loss and dementia — A new discovery gives you a new reason to check your hearing now
Straining to hear and fend off dementia
Could hearing loss and dementia be connected?

Manufacturers of hearing aids jumped on the bandwagon:
Hearing loss is now linked to Alzheimer’s disease and dementia.
According to several major studies, older adults with hearing loss are more likely to develop Alzheimer’s disease and dementia, compared to those with normal hearing. Further, the risk escalates as a person’s hearing loss grows worse. Those with mild hearing impairment are nearly twice as likely to develop dementia compared to those with normal hearing. The risk increases three-fold for those with moderate hearing loss, and five-fold for those with severe impairment.
Specifically, the risk of dementia increases among those with a hearing loss greater than 25 decibels. For study participants over the age of 60, 36 percent of the risk for dementia was associated with hearing loss.
How are the conditions connected?
Although the reason for the link between hearing loss and dementia is not conclusive, study investigators suggest that a common pathology may underlie both.

Also on the bandwagon is a local Speech & Hearing Center. From an “Ask the experts” page of SENIORS GUIDE magazine:
“Researchers have shown a strong correlation between un-treated hearing loss (i.e., having hearing loss and not wearing hearing aids) and dementia. A study completed by Dr. Lin and colleagues at Johns Hopkins and the National Institute for Communicative Disorders revealed that for every one year an individual with a mild hearing loss went without hearing aids, there was a seven year cognitive decline”.
That’s quite an extension and distortion of the published study.
That published scientific article is Lin et al., “Hearing Loss and Incident Dementia”, Archives of Neurology, 68 (2011) 214–20. Its stated conclusions are that “Hearing loss is independently associated with incident all-cause dementia. Whether hearing loss is a marker for early stage dementia or is actually a modifiable risk factor for dementia deserves further study.”
Unwary readers might take the first sentence as meaning that hearing loss does cause dementia. The second sentence makes the mistake of confusing risk factor with risk and adds to the impression of a causative link.

“[H]earing loss was independently associated with incident all-cause dementia after adjustment for sex, age, race, education, diabetes, smoking, and hypertension, and our findings were robust to multiple sensitivity analyses. The risk of all-cause dementia increased log-linearly with hearing loss severity, and for individuals >60 years in our cohort, over one-third of the risk of incident all-cause dementia was associated with hearing loss.”

Lay readers might again be inclined to take these comments as supporting a causative link. But “independently associated” means only that these particular variables were fed into a computer program looking for degrees of association. Considerable uncertainty remains because of the possible effects of other variables not taken into account, notably health history, diet, and exercise, all of which are likely to be very influential on the rate of age-related deterioration; and there are obvious uncertainties associated with the manner in which education, smoking, and hypertension were coded.

But bear in mind the inescapable fact that the probabilities of every type of organ failure and physiological dysfunction increase with age. Age is indubitably independently associated with hearing loss, dementia, diabetes, hypertension, as well as cancer, kidney failure, lung disease, etc.
Hearing loss is independently associated with age.
Dementia is independently associated with age.
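
Because age is a shared driver, a computer can report an “independent association” between two conditions that have no direct link at all. Here is a minimal simulation in Python (entirely hypothetical probabilities and variable names of my own, not the study’s data) showing how a common cause manufactures a correlation, and how it largely disappears once the common cause is held fixed; the same logic applies to any unmeasured common cause such as general health history, diet, or exercise:

import numpy as np

# Hypothetical simulation: age alone drives both hearing loss and dementia;
# there is NO direct link between them, yet a naive comparison finds one.
rng = np.random.default_rng(0)
n = 100_000
age = rng.uniform(60, 95, n)                       # a made-up cohort of older adults

p_hearing_loss = np.clip((age - 60) / 40, 0, 1)    # probability rises with age only
p_dementia     = np.clip((age - 65) / 60, 0, 1)    # probability rises with age only

hearing_loss = rng.random(n) < p_hearing_loss
dementia     = rng.random(n) < p_dementia

# Naive comparison, ignoring age: an apparent "association".
print("dementia rate | hearing loss:   %.3f" % dementia[hearing_loss].mean())
print("dementia rate | normal hearing: %.3f" % dementia[~hearing_loss].mean())

# The same comparison within a narrow age band: the "association" largely vanishes.
band = (age >= 70) & (age < 75)
print("within ages 70-74:")
print("  dementia rate | hearing loss:   %.3f" % dementia[band & hearing_loss].mean())
print("  dementia rate | normal hearing: %.3f" % dementia[band & ~hearing_loss].mean())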

It would take more than this study to make a plausible let alone convincing case for hearing loss as a potential cause of dementia. The original article actually spells out quite well the uncertainties that ought to stop speculation about causation, but it steps back from those sound observations to speculate about possible causative mechanisms: “exhaustion of cognitive reserve, social isolation, environmental deafferentation [presumably meaning deficiency of environmental stimuli], or a combination of these”. None of those appears to be amenable to study in any potentially convincing manner.

By contrast, direct evidence from the people studied is waved aside: “self-reported hearing aid use was not associated with a significant reduction in dementia risk”.
Dementia risk in this prospective study was measured by the researchers; it was not a subjective self-assessment by the people in the study. The participants, however, can surely be regarded as largely reliable in their testimony as to use or non-use of hearing aids.
The conclusion is clear: hearing aids did not help to avoid dementia among the people studied.
However, this ugly fact might destroy the hypothesis and impede ongoing research, so reasons are offered for ignoring it: “data on other key variables (e.g. type of hearing aid used, hours worn per day, number of years used, characteristics of subjects choosing to use hearing aids, use of other communicative strategies, adequacy of rehabilitation, etc) that would affect the success of aural rehabilitation and affect any observed association were not gathered. Consequently, whether hearing advices [sic; should perhaps be devices?] and aural rehabilitative strategies could have an effect on cognitive decline and dementia remains unknown and will require further study”.

——————————————————-
*  Järvinen et al., “The true cost of pharmacological disease prevention”, British Medical Journal, 342 (2011) doi: 10.1136/bmj.d2175

 

Posted in funding research, medical practices, peer review, science is not truth | 2 Comments »

Meaningless research

Posted by Henry Bauer on 2014/07/17

Medicine and doctors are often symbolized by reference to Greek mythology:

[Image: the traditional medical emblem drawn from Greek mythology]

Nowadays, though, much of so-called medical research would be better represented by p-values rampant over a field of nonsensical “associations”.

I had recently drawn attention (Statistics literacy) to a fine article about the lack of understanding of statistics that pervades the medical scene (Do doctors understand test results?). It explains how the risk of false-positive tests should be — but is not — understood by doctors and communicated to patients; and how relative risks are cited instead of absolute risks — a confusion that Big Pharma does much to promulgate because it helps to sell pills.
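
To make the false-positive point concrete, here is a minimal worked example in Python. The numbers (1% prevalence, 90% sensitivity, 9% false-positive rate) are illustrative assumptions of mine, not figures from the article, but they show why most positive results from screening for a rare condition are false alarms:

# Illustrative screening arithmetic with assumed numbers (not from the article).
prevalence = 0.01            # 1% of those screened actually have the condition
sensitivity = 0.90           # 90% of true cases test positive
false_positive_rate = 0.09   # 9% of healthy people also test positive

per_1000 = 1000
true_cases      = prevalence * per_1000                          # 10 people
true_positives  = sensitivity * true_cases                       # 9 people
false_positives = false_positive_rate * (per_1000 - true_cases)  # about 89 people

ppv = true_positives / (true_positives + false_positives)
print(f"Of {true_positives + false_positives:.0f} positive tests, only "
      f"{true_positives:.0f} are real cases: predictive value = {ppv:.0%}")
# Roughly 1 positive test in 11 reflects actual disease.
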
Another set of misunderstandings underlies much of the “research” that is picked up by the media as significant news: data mining coupled with the virtually universal mistake of taking correlations as indicating causation. A recent case in point concerns Alzheimer’s Disease (AD):

“Sleep disorders may raise risk of Alzheimer’s, new research shows
Sleep disturbances such as apnea may increase the risk of Alzheimer’s disease, while moderate exercise in middle age and mentally stimulating games, such as crossword puzzles, may prevent the onset of the dementia-causing disease, according to new research to be presented Monday”.

Note that “may increase” and “may prevent” both imply causation: that sleep disorders may actually cause AD, while physical and mental exercise may bring about (cause) protection. But the evidence is only that there is some sort of correlation; and it is vitally important to keep in mind that a correlation does not mean the two things always go together, only that they are found together apparently more often than one would expect purely as a result of chance.

Note too that “may” is an often-used weasel word to insinuate something while guarding against being held accountable for actually asserting it; e.g., big campaign contributions may influence politicians; conflicts of interest may influence researchers; and so on.

Note as well “to be presented”, which illustrates publicity-seeking by researchers and the complicity of the media in it, as they ignore the fact that at this moment the matter is nothing more than hearsay: science is supposed to be published and evaluated before being taken at all seriously.

As to “purely as a result of chance”, everyone should understand, but few do, that the almost universally used method of calculating these probabilities as “p-values” is itself quite misleading; see “Statistics can lie, but Jack Good never did — a personal essay in tribute to I J Good, 1916-2009”.
The take-away lesson is that what researchers claim as “statistically significant” is often not at all significant, indeed it may be entirely meaningless; yet it will still be picked up by the media and ballyhooed as the latest breakthrough.

Here’s how researchers in medically related fields (and elsewhere too, of course) can generate publications effortlessly and prolifically while disseminating misleading notions:
Select a topic — AD, say.
Collect a data set of people who have that condition, the data including every conceivable characteristic: age (in several categories such as young, early middle age, middle age, late middle age, young-old, fairly old, quite old); exercise habits (light, moderate, heavy); alcohol consumption (light, moderate, heavy); diet (many variables — fat, meat, dairy, vegan, gluten-free, etc.); other medical conditions and history (many indeed); race and ethnicity; any others you can think of (urban or rural, say; employment history and type of employment — veteran; blue or white collar; innumerable possibilities); tests by MRI, complete blood analysis, etc.
Feed all the data into a computer, and set it to find correlations.

Purely by chance, about 1 of every 20 possible pairings will produce a “statistically significant” result at p ≤ 0.05. Since p ≤ 0.05 is the conventional threshold, this is publishable, especially since the written report fails to emphasize that the result emerged from a random sweep through 20 times as many pairings.
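
That sweep is easy to demonstrate. The following Python sketch (purely synthetic random data; the sample size and number of variables are arbitrary choices of mine) tests 40 variables that have nothing to do with the outcome and still turns up a couple of “significant” correlations at p ≤ 0.05:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_variables = 200, 40

outcome   = rng.normal(size=n_subjects)                  # e.g. a cognition score
variables = rng.normal(size=(n_variables, n_subjects))   # "diet", "exercise", etc., all random

significant = []
for i, v in enumerate(variables):
    r, p = stats.pearsonr(v, outcome)                    # test every pairing
    if p <= 0.05:
        significant.append((i, r, p))

print(f"{len(significant)} of {n_variables} unrelated variables came out 'significant'")
for i, r, p in significant:
    print(f"  variable {i}: r = {r:+.2f}, p = {p:.3f}")
# Expect about 2 spurious "findings" per run (5% of 40), each of them publishable.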

Naturally such results are quite likely to make little sense, since they are random by-chance associations. For example:
“[M]oderate physical exercise in middle age could decrease the risk that their cognitive deficits progress to dementia. . . .
Oddly, however, the association did not hold for people who engaged in light or vigorous exercise in middle age or for any level of physical activity later in life.
On a similarly counterintuitive note, another study suggested that high blood pressure among people at least 90 years old — “the oldest old” — may protect against cognitive impairment. . . . although hypertension is believed to increase the risk of Alzheimer’s and dementia for middle-aged people, the risk may shift with time” [emphases added].

The reason these results seem so incongruous and counterintuitive is, of course, that they were never genuine results at all, just “associations” that occurred when looking for correlations among a whole host of possibilities.

The notion that moderate exercise but not light or heavy exercise might actually be a significant cause of something like Alzheimer’s is not entirely beyond the realm of possibility, I suppose. Still, it seems sufficiently farfetched that I would hesitate — or be ashamed — even to mention the possibility until it had been reproduced in quite a few studies.
On the other hand I’m perfectly willing to see an association between high blood pressure and good cognition in the elderly, since good cognition depends on plentiful oxygen which depends on a good blood flow; and since arteries become less flexible with age, more pressure is needed to achieve that.
On a further hand, though, the notion that high blood pressure increases the risk of dementia in middle age strikes me as sufficiently absurd as to be dismissed pending the strongest most direct possible evidence; it “is believed” by whom?

Common sense cries out to be applied whenever a p ≤ 0.05 association is touted as meaning something. Try it out on the suggestion that “A daily high dose of Vitamin E may slow early Alzheimer’s disease”. Think about the caveats in that piece, and the trivial magnitude of the reported possible effect.

That respected mass media feature such garbage may well be quite harmful. I would expect some number of people will start taking vitamin E supplements immediately, whether or not there are any indications that they are not getting enough of it already. Not to speak of all the >90-year-olds desperately trying to raise their blood pressure and puzzled about how to decide how much exercise at their age is “moderate” but not “light” or “heavy”; and all the 70-to-90-year-olds wondering at what age high blood pressure stops causing Alzheimer’s and starts protecting against it.

 

Posted in media flaws, medical practices, science is not truth, scientific literacy | Leave a Comment »

“The scientific method” — it’s just not used, e.g. in Alzheimer’s Disease

Posted by Henry Bauer on 2014/07/17

Alzheimer’s disease is one of the dysfunctional knowledge monopolies mentioned in my book, Dogmatism in Science and Medicine  (pp. 108-9).

Decades-old dogma takes the cause of the disease to be the build-up in the brain of plaques of amyloid protein. However, a mass of actual evidence indicates that theory to be wrong: there have been “hundreds of experiments casting doubt on the neurotoxicity of amyloid”; drugs and vaccines that act against the plaque have been ineffective; amyloid injected into brains of mice caused no symptoms. Yet researchers find it very difficult to get their evidence for other causes of Alzheimer’s published or to get research support for their work.

Rationalizations that try to prop up the amyloid theory are feeble and far-fetched, as illustrated by a fairly recent paean to a “breakthrough”:
“New imaging shows Alzheimer’s unfolding in live brains” (Andy Coghlan, New Scientist, 18 September 2013):
“The two major brain abnormalities that underlie Alzheimer’s disease can now be viewed simultaneously in brain scans while people are still alive”. Amyloid plaque has been observable since 2005 by PET (positron emission tomography), but now one can also observe “tau tangles”, and “tau lesions are known to be more intimately associated with neuronal loss than plaques . . . . tau tangles accumulate first in the hippocampus — the brain’s memory centre — at a time when the plaques are already widespread. . . . Previous research has shown that the tangles rapidly kill neurons and trigger behavioural changes. . . . [The new] images suggest that the plaques are themselves harmless, but help to advance disease by spreading the tau tangles from the hippocampus to other brain regions” [emphases added].

Note first that “the scientific method” * that so many pundits still cite and believe in states that a theory is discarded when the evidence goes against it. Here, the mass of evidence against amyloid theory has not broken the grip of the dogmatic knowledge-monopoly. Even as it is acknowledged that tau tangles and not plaques are actually closely associated with loss of neurons, and that plaques were present “10 to 15 years before there are symptoms”, the amyloid theory is still paid obeisance by suggesting that amyloid plays an essential role by “spreading the tau tangles”.

But since plaque pre-dates symptoms by a decade or more, surely it makes more sense to infer that plaque “may be neutral or even beneficial, perhaps attempting to defend neurons that are under attack” since “some amyloid can be found in the brains of most people over 40”.

The New Scientist piece is based on Maruyama et al., “Imaging of Tau Pathology in a Tauopathy Mouse Model and in Alzheimer Patients Compared to Normal Controls”, Neuron, 79 [2013] 1094-1108; the “et al.” stands for 24 additional names. That article begins, “Hallmark pathologies of Alzheimer’s disease (AD) are extracellular senile plaques consisting of aggregated amyloid β peptide . . . and intraneuronal . . . pathological tau fibrils, while similar tau lesions in neurons and glia are also characteristic of other neurodegenerative disorders” [emphasis added].
Tau tangles, but not amyloid, are known to be associated with a number of neurodegenerative disorders. Where was the need to invoke amyloid rather than tau as a cause of Alzheimer’s in the first place?
Those who question established mainstream dogmas are routinely called “denialists” — “AIDS denialists”, “climate change denialists”, and so forth. In point of fact, it is typically the mainstream that is truly denialist: evidence denialist. As Max Planck put it long ago, old theories die only as their proponents pass away; science advances funeral by funeral.
————————————-

* See Scientific Literacy and Myth of the Scientific Method, University of Illinois Press 1992

Posted in denialism, medical practices, resistance to discovery, the scientific method | Leave a Comment »

Knowledge, understanding — but then there’s Wikipedia

Posted by Henry Bauer on 2014/07/17

I’ve had much occasion to comment on the unreliability of Wikipedia on any topic where viewpoints differ (The Wiles of Wiki; Health, Wikipedia, and Common Sense; Lowest common denominator — Wikipedia and its ilk; The unqualified (= without qualifications) gurus of Wikipedia; Another horror story about Wikipedia; The Fairy-Tale Cult of Wikipedia; Beware the Internet: Amazon.com “reviews”, Wikipedia, and other sources of misinformation).

However, yesterday morning’s Public Radio warned me that I should question Wikipedia’s reliability even over what might seem to be objective factual data. Many media ran the same story, for instance the Sydney Morning Herald.

The revelation was that 8.5% of all Wiki articles, some 2.7 million of them, were “written” by one individual, Sverker Johansson: “On a good day the output can be as high as 10,000 articles”. “His claims to authorship are contested however, as they were created by a computer generated software algorithm, otherwise known as a bot. Johansson has named his Lsjbot”.

The Public Radio piece included comments from Jimmy Wales, Wiki’s founder, who said that this was actually nothing new, that “bots” had been used to “create” “content” from the very time Wiki was first established.

Johansson said that his motive is to bring knowledge to the widest possible audience.
An obvious question would have been, what is meant by “knowledge”?

A primitive answer might be, knowledge consists of facts, things that are indisputably so.
For example?
Well . . . . That all humans are mortal?
Hard to quarrel with that one, though quibblers might suggest a dependence on the definition of “human” and on the status of gods who sometimes take human shape.
So how about “the Earth is not flat”?
No quibbling there, provided we ignore as irrelevant any technicalities that concern only topologists and their multiple dimensions. But such negatives are not particularly informative, and surely “knowledge” implies being informative.
So should we have said “the Earth is spherical”? No, because quite important characteristics and phenomena depend on the fact that the Earth is not exactly spherical.

The point is, I suggest, that there’s no such thing as purely factual knowledge, because that isn’t informative. Data have meaning only in some context.
One might say that there are two kinds of knowledge, map-like and story-like. Maps tell you how to go somewhere, but give no reason for doing so, no meaningful context. Stories, on the other hand, may not be factually accurate in every respect, but they convey meaning, understanding. As Steven Weinberg has put it, “The more the universe seems comprehensible, the more it also seems pointless”. Pure facts, data, convey nothing that’s meaningful for us human beings; context, relationships, emotions, ethics, morality are what give meaning to facts.

Bots, robots, computer programs, “artificial intelligence”, “information technology” are inherently incapable of delivering meaningful knowledge, or of judging whether or not certain data are meaningful or whether they are nonsensical.
It follows that Wikipedia ought to restrict itself to things that matter to computers, automata, robots, bots.

The usefulness of Wikipedia — of anything that claims to be informative — depends inescapably on the inescapably human judgment that went into selecting and vetting what is presented as “knowledge”, even if that has the appearance of purely “objective” data.
In the earliest days of the computer-obsessed era, a principle was recognized that contemporary computeroids like Wales should re-learn: GIGO, garbage in = garbage out.

There are no databases or other repositories of supposed fact that can be relied on not to contain errors and misleading “facts”, and only human intelligence, common sense, and judgment are capable of detecting them. I learned about that early in my research career, when I was studying photolysis of organic iodine compounds. Nitric oxide, NO, could be used to combine with iodine atoms, so I searched for information about NOI, nitrosyl iodide, in the index of Chemical Abstracts, the universal source of information about chemical matters in pre-computer days. I was astonished to find that a cited source turned out to be an article not about NOI but about NaI, sodium iodide. I assumed that whoever had “written” that article had dictated to a secretary and then failed to proofread. Such errors doubtless still occur, though nowadays they may owe more to flawed speech-recognition software than to secretaries.
Beyond that, how is a computer or a bot to figure out whether or not the Earth should be described as spherical?
And how much more misleading would a bot be about more complex matters?
Could a bot recognize that the conclusions of a published, peer-reviewed article are not to be believed because the statistics were incompetent, or the protocol inappropriate?

Automated procedures cannot deliver reliable information. They can search databases, but they may just be collecting Garbage Input. Imagine what “purely factual” information computers would glean about HIV/AIDS, say, since just about everything in the mainstream literature has been misinterpreted (The Case against HIV).

Sadly, the computeroid nonsense doesn’t stop with Wikipedia. Books are “written” in the same way:
“Phil Parker, who is purported to be the most published author in history, has successfully published over 85,000 physical books on niche topics such as childhood acute lymphoblastic leukaemia. Each book takes less than an hour to ‘write’. In fact the writing is carried out by patented algorithms which enable computers to do all the heavy lifting.” “The books — typically non-fiction and on extremely niche topics — are compiled on-demand, based on publicly available information found on the internet and in offline sources” (Automaton author writes up a storm).
Not everyone would agree that this technique can produce non-fiction, something that is not fictional.

“Bots may also be writing the journalist out of the future of journalism. Ken Schwencke, a reporter on the Los Angeles Times, has created ‘Quakebot’, an algorithm which automatically creates and publishes a story on the newspaper’s website every time an earthquake is detected in California” (This is how Sverker Johansson wrote 8.5 per cent of everything published on Wikipedia).

This is how the world will end, not with a bang, not with a whimper *, but through the abandonment of thinking under the spell of computeroids and their bots.

————————————————

* See “The Hollow Men” by T. S. Eliot

Posted in media flaws | 3 Comments »

Statistics literacy

Posted by Henry Bauer on 2014/07/13

Doctors are on average ignorant about statistics that are directly relevant to their practice, to their advising of patients, and to their ability to see through the tricks played by drug companies and their representatives. A comment on my HIV/AIDS blog mentioned an excellent article, “Do doctors understand test results?”, that everyone should read and re-read and learn from, because it is not only doctors who are woefully ignorant about statistics.

Everyone would benefit from understanding the difference between relative risk and absolute risk, and between survival rate and mortality, for example.
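
A minimal worked example of the relative-versus-absolute distinction, with made-up numbers of my own (a drug that cuts some 5-year risk from 2% to 1%; the figures are purely illustrative):

baseline_risk = 0.02     # 2 in 100 untreated people suffer the event within 5 years
treated_risk  = 0.01     # 1 in 100 treated people do

relative_risk_reduction = (baseline_risk - treated_risk) / baseline_risk
absolute_risk_reduction = baseline_risk - treated_risk
number_needed_to_treat  = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # "cuts your risk in half!"
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # 1 person in 100 benefits
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 100 must take the drug for 1 to benefit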

Posted in medical practices, scientific literacy | 1 Comment »

When prophecy fails

Posted by Henry Bauer on 2014/07/13

True believers do not question their belief when evidence disproves it, for example when predictions turn out to be wrong. Instead they find ways to modify the predictions while keeping the core belief intact, illustrating the phenomenon of cognitive dissonance: the discomfort of confronting facts that contradict one’s beliefs, typically resolved by explaining the facts away rather than abandoning the belief.

The classic study is When Prophecy Fails: A Social and Psychological Study of a Modern Group that Predicted the Destruction of the World by Leon Festinger, Henry Riecken, & Stanley Schachter (University of Minnesota Press, 1956).

This is often cited in connection with religious cults and similarly disparaged groups. However, the lesson is just as applicable to true believers of all sorts, including true believers in the conventional wisdom and acolytes of scientism, true believers in the religion of Science (Scientism, the Religion of Science).

Scientists themselves are not immune. Half a century ago, Thomas Kuhn [1] pointed out that the history of science is a record of maintaining theories long after the evidence has disproved them. Bernard Barber [2] pointed out that what are now the most applauded advances in science were fiercely resisted at the time they were first proposed. Gunther Stent [3] pointed to “premature discoveries” like continental drift and quantitative genetics that were dismissed for several decades before gaining acceptance. Imre Lakatos [4] pointed out that scientists routinely make ad hoc adjustments to theories in order to maintain the core belief, just as Ptolemy added epicycles — wheels upon wheels — to sustain the credibility of Earth-centered astronomy.

The popular view that scientific theories are continually tested against evidence is wrong. It’s only in the long run that science eventually acknowledges its errors and corrects them, and sometimes that long run is very long indeed.

A contemporary case in point is HIV/AIDS theory. The evidence has long been quite plain that “HIV” is not infectious, is not transmitted sexually, and doesn’t cause “AIDS” (The Case against HIV). Prediction after prediction of the theory has been disproved, for example, that an AIDS epidemic would sweep across the heterosexual world (section 4.1 in The Case against HIV). Innumerable individuals continue to be dreadfully harmed, to the point of death, by supposedly life-saving antiretroviral drugs (section 5 in The Case against HIV). People continue to be told that they are “HIV-infected” despite the fact that there is no approved test for “HIV infection”, and increasingly the mainstream is doubling down on the harm it causes by calling for more and more widespread “HIV testing”.

Perhaps most incredibly, the idea is now being promulgated assiduously that perfectly healthy, HIV-negative people should take up a permanent regime of toxic drugs in order to decrease the likelihood of becoming “HIV-positive”:
Healthy gay men urged to take HIV drugs — WHO:
“The World Health Organization (WHO) is urging all sexually active gay men to take antiretroviral drugs to reduce the spread of HIV. The organisation says the move may help prevent a million new HIV infections over 10 years”.

What exactly is the evidence that antiretroviral drugs can prevent infection?
The Centers for Disease Control & Prevention (CDC) cite 4 trials of Pre-Exposure Prophylaxis (PrEP).
Like peer review, clinical trials are widely thought to safeguard the quality and reliability of scientific publication, but that is not now the case: vested interests of drug companies and researchers and others have made clinical trials tools for marketing drugs instead of for discovering truth [5]; innumerable devices are employed to slant results of clinical trials in directions desired by the sponsor [6, 7].

The four studies cited by CDC illustrate that great skepticism is called for.
1. “Preexposure chemoprophylaxis for HIV prevention in men who have sex with men” (New England Journal of Medicine 363 [2010] 2587-99) by Robert M. Grant (corresponding author) and 34 other authors “for the iPrEx Study Team” listed in a Supplementary Appendix on the NEJM website.
2499 subjects were followed for a median of 1.2 years. In the PrEP group, administered FTC (emtricitabine) plus TDF (tenofovir), 36 became infected, compared to 64 on placebo, yielding a claimed effective reduction of 44% in infection rate. How this is calculated is rather obscure, since the paper’s Figure 2 shows cumulative probabilities of infection of about 9% (placebo) and about 7.5% (PrEP); a decrease of 1.5 percentage points from 9% is a decrease by about one-sixth, roughly 17%, rather than 44%.

[Figure: cumulative probability of HIV infection in the iPrEx trial (Grant et al., Figure 2)]
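
For readers who want to check the arithmetic, here it is both ways in Python (the counts and the approximate 9% and 7.5% readings are as quoted above; that the trial’s 44% figure amounts essentially to a comparison of the raw infection counts is my presumption, since the paper itself derives it from a survival analysis):

# Relative reduction computed from the reported infection counts (arms of similar size).
infections_prep, infections_placebo = 36, 64
print(f"From counts:   1 - 36/64 = {1 - infections_prep / infections_placebo:.0%}")   # about 44%

# Relative reduction computed from the cumulative probabilities read off Figure 2.
cum_prob_placebo, cum_prob_prep = 0.09, 0.075
print(f"From Figure 2: 1 - 7.5/9 = {1 - cum_prob_prep / cum_prob_placebo:.0%}")       # about 17%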

Beyond that apparent contradiction, certain details in this report seem unbelievable. Placebo and drug recipients supposedly had virtually identical rates of adverse events (70% and 69% respectively) and of serious adverse events (5% each).

There’s something obviously wrong here. Participants must have been significantly unhealthy if some 70% on placebo experienced adverse events and 5% serious adverse events, in little more than a year, and when the average ages were only 26.8 and 27.5 years in the placebo and drug groups respectively. There were only very minor differences between the groups: the drugs caused more nausea (2% vs. <1%, p = 0.04) but placebo caused more diarrhea (61 vs. 49 events, p = 0.36, not significant).
More specifically: It has long been known that TDF is toxic in a number of ways, notably by causing kidney failure: Poisonous “prophylaxis”: PrEP (Pre-Exposure Prevention); Treatment Guidelines are dangerous; Unlimited insanity: Truvada to prevent HIV; Spinning Truvada; Kidney-disease denialism (a special case of HAART denialism); Tenofovir and the ethics of clinical trials.

2. “Antiretroviral prophylaxis for HIV-1 prevention among heterosexual men and women” (New England Journal of Medicine 367 [2012] 399-410) by Jared M. Baeten (corresponding author) and 44 others “for the Partners PrEP Study Team” that are listed in a Supplement.
The same oddity is reported here, of similar rates of adverse events (~85%) in two separate drug-administered cohorts as well as in the placebo group. Serious adverse events were also reported as similar at 7.3 or 7.4%. The study extended over 3 years.
Again one wonders why people on placebo, with median age in the low 30s, would experience a 2.5% per year rate of serious adverse events, even in Kenya and Uganda.
The subjects were 4758 couples, and HIV infection was reported at 0.65 per 100 person-years with TDF alone, 0.50 with FTC/TDF, and 1.99 on placebo; thus reductions of 67% and 75% respectively: about twice the 44% reported by Grant for FTC/TDF.

3. “Antiretroviral preexposure prophylaxis for heterosexual HIV transmission in Botswana” (New England Journal of Medicine 367 [2012] 423-34) by Michael C. Thigpen (corresponding author) and 23 others plus further members of the TDF2 Study Group listed in the Supplementary Appendix.
1219 individuals were studied for a median of 1.1 years, with reported efficacy of TDF/FTC at 62.2%: 1.2 and 3.1 infections per 100 person-years, respectively. The drugs did produce more “nausea (18.5% vs. 7.1%, P<0.001), vomiting (11.3% vs. 7.1%, P = 0.008), and dizziness (15.1% vs. 11.0%, P = 0.03) than the placebo group, but the rates of serious adverse events were similar (P = 0.90)” [emphasis added].
Once again it seems more than strange that the rates of serious adverse events on placebo should be the same as on the drugs: why would 7% of people healthy enough to enroll in a clinical trial experience a serious adverse event in little more than a year? When the average age was only in the 20s?
But incredible details aside, this article should never have been published: “Because of low retention and logistic limitations, we concluded the study early and followed enrolled participants through an orderly study closure rather than expanding enrollment”. It is an elementary principle that when protocols cannot be followed, “results” must not be given any credence.

4. “Antiretroviral prophylaxis for HIV infection in injecting drug users in Bangkok, Thailand (the Bangkok Tenofovir Study): a randomised, double-blind, placebo-controlled phase 3 trial” (Lancet 381 [2013] 2083-90) by Michael Martin (corresponding author) and 16 others “for the Bangkok Tenofovir Study Group”.
2413 individuals were assigned to TDF or placebo, with apparent infection rates of 0.35 and 0.68 per 100 person-years respectively, presumably from sharing of infected needles rather than from sexual transmission. One may be excused for being skeptical about this given that other studies have shown that drug abusers who don’t share needles tend to be “infected” at a greater rate than those who do share needles (section 3.3.8 in The Case against HIV).
Here again the “occurrence of serious adverse events was much the same between the two groups”, albeit ill health among drug abusers is to be expected; median age was 31. “Nausea was more common in participants in the tenofovir group than in the placebo group (p=0·002)”.
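
The headline reductions in the other three reports can be checked the same way. The sketch below simply takes the ratio of the infection rates quoted above (per 100 person-years); the papers’ own efficacy figures come from survival models, so they differ slightly from these naive ratios:

# Naive relative reductions from the quoted infection rates (per 100 person-years).
trials = {
    "Partners PrEP, TDF alone": (0.65, 1.99),
    "Partners PrEP, FTC/TDF":   (0.50, 1.99),
    "Botswana TDF2, FTC/TDF":   (1.2, 3.1),
    "Bangkok, TDF":             (0.35, 0.68),
}
for name, (drug_rate, placebo_rate) in trials.items():
    reduction = 1 - drug_rate / placebo_rate
    print(f"{name:26s} {reduction:.0%} relative reduction")
# Roughly 67%, 75%, 61%, and 49% respectively.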

Those are the data that supposedly justify administering highly toxic drugs to perfectly healthy individuals continually during their years of sexual activity.
What the reports actually demonstrate is that clinical trials can be and are biased unscrupulously to produce highly misleading “data”, “showing” for example that a drug of known toxicity is no more harmful than placebo.

In an honest world, the perpetrators of such schemes, Big Pharma and its “researcher” shills, would be charged with manslaughter if not murder.
————————————————–
[1] Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970
[2] Bernard Barber, “Resistance by scientists to scientific discovery”,  Science, 134 (1961) 596-602
[3] Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, 84-93
[4] Imre Lakatos, “History of Science and its Rational Reconstruction”, pp. 1-40 in Method and Appraisal in the Physical Sciences, ed. Colin Howson, Cambridge University Press, 1976
[5] David Healy, Pharmageddon, University of California Press, 2012
[6] Ben Goldacre, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients, Faber & Faber, 2013
[7] Peter C. Gøtzsche, Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare, Radcliffe, 2013

Posted in conflicts of interest, fraud in medicine, fraud in science, medical practices, prescription drugs, unwarranted dogmatism in science | 4 Comments »

Idiotae non carborundum

Posted by Henry Bauer on 2014/07/01

Common slang advice for coping with nincompoops is the pseudo-Latin phrase, “Illegitimi non carborundum” — “Don’t let the bastards grind you down”. But I grew up in Australia, where a common affectionate greeting to a friend ran, “How are ya, ya old bastard?”.

I have no friendly feelings at all for those who parrot shibboleths about matters of science without knowing anything about the particular subject, so I prefer the less friendly and more accurate “Idiotae non carborundum”: “idiota, idiotae = uneducated person, ignorant person, layman” (New College Latin & English Dictionary).

It does wear me down, though, especially when people with whom I agree about lots of important things hold forth on matters of science about which they know nothing. I don’t so much mind when they prattle on about carbs, proteins, vitamins, “getting your potassium from bananas”, and so forth, because that may harm only themselves and their dependents. But I do care when it’s about HIV/AIDS or global warming, because promulgating untruths about those does tangible damage to hordes of people.

The evidence that HIV doesn’t cause AIDS (The Case against HIV), and that human-caused release of carbon dioxide has not appreciably warmed the Earth, is very strong, and has been published by competent scientists for several decades. The most dispassionate and objective possible take on these issues is that the mainstream consensus remains to be translated into beyond-doubt-proven fact because of the competent fact-based objections raised against the mainstream interpretation. Yet most media and most pundits have been seduced into regarding HIV=AIDS and human-caused global warming (AGW, for anthropogenic GW) as “settled science”.

Sadly, people acquire their beliefs on these issues not from any acquaintance with the evidence but according to their political ideologies: left-leaners tend to believe one thing and right-leaners tend to believe the opposite (A politically liberal global-warming skeptic?). That’s a dreadful way to form views on matters of science. One wonders for how long a democracy can function when facts take second place to ideology.

I’m politically and socially left of center (though strongly critical of the politically correct extreme Left), and it saddens me deeply that President Obama is in the thrall of his ideologue Science Advisor John Holdren, that the most accurate view I heard recently on global warming came from a Republican politician (Marco Rubio), and that my usually favorite sources of insight on television (The Daily Show, The Colbert Report, GPS) treat global warming “skeptics” as willfully ignorant denialists.

The Daily Show of Monday, 2 June 2014, labeled as equally flat-earthish those who campaign against vaccination and those who question a human cause of global warming.
But think for just a moment about what substantive commonality there might be between those two matters.
There is absolutely none.
The only commonality is the non-substantive one that both are contrary to the contemporary official mainstream consensus. Yet the lesson of history is absolutely clear, that no contemporary mainstream scientific consensus can be counted on in the long run; indeed, the greatest scientific advances have come through overturning well established scientific dogmas, even ones that had held sway for decades. If there is one fact that everyone should know, it is that contemporary scientific experts and any contemporary scientific consensus are to be trusted just as much as, but no more than, experts on economic or social or political or religious matters. When all of them agree, then they may be right (but they may still all be wrong, as history proves). But if even a few competent ones disagree with the majority, then it is far from a settled matter. Always remember Michael Crichton on consensus: no one says there’s a consensus that E = mc²; “consensus” is invoked only when the matter is not settled.

The actual evidence for the efficacy of vaccination is of an entirely different order than the evidence for human-caused global warming. The Daily Show doesn’t understand that because it has not looked at the evidence. Vaccination against smallpox appears to have eliminated that scourge, as evidenced by the tangible fact that people don’t get smallpox nowadays. (That not all vaccinations are of proven value doesn’t gainsay that the concept has strong facts in its support — so strong, indeed, that unwarranted extrapolations of the concept have seduced large swaths of society, as with Gardasil; see Deadly vaccines.)

On the other hand, there is no good evidence at all that carbon dioxide causes global warming.

The trouble is that the AGW enthusiasts have managed to monopolize official agencies and the media, as illustrated by the pertinent entries at Wikipedia (The Wiles of Wiki). Furthermore, although there’s a vast literature debunking the claims of human-caused global warming, it’s of highly uneven quality. Books come from small or niche publishers (1,2), or they are self-published (3), often with incompetent copyediting (or none at all), perhaps lacking an index and with sourcing only to Internet sites (2). The best-presented as well as substantively sound works are much denigrated ad hominem because the authors or publishers are politically conservative (4-8).

A further difficulty in bringing dissenting views to public attention is that those who dissent from an entrenched mainstream dogma tend to become frustrated and to react in counterproductive ways (9,10). Some of the books mentioned above illustrate this with repetitive rants that detract from their substantive message.

As to the insinuation that funding by conservative viewpoints drives dissent from AGW dogma, it is at least equally true that funding drives the mainstream claim that global warming is significantly human-caused. Huge resources are available from governments and official agencies for research specifically on human-caused global warming because the dogma is promulgated by  “The World Meteorological Organization (WMO) [which] is a specialized agency of the United Nations. It is the UN system’s authoritative voice on the state and behaviour of the Earth’s atmosphere, its interaction with the oceans, the climate it produces and the resulting distribution of water resources”. Together with the United Nations Environment Programme (UNEP), “the voice for the environment within the United Nations system”, in 1988 WMO established the Intergovernmental Panel on Climate Change (IPCC) which issues periodic reports about how human activities affect the climate.

Yet the case against the mainstream consensus includes some indisputable and easily understood points. For one, the consensus is based on computer models that are inherently, inevitably incapable of reflecting accurately the complex interactions among innumerable variables that determine global climate (2:111ff., 11). Further, the models consider only very recent times, a century or two, and fail to account properly for the hotter temperatures of only a millennium ago (the Medieval Warm Period) or for the even more recent Little Ice Age, which made it inevitable that present times would be experiencing warming from entirely natural causes. And none of the models can account for the undisputed fact that there has been no warming for at least the last 15-18 years despite significant increases in atmospheric carbon dioxide:

[Chart: global temperature record showing no warming trend over roughly the past 17 years]

In ignorance of these facts, Fareed Zakaria’s GPS (CNN TV) of 29 June 2014 featured the bipartisan pair of esteemed economic experts, Henry Paulson and Robert Rubin, promulgating the Risky Business Report, “The Economic Risks of Climate Change”, which accepts without question the most dire predictions made by the proponents of worst-case AGW: “Risk catastrophic to life on Earth as we know it”, said Rubin.

What the media fail to reveal, culpably and unforgivably, is that even the IPCC’s own Scientific Reports make abundantly clear that the computer modeling is beset with inescapable uncertainties, whereas the IPCC’s “Summaries for Policy Makers”, released to press and public before the Scientific Reports, portray the role of carbon dioxide as established beyond doubt and its consequences as terrifying.

(The same tactic is used by UNAIDS, where a Foreword or Preface signed by some eminent person lays out the horrible consequences of the continuing epidemic, while the actual data in the body of the Report contradict those projections [10: 197ff.]. What media and public need to know is that “Official reports are not scientific publications” [10: chapter 8].)

Those who insist on the catastrophic progression of human-caused global warming are either self-interested because their careers are vested in that conclusion or they are the idiotae of this blog-post’s title, people who take on faith what the mainstream scientific consensus is and then do not hesitate to parrot it and to malign dissenters who know far more about the issues than they do. Thus a lawyer holds forth about “Overheated: The Human Cost of Climate Change”, and a respected academic press (Oxford) publishes him (12). A compendium about “Junk Science” (13) is rightly critical of many things but is woefully ignorant about the credentials of global warming “skeptics” (and I’m always suspicious when both author and well established publisher feel the need to emphasize the author’s “Ph.D.” on the title page).

There is no lack of examples in hardcover and in softcover and in “news” reports and television punditry and internet blogs and comments that idiotae feel free to hold forth passionately for or against, depending quite predictably on political ideology and displaying no interest in the actual evidence.

Anyone who wants an informed opinion needs to dig into the evidence. Eventually, fairly well documented and fairly evenhanded sources can be found. They are easily recognized by relatively measured tone and by concentration on evidence instead of ad hominem charges. My recommendation currently goes to Warren Meyer’s site Climate Skeptic.
——————————————————————
1 A. W. Montford, The Hockey Stick Illusion: Climategate and the Corruption of Science, Stacey International, 2010
2 Tim Ball, The Deliberate Corruption of Climate Science, Stairway Press, 2014
3 David Dilley, Natural Climate Pulse Global Warming — Global Cooling — Carbon Dioxide, free download at http://www.globalweatheroscillations.com/#!climate-pulse-e-book/cav2
4 S. Fred Singer & Frederick Seitz, Hot Talk, Cold Science: Global Warming’s Unfinished Debate, The Independent Institute, 1999
5 S. Fred Singer and Dennis T. Avery, Unstoppable Global Warming: Every 1,500 Years, Rowman & Littlefield, 2008
6 Patrick J. Michaels, Meltdown: The Predictable Distortion of Global Warming by Scientists, Politicians, and the Media, Cato Institute, 2005
7 Roy W. Spencer, The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists, Encounter Books, 2010
8 Brian Sussman, Climategate: A Veteran Meteorologist Exposes the Global Warming Scam, WND Books, 2010
9 Henry H. Bauer, Confession of an “AIDS Denialist”: How I became a crank because we’re being lied to about HIV/AIDS, pp. 278-82 in YOU ARE STILL BEING LIED TO: The REMIXED Disinformation Guide to Media Distortion, Historical Whitewashes and Cultural Myths, ed. Russ Kick, The Disinformation Company, 2009
10 Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012, p. 251
11 Ian Plimer, Heaven and Earth: Global Warming — The Missing Science, Taylor Trade Publishing, 2009
12 Andrew T. Guzman, Overheated: The Human Cost of Climate Change, Oxford University Press, 2013
13 Dan Agin, Ph.D., Junk Science: How Politicians, Corporations, and Other Hucksters Betray Us, Thomas Dunne Books, St. Martin’s Press, 2006

 

 

Posted in conflicts of interest, consensus, funding research, global warming, media flaws, politics and science, science is not truth, science policy, scientific culture, unwarranted dogmatism in science | 2 Comments »