Skepticism about science and medicine

In search of disinterested science

HPV vaccines: risks exceed benefits

Posted by Henry Bauer on 2017/07/09

“Vaccination” is publicly argued in black/white, yes/no fashion, as though one had to be either for or against ALL vaccinations. But the fact is that the benefits of some vaccinations far outweigh the dangers of occasional harmful “side” effects, whereas with other vaccines that is not clear. Polio vaccine, for example, seems to have been wonderfully effective and is still so in many countries; on the other hand, in regions where polio is no longer endemic, the risk of contracting polio from the oral vaccine exceeds the danger of contracting it when not vaccinated (see links near the end of What to believe? Science is a red herring and a wild-goose chase).

Immune systems are complex and not fully understood, and there are individual variations galore — as when one of my friends came down with shingles shortly after being vaccinated against shingles. (The doctor of course assured him that the outbreak would have been more painful had he not been vaccinated; an ex cathedra assertion without possibility of verification.)

I was reminded of the issue of HPV vaccination by a brouhaha in Europe between the European Medicines Agency (EMA) and medical practitioners and researchers who had come across a substantial number of cases of harm seemingly following HPV vaccination, specifically in the form of chronic autoimmune ailments. Since vaccination affects the immune system, such an undesired effect in some individuals seems perfectly plausible.

The Nordic Cochrane Center exists for the purpose of evaluating the evidence underlying medical practices. The Cochrane Center and others have been campaigning for many years to have the data from clinical trials made available to all researchers (1). Last year it lodged a complaint (2) against the EMA for conflicts of interest with drug companies, exacerbated by the secrecy of the discussions that led to criticism of physicians’ reports about autoimmune symptoms appearing after vaccination against HPV. That secrecy is truly extraordinary, virtually an admission of conspiracy: “experts who are involved in the process are not named and are bound by lifelong secrecy about what was discussed” (3).

An EMA publication severely criticized Louise Brinth and others who had published reports of autoimmune symptoms following vaccination (4); Brinth has delivered a blistering response to the EMA’s insinuations (5).

The supposed benefit of vaccinating against HPV is to decrease the risk of certain cancers, primarily of the cervix. There are perhaps a hundred types of HPV, of which about 40 are sexually transmitted, and two to four of these seem to be statistically correlated with cancer:
“High-risk HPV strains include HPV 16 and 18 . . . . Other high-risk HPV viruses include 31, 33, 45, 52, 58, and a few others. Low-risk HPV strains, such as HPV 6 and 11, cause about 90% of genital warts, which rarely develop into cancer” (What is HPV?).

HPV infections are the most common sexually transmitted infection: “HPV is so common that nearly all sexually active men and women get the virus at some point in their lives” (Human Papillomavirus (HPV) Statistics). Thus most infections do not lead to cancer, which might induce thought about what “cause” could mean in this context. About 4% of American women — roughly 6 million, out of a female population of about 160 million in a total of 320 million — are infected each year with a “high-risk” strain. There are only about 12,000 cases annually of cervical cancer: thus only about 1 in 500 even of “high-risk” infections is associated with this cancer. Thus vaccinating about 500 “high-risk” women might prevent 1 cervical cancer; NNT (the number needed to be treated for 1 person to benefit) = 500.

On the other hand, there appears to be about 1 chance in 200 of an adverse effect from vaccination with Gardasil (Gardasil and the sad state of present-day medical practices); about 8% (~1 in 12) of adverse events are “serious”, so there is about 1 chance in 2500 of a serious adverse event. NNH (the number needed to be treated for one person to be seriously harmed) = 2500.

For any medical treatment to be desirable, many more people should need to be treated to harm a single one than to benefit a single one; NNH should exceed NNT by a substantial amount.
The numbers just mentioned yield a ratio of only 5 — in other words, there’s something like a 1 in 5 chance, 20%, that HPV vaccination would harm rather than benefit. But those numbers apply only if vaccination were restricted to women infected with high-risk strains. The advocates of HPV vaccination, however, who include official agencies in the USA and some other countries, recommend HPV vaccination for all girls. That increases NNT by a factor of 25 and drastically reverses the benefit/cost ratio: it is 5 times more likely that an HPV vaccination will result in a serious adverse event than that the vaccination prevents a case of cervical cancer — even if HPV is the actual cause of cervical cancer, which remains to be proved beyond a mere weak statistical correlation.
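The arithmetic above is easy to check. Here is a minimal sketch in Python using the round numbers quoted in this post — a 4% annual rate of high-risk infection, ~12,000 cervical-cancer cases per year, a 1-in-200 adverse-event rate with ~8% of those serious; these are this post’s estimates, not authoritative figures:

```python
# NNT/NNH arithmetic with the round figures quoted above.
# All inputs are this post's estimates, not authoritative data.

women_usa = 160_000_000              # roughly half of a 320-million population
high_risk_rate = 0.04                # ~4% infected yearly with a "high-risk" strain
cancers_per_year = 12_000            # annual cervical-cancer cases

high_risk_infections = women_usa * high_risk_rate           # ~6.4 million
nnt_high_risk = high_risk_infections / cancers_per_year     # ~500

adverse_event_rate = 1 / 200         # chance of any adverse event per vaccination
serious_fraction = 0.08              # ~8% of adverse events are "serious"
nnh = 1 / (adverse_event_rate * serious_fraction)           # 2500

print(f"NNT (high-risk women only): {nnt_high_risk:.0f}")   # ~530
print(f"NNH (serious adverse event): {nnh:.0f}")            # 2500
print(f"NNH/NNT: {nnh / nnt_high_risk:.1f}")                # ~5

# Vaccinating all girls rather than only the ~4% with high-risk
# infections multiplies NNT by 1/0.04 = 25 and reverses the ratio:
nnt_all_girls = nnt_high_risk / high_risk_rate              # ~13,000
print(f"NNT (all girls): {nnt_all_girls:.0f}")
print(f"NNH/NNT (all girls): {nnh / nnt_all_girls:.2f}")    # ~0.19: harm ~5x likelier
```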

It is simply not known whether HPV causes cancer at all. Certainly it does not always cause cancer. An extended article on the invaluable Snopes.com website (which debunks urban legends) is judicious on this matter, pointing out that the claimed association of HPV vaccination with autoimmune symptoms is only speculative. On the other hand, it also concludes, in an update of 12 June 2017:
“An earlier version of this story incorrectly stated that countries with high HPV vaccination rates show declines in cervical cancer diagnoses. Both Gardasil and Cervarix have demonstrated efficacy in preventing HPV infections that cause cervical cancer, and evidence suggests declines in precancerous lesions and other abnormal growths as a result of HPV vaccination. There is debate over evidence for declines in cervical cancer diagnoses — as well as over how much time it would take after the introduction of the vaccine to see any effect on cancer diagnoses” [italics added].

The vaccines against HPV are successful against HPV — but it has never been proved that HPV (or the four strains of it supposed to be associated with cervical cancer) actually causes cancer. Since the rate of HPV infections exceeds the rate of cervical cancer by a huge amount, any “causative” action of HPV must be very indirect, especially since only a small percentage of HPV strains shows even a statistical association with cancer.
Recall that the usual test of “statistical significance” in medicine is p ≤ 0.05, meaning that, if there were no real association, data this extreme would arise by chance less than 5% of the time. If there are 100 possible associations, about 5 of them will seem significant even though they are not, being picked out purely by chance because of the (weak!) criterion for statistical significance (6). If there are 100 strains of HPV, then at p ≤ 0.05, purely by chance about 5 strains will seem to be correlated with cervical cancer — or with just about anything else.
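That multiple-comparisons point can be demonstrated in a few lines of simulation. The sketch below is illustrative only — it involves no real HPV data; it tests 100 truly null associations at p ≤ 0.05, repeats the exercise many times, and counts how many come out “significant” purely by chance:

```python
# Simulate testing 100 truly null associations at p <= 0.05.
# Under the null hypothesis a p-value is uniformly distributed on [0, 1],
# so each test has a 5% chance of a spurious "significant" result.
import random

random.seed(1)

trials = 10_000
spurious_counts = []
for _ in range(trials):
    significant = sum(1 for _ in range(100) if random.random() <= 0.05)
    spurious_counts.append(significant)

print(sum(spurious_counts) / trials)  # ~5.0 false "correlations" per 100 tests
```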
Before accepting any role for HPV in cervical cancer, one should want a demonstration of the mechanism of the claimed causative effect.

***************************************
(1) “Opening up data at the European Medicines Agency”, Peter Gøtzsche & Anders Jørgensen, British Medical Journal, 342 (28 May 2011) 1184-6; “EMA must improve the quality of its clinical trial reports”, Corrado Barbui, Cinzia Baschirotto & Andrea Cipriani, ibid., 1187-9
(2) Complaint to the European Medicines Agency (EMA) over maladministration at the EMA, 26 May 2016
(3) “Complaint filed over EMA’s handling of HPV Vaccine safety issues”, Zosia Chustecka, 5 July 2016
(4) “Suspected side effects to the quadrivalent human papilloma vaccine”, Louise Brinth, Ann Cathrine Theibel, Kirsten Pors & Jesper Mehlsen, Danish Medical Journal, 62 (#4, 2015) A5064
(5) “Responsum to Assessment Report on HPV-vaccines released by EMA November 26th 2015” by Louise Brinth, MD PhD, Syncope Unit, Bispebjerg and Frederiksberg Hospital, Copenhagen, December 15th 2015
(6) For a thorough discussion of the pitfalls of interpreting p values, see Gerd Gigerenzer, “Mindless Statistics”, Journal of Socio-Economics, 33 (2004) 587-606.

Posted in medical practices, prescription drugs, science policy, unwarranted dogmatism in science | 2 Comments »

What science says about global warming and climate change

Posted by Henry Bauer on 2017/07/06

There is strong evidence that global temperatures are not significantly dependent on the amount of carbon dioxide in the atmosphere (Climate-change facts: Temperature is not determined by carbon dioxide).

That’s what science — the evidence, the facts — says.

Nevertheless, the belief overwhelmingly widespread among the public and governments is the opposite: that carbon dioxide is the single most important determinant of global temperature and climate.

How could such a disparity between fact and public belief come about?

President Eisenhower foresaw the possibility half a century ago:
“in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite” (Farewell speech, 17 January 1961).

Such influence of a scientific-technological elite is possible because “science” has come to be believed superstitiously: on authority, not because it offers sound evidence and logic (Superstitious belief in science). A number of popular misunderstandings about science conspire to maintain this state of affairs, notably a failure to appreciate how drastically scientific activity changed after World War II; science nowadays is not self-correcting, and it does not follow the so-called scientific method. A full discussion of those points is in my just-published Science Is Not What You Think — How it has changed, Why we can’t trust it, How it can be fixed.

The “fix” refers to the possible establishment of a Science Court to adjudicate expert differences over technical issues. That was first suggested more than half a century ago when the experts were at loggerheads and arguing publicly over whether power could be generated safely using nuclear reactors.
More recently, some legal scholars have pointed out that such an institution could help the legal system to cope with cases where technical issues play an important role.
Beyond that, I suggest that a Science Court is needed to force the prevailing “scientific consensus” to respond substantively to critiques like those made by the many critics of human-caused global warming and climate change.

Posted in consensus, global warming, politics and science, science is not truth, science policy, scientific literacy, scientism | 1 Comment »

Has all academic publishing become predatory? Or just useless? Or just vanity publishing?

Posted by Henry Bauer on 2017/06/14

A pingback to my post “Predatory publishers and fake rankings of journals” led me to “Where to publish and not to publish in bioethics – the 2017 list”.

That essay brings home just how pervasive for-profit publishing of purportedly scholarly material has become. The sheer volume of the supposedly scholarly literature raises the question: who looks at any part of it?

One of the essay’s links leads to a listing by the Kennedy Center for Ethics of 44 journals in the field of bioethics. Another link leads to a list of the “Top 100 Bioethics Journals in the World, 2015” by the author of the earlier “Top 50 Bioethics Journals and Top 250 Most Cited Bioethics Articles Published 2011-2015”.

What, I wonder, does any given bioethicist actually read? How many of these journals have even their Table of Contents scanned by most bioethicists?

Beyond that: Surely the potential value of scholarly work in bioethics is to improve the ethical practices of individuals and institutions in the real world. How does this spate of published material contribute to that potential value?

Those questions are purely rhetorical, of course. I suggest that the overwhelming mass of this stuff has no influence whatever on actual practices by doctors, researchers, clinics and other institutions.

This literature does, however, support the existence of a body of bioethicists whose careers are tied in some way to the publication of articles about bioethics.

The same sort of thing applies nowadays in every field of scholarship and science. The essay’s link to Key Journals in The Philosopher’s Index brings up a 79-page list, 10 items per page, of key [!] journals in philosophy.

This profusion of scholarly journals not only supports communities of publishing scholars in each field, it also nurtures an expanding community of meta-scholars whose publications deal with the profusion of publication. The earliest work in this genre was the Science Citation Index, which capitalized on information technology to compile indexes through which researchers could discover which of their published works had been cited, and where.

That was unquestionably useful, including by making it possible to discover people working in one’s own specialty. But misuse became abuse, as administrators and bureaucrats began simply to count how often an individual’s work had been cited and to equate that number with quality.

No matter how often it has been pointed out that this equation is so wrong as to be beyond rescuing, the attraction of supposedly objective numbers, and the ease of obtaining them, have made citation-counting an apparently permanent part of the scholarly enterprise.

Not only that. The practice has been extended to judging the influence a journal has by counting how often the articles in it are cited, yielding a “journal impact factor” that, again, is typically conflated with quality — no matter how often or how learnedly the meta-scholars point out the fallacies in that equation: for example, different citing practices in different fields, editorial practices that sometimes limit the number of permitted citations, and the frequent citation of work that had been thought important but turned out to be wrong.
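For readers who have not seen it spelled out: the conventional two-year impact factor is nothing more than an average citation count, as this minimal sketch (with hypothetical numbers) shows — which is precisely why it says nothing about the quality of any individual article:

```python
# The conventional two-year journal impact factor: citations received in
# year Y to items the journal published in years Y-1 and Y-2, divided by
# the number of "citable items" published in those two years.
def impact_factor(citations_in_y: int, items_y1: int, items_y2: int) -> float:
    return citations_in_y / (items_y1 + items_y2)

# Hypothetical journal: 300 citations in 2016 to its 2014-15 articles.
print(impact_factor(300, 120, 130))  # 1.2
```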

The scholarly literature had become absurdly voluminous even before the advent of on-line publishing. Meta-scholars had already learned several decades ago that most published articles are never cited by anyone other than the original author(s): see for instance J. R. Cole & S. Cole, Social Stratification in Science (University of Chicago Press, 1973); Henry W. Menard, Science: Growth and Change (Harvard University Press, 1971); Derek de Solla Price, Little Science, Big Science … And Beyond (Columbia University Press, 1986).

Derek Price (Science Since Babylon, Yale University Press, 1975) had also pointed out that the exponential growth of science since the 17th century had to cease in the latter half of the 20th century, since science was by then consuming several percent of the GDP of developed countries. And indeed the growth of research funds has ceased; but the advent of the internet has made it possible for publication to continue to grow exponentially.

Purely predatory publishing has added more useless material to what was already unmanageably voluminous, with only rare needles in these haystacks that could be of any actual practical use to the wider society.

Since almost all of this publication has to be paid for by the authors or their research grants or patrons, one could also characterize present-day scholarly and scientific publication as vanity publishing, benefiting only the author(s) — except that this glut of publishing now supports yet another publishing community: the scholars of citation indexes and journal impact factors, who concern themselves, for example, with “Google h5 vs Thomson Impact Factor” or who offer advice for potential authors, evaluators, and administrators about “publishing or perishing”.

To my mind, the most damaging aspect of all this is not the waste of time and material resources on producing useless stuff; it is that judgment of quality by informed, thoughtful individuals is being steadily displaced by reliance on numbers generated via information technology, by procedures that all thinking people understand to be invalid substitutes for informed, thoughtful human judgment.


Posted in conflicts of interest, funding research, media flaws, scientific culture | 3 Comments »

How to interpret statistics; especially about drug efficacy

Posted by Henry Bauer on 2017/06/06

How (not) to measure the efficacy of drugs pointed out that the most meaningful data about a drug are the number of people who need to be treated for one person to reap benefit, NNT, and the number who need to be treated for one person to be harmed, NNH.

But this pertinent, useful information is rarely disseminated, and most particularly not by drug companies. What are most commonly cited are statistics about a drug’s performance relative to other drugs or to placebo. Just how misleading this can be is described in easily understood form in this discussion of the use of anti-psychotic drugs.
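A made-up example (the numbers below are illustrative, not taken from the article discussed) shows why relative statistics flatter a drug: halving a risk sounds dramatic even when almost nobody treated actually benefits:

```python
# Relative vs absolute risk: hypothetical numbers, not data from
# the Whitaker article discussed below.
placebo_risk = 0.02   # 2% of untreated patients have the bad outcome
drug_risk = 0.01      # 1% of treated patients do

relative_risk_reduction = (placebo_risk - drug_risk) / placebo_risk
absolute_risk_reduction = placebo_risk - drug_risk
nnt = 1 / absolute_risk_reduction   # people treated for one to benefit

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50% -- sounds dramatic
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")  # 1%
print(f"NNT: {nnt:.0f}")            # 100: 99 of 100 treated get no benefit
```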


That article (“Psychiatry defends its antipsychotics: a case study of institutional corruption” by Robert Whitaker) has many other points of interest. Most important, of course, is the potent demonstration that official psychiatric practice is not evidence-based; rather, its aim is to defend the profession’s current approach.


In these ways, psychiatry differs only in degree from the whole of modern medicine — see WHAT’S WRONG WITH PRESENT-DAY MEDICINE — and indeed from contemporary science on too many matters: Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland, 2012.

Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, scientific culture, unwarranted dogmatism in science | Leave a Comment »

Vaccines: The good, the bad, and the ugly

Posted by Henry Bauer on 2017/05/21

Only in recent years have I begun to wonder whether there are reasons not to follow official recommendations about vaccination. In the 1930s, I had the then-usual vaccinations, including (in Austria, perhaps throughout Europe) one against smallpox. I had a few others in later years, when I traveled quite a bit.

But the Andrew Wakefield affair *, and the introduction of Gardasil **, showed me that official sources had become as untrustworthy about vaccines as they have become about prescription drugs.

It seems that Big Pharma had just about run out of new diseases to invent against which to create drugs and had turned to snake-oil marketing of vaccines. We are told, for example, that 1 in 3 people will experience shingles in their lifetime and should get vaccinated against it. Has anything like one in three of your aged friends ever had shingles? Not among my family and friends. One of my buddies got himself vaccinated, and came down with shingles a couple of weeks later. His physician asserted that the attack would have been more severe if he hadn’t been vaccinated — no need for a control experiment, or any need to doubt official claims.

So it’s remarkable that the Swedish Government has resisted attempts to make vaccinations compulsory (“Sweden bans mandatory vaccinations over ‘serious health concerns’” by Baxter Dmitry, 12 May 2017).

That article includes extracts from an interview of Robert F. Kennedy, Jr., on the Tucker Carlson Show, which included such tidbits as the continued presence of thimerosal (an organic mercury compound) in many vaccines, including the seasonal flu vaccines that everyone is urged to get; and the huge increase in the number of things against which vaccination is being recommended:

“I got three vaccines and I was fully compliant. I’m 63 years old. My children got 69 doses of 16 vaccines to be compliant. And a lot of these vaccines aren’t even for communicable diseases. Like Hepatitis B, which comes from unprotected sex, or using or sharing needles – why do we give that to a child on the first day of their life? And it was loaded with mercury.”


————————————————–

* See “Autism and Vaccines: Can there be a final unequivocal answer?” and “YES: Thimerosal CAN induce autism”

** See “Gardasil and Cervarix: Vaccination insanity” and many other posts recovered with SEARCH for “Gardasil” on my blogs: https://scimedskeptic.wordpress.com/?s=gardasil and https://hivskeptic.wordpress.com/?s=gardasil

Posted in fraud in medicine, legal considerations, medical practices, politics and science, prescription drugs, science is not truth, science policy, unwarranted dogmatism in science | Leave a Comment »

Superstitious belief in science

Posted by Henry Bauer on 2017/05/16

Most people have a very mistaken, unrealistic view of “science”. A very damaging consequence is that scientific claims are given automatic respect even when that is unwarranted — as it always is with new claims, say about global warming. Dramatic changes in how science is done, especially since the mid-20th century, have made it less trustworthy than it once was.

In 1987, historian John Burnham published How Superstition Won and Science Lost, arguing that modern science had not vanquished popular superstition by inculcating scientific, evidence-based thinking; rather, science had itself become the accepted authority on worldly matters, whose pronouncements are believed without question — in other words, superstitiously — by society at large.

Burnham argued this through a detailed analysis of how science is popularized, and especially of how that has changed over the decades. Some 30 years later, Burnham’s insight is perhaps even more important. Over those years, certain changes in scientific activity have also become evident that support Burnham’s conclusion from different directions: science has grown so much, and has become so specialized and bureaucratic and dependent on outside patronage, that it has lost any ability to self-correct. As with religion in medieval times, official pronouncements about science are usually accepted without further ado, and minority voices of dissent are dismissed and denigrated.

A full discussion with source references, far too long for a blog post, is available here.

Posted in conflicts of interest, consensus, denialism, politics and science, science is not truth, scientific culture, scientific literacy, scientism, scientists are human, unwarranted dogmatism in science | Leave a Comment »

Climate-change orthodoxy: alternative facts, uncertainty equals certainty, projections are not predictions, and other absurdities of the “scientific consensus”

Posted by Henry Bauer on 2017/05/10

G. K. Chesterton once suggested that the best argument for accepting the Christian faith lies in the reasons offered by atheists and skeptics against doing so. That interesting slant sprang to mind as I was trying to summarize the reasons for not believing the “scientific consensus” that blames carbon dioxide for climate change.

Of course the very best reason for not believing that CO2 causes climate change is the data, as summarized in an earlier post:

–> Global temperatures have often been high while CO2 levels were low, and vice versa

–> CO2 levels rise or fall after temperatures have risen or fallen

–> Temperatures decreased between the 1940s and 1970s, and since about 1998 there has been a pause in warming, perhaps even cooling, while CO2 levels have risen steadily.

But disbelieving the official propaganda becomes much easier when one recognizes the sheer absurdities and illogicalities and self-contradictions committed unceasingly by defenders of the mainstream view.

1940s-1970s cooling
Mainstream official climate science is centered on models: computer programs that strive to simulate real-world phenomena. Any reasonably detailed description of such models soon reveals that there are far too many variables and interactions to make that feasible; and moreover that a host of assumptions are incorporated in all the models (1). In any case, the official models do not simulate the cooling trend of these three decades.
“Dr. James Hansen suspects the relatively sudden, massive output of aerosols from industries and power plants contributed to the global cooling trend from 1940-1970” (2).
But the models do not take aerosols into account; they are so flawed that they are unable to simulate a thirty-year period in which carbon emissions were increasing and temperatures decreasing. An obvious conclusion is that no forecast based on those models deserves to be given any credence.

One of the innumerable science-groupie web-sites expands on the aerosol speculation:
“40’s to 70’s cooling, CO2 rising?
This is a fascinating denialist argument. If CO2 is rising, as it was in the 40’s through the 70’s, why would there be cooling?
It’s important to understand that the climate has warmed and cooled naturally without human influence in the past. Natural cycle, or natural variability need to be understood if you wish to understand what modern climate forcing means. In other words modern or current forcing is caused by human industrial output to the atmosphere. This human-induced forcing is both positive (greenhouse gases) and negative (sulfates and aerosols).”

Fair enough; but the models fail to take account of natural cycles.

Rewriting history
The Soviet Union had an official encyclopedia that was revised as needed, for example by rewriting history to delete or insert people and events to correspond with a given day’s political correctness. Some climate-change enthusiasts also try to rewrite history: “There was no scientific consensus in the 1970s that the Earth was headed into an imminent ice age. Indeed, the possibility of anthropogenic warming dominated the peer-reviewed literature even then” (3). Compare that with the host of reproduced and cited headlines from those cold decades, when media alarms were set off by what the “scientific consensus” of the day actually was (4). And the cooling itself was, of course, real, as is universally acknowledged nowadays.

The media faithfully report what officialdom disseminates. Routinely, any “extreme” weather event is ascribed to climate change — anything worth featuring as “breaking news”, say tsunamis, hurricanes, bushfires in Australia and elsewhere. But the actual data reveal no increase in extreme events in recent decades: not Atlantic storms, nor Australian cyclones, nor US tornadoes, nor “global tropical cyclone accumulated energy”, nor extremely dry periods in the USA, in the last 150 years during which atmospheric carbon dioxide increased by 40% (pp. 46-51 in (1)). Nor have sea levels been rising in any unusual manner (Chapter 6 in (1)).

Defenders of climate-change dogma tie themselves in knots about whether carbon dioxide has already affected climate, and whether its influence is to be seen in short-term changes or only over the long term. For instance, the attempt to explain the 1940s-70s cooling presupposes that CO2 is to be indicted only for changes over much longer time-scales than mere decades. Perhaps the ultimate demonstration of wanting to have it both ways — only long-term, but also short-term — is a pamphlet issued jointly by the Royal Society of London and the National Academy of Sciences of the USA (5, 6).

No warming since about 1998
Some official sources deny that there has been any cessation of warming in the new century or millennium. Others admit it indirectly by attempting to explain it away or dismiss it as irrelevant, for instance “slowdowns and accelerations in warming lasting a decade or more will continue to occur. However, long-term climate change over many decades will depend mainly on the total amount of CO2 and other greenhouse gases emitted as a result of human activities” (p. 2 in (5)); “shorter-term variations are mostly due to natural causes, and do not contradict our fundamental understanding that the long-term warming trend is primarily due to human-induced changes in the atmospheric levels of CO2 and other greenhouse gases” (p. 11 in (5)).

Obfuscating and misdirecting
The Met Office, the UK’s National Meteorological Service, is very deceptive about the recent lack of warming:

“Should climate models have predicted the pause?
Media coverage … of the launch of the 5th Assessment Report of the IPCC has again said that global warming is ‘unequivocal’ and that the pause in warming over the past 15 years is too short to reflect long-term trends.

[No one disputes the reality of long-term global warming — the issue is whether natural forces are responsible as opposed to human-generated carbon dioxide]

… some commentators have criticised climate models for not predicting the pause. …
We should not confuse climate prediction with climate change projection. Climate prediction is about saying what the state of the climate will be in the next few years, and it depends absolutely on knowing what the state of the climate is today. And that requires a vast number of high quality observations, of the atmosphere and especially of the ocean.
On the other hand, climate change projections are concerned with the long view; the impact of the large and powerful influences on our climate, such as greenhouse gases.

[Implying sneakily and without warrant that natural forces are not “large and powerful”. That is quite wrong and it is misdirection, the technique used by magicians to divert attention from what is really going on. By far the most powerful force affecting climate is the energy coming from the sun.]

Projections capture the role of these overwhelming influences on climate and its variability, rather than predict the current state of the variability itself.
The IPCC model simulations are projections and not predictions; in other words the models do not start from the state of the climate system today or even 10 years ago. There is no mileage in a story about models being ‘flawed’ because they did not predict the pause; it’s merely a misunderstanding of the science and the difference between a prediction and a projection.
[Misdirection again. The IPCC models failed to project or predict the lack of warming since 1998, and also the cooling of three decades after 1940. The point is that the models are inadequate, so neither predictions nor projections should be believed.]

… the deep ocean is likely a key player in the current pause, effectively ‘hiding’ heat from the surface. Climate model projections simulate such pauses, a few every hundred years lasting a decade or more; and they replicate the influence of the modes of natural climate variability, like the Pacific Decadal Oscillation (PDO) that we think is at the centre of the current pause.
[Here is perhaps the worst instance of misleading. The “Climate model projections” that are claimed to “simulate such pauses, a few every hundred years lasting a decade or more” are not made with the models that project alarming human-caused global warming, they are ad hoc models that explore the possible effects of variables not taken into account in the overall climate models.]”

The projections — which the media (as well as people familiar with the English language) fail to distinguish from predictions — that indict carbon dioxide as the cause of climate change are based on models that do not incorporate possible effects of deep-ocean “hidden heat” or such natural cycles as the Pacific Decadal Oscillation. Those, and other factors such as aerosols, are considered only in trying to explain why the climate models are wrong — which is the crux of the matter. The climate models are wrong.

Asserting that uncertainty equals certainty
The popular media disseminated faithfully and uncritically from the most recent official report that “Scientists are 95% certain that humans are responsible for the ‘unprecedented’ warming experienced by the Earth over the last few decades”.

Leave aside that the warming cannot be known to be “unprecedented” — global temperatures have been much higher in the past, and historical data are not fine-grained enough to compare rates of warming over such short time-spans as mere decades or centuries.

There is no such thing as “95% certainty”.
Certainty means 100%; anything else is a probability, not a certainty.
A probability of 95% may seem very impressive — until it is translated into its corollary: 5% probability of being wrong; and 5% is 1 in 20. I wouldn’t bet on anything that’s really important to me if there’s 1 chance in 20 of losing the bet.
So too with the frequent mantra that 97% or 98% of scientists, or some other superficially impressive percentage, support the “consensus” that global warming is owing to carbon dioxide (7):


“Depending on exactly how you measure the expert consensus, it’s somewhere between 90% and 100% that agree humans are responsible for climate change, with most of our studies finding 97% consensus among publishing climate scientists.”

In other words, 3% (“on average”) of “publishing climate scientists” disagree. And the history of science teaches unequivocally that even a 100% scientific consensus has in the past been wrong, most notably on the most consequential matters, those that advanced science spectacularly in what are often called “scientific revolutions” (8).
Furthermore, “publishing climate scientists” biases the scales a great deal, because peer review ensures that dissenting evidence and claims do not easily get published. In any case, those percentages are based on surveys incorporating inevitable flaws (sampling bias as with peer review, for instance). The central question is, “How convinced are you that most recent and near future climate change is, or will be, the result of anthropogenic causes”? On that, the “consensus” was only between 33% and 39%, showing that “the science is NOT settled” (9; emphasis in original).

Science groupies — unquestioning accepters of “the consensus”
The media and countless individuals treat the climate-change consensus dogma as Gospel Truth, leading to such extraordinary proposals as that by Philippe Sands QC, Professor of Law, that “False claims from climate sceptics that humans are not responsible for global warming and that sea level is not rising should be scotched by an international court ruling”.

I would love to see any court take up the issue, which would allow us to make defenders of the orthodox view attempt to explain away all the data which demonstrate that global warming and climate change are not driven primarily by carbon dioxide.

The central point

Official alarms and established scientific institutions rely not on empirical data — established facts about temperature and CO2 — but on computer models that are demonstrably wrong.

Those of us who believe that science should be empirical, that it should follow the data and change theories accordingly, become speechless in the face of climate-change dogma defended in the manner described above. It would be screamingly funny, if only those who do it were not our own “experts” and official representatives (10). Even the Gods are helpless in the face of such determined ignoring of reality (11).

___________________________________

(1)    For example, chapter 10 in Howard Thomas Brady, Mirrors and Mazes, 2016; ISBN 978-1522814689. For a more general argument that models are incapable of accurately simulating complex natural processes, see O. H. Pilkey & L. Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future, Columbia University Press, 2007
(2)    “40’s to 70’s cooling, CO2 rising?”
(3)    Thomas C. Peterson, William M. Connolley & John Fleck, “The myth of the 1970s global cooling scientific consensus”, Bulletin of the American Meteorological Society, September 2008, 1325-37
(4)    “History rewritten, Global Cooling from 1940 – 1970, an 83% consensus, 285 papers being ‘erased’”; 1970s Global Cooling Scare; 1970s Global Cooling Alarmism
(5)    Climate Change: Evidence & Causes — An Overview from the Royal Society and the U.S. National Academy of Sciences, National Academies Press; ISBN 978-0-309-30199-2
(6)    Relevant bits of (5) are cited in a review: Henry H. Bauer, “Climate-change science or climate-change propaganda?”, Journal of Scientific Exploration, 29 (2015) 621-36
(7)    The 97% consensus on global warming
(8)    Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970; Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596-602; Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84-93; Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002
(9)    Dennis Bray, “The scientific consensus of climate change revisited”, Environmental Science & Policy, 13 (2010) 340-50; see also “The myth of the Climate Change ‘97%’”, Wall Street Journal, 27 May 2014, p. A.13, by Joseph Bast & Roy Spencer
(10) My mother’s frequent repetitions engraved in my mind the German folk-saying, “Wenn der Narr nicht mein wär’, lacht’ ich mit”. Google found it in the Deutsches sprichwörter-lexikon edited by Karl Friedrich Wilhelm Wander (#997, p. 922)
(11)  “Mit der Dummheit kämpfen Götter selbst vergebens”; Friedrich Schiller, Die Jungfrau von Orleans.


Posted in consensus, denialism, global warming, media flaws, peer review, resistance to discovery, science is not truth, science policy, scientism, unwarranted dogmatism in science | 6 Comments »

Climate-change facts: Temperature is not determined by carbon dioxide

Posted by Henry Bauer on 2017/05/02

The mainstream claims about carbon dioxide, global warming, and climate change, parroted by most media and accepted by most of the world’s governments, are rather straightforward: carbon dioxide released by the burning of “fossil fuels” (chiefly coal and oil) drives global warming because CO2 is a “greenhouse gas”, absorbing heat that would otherwise radiate harmlessly out into space. Since the mid-19th century, when the Industrial Revolution set off this promiscuous releasing of CO2, the Earth has been getting hotter at an unprecedented pace.

The trouble with these claims is that actual data demonstrate that global temperature is not determined by the amount of CO2 in the atmosphere.

For example, during the past 500 million years, CO2 levels have often been much higher than now, including times when global temperatures were lower (1):

“The gray bars at the top … correspond to the periods when the global climate was cool; the intervening white space corresponds to the warm modes … no correspondence between pCO2 and climate is evident …. Superficially, this observation would seem to imply that pCO2 does not exert dominant control on Earth’s climate …. A wealth of evidence, however, suggests that pCO2 exerts at least some control …. [but this Figure] … shows that the ‘null hypothesis’ that pCO2 and climate are unrelated cannot be rejected on the basis of this evidence alone.” [To clarify convoluted double negative: All the evidence cited in support of mainstream claims is insufficient to over-rule what the above Figure shows, that CO2 does not determine global temperatures (the “null hypothesis”).]

Again, with temperature levels in quantitative detail (2):

Towards the end of the Precambrian Era, CO2 levels (purple curve) were very much higher than now while temperatures (blue curve) were if anything lower. Over most of the more recent times, CO2 levels have been very much lower while temperatures most of the time were considerably higher.

Moreover, the historical range of temperature fluctuations makes a mockery of contemporary mainstream ambitions to prevent global temperatures rising by as much as 2°C; for most of Earth’s history, temperatures have been about 6°C higher than at present.

Cause precedes effect

The data just cited do not clearly demonstrate whether rising CO2 brings about subsequent rises in temperature — or vice versa. However, ice-core data going back as far as 420,000 years do show which comes first: temperature changes are followed by CO2 changes (3).

On average, CO2 rises lag about 800 years behind temperature rises; and CO2 levels also decline slowly after temperatures have fallen.

Since the Industrial Revolution

Over the last 150 years, global temperatures have risen, and levels of CO2 have risen. This period is minuscule by comparison to the historical data summarized above. Crucially, what has happened in this recent sliver of time cannot be compared directly to the past because the historical data are not fine-grained enough to discern changes over such short periods of time. What is undisputed, however, is that CO2 and temperature have not increased in tandem in this recent era, just as over geological time-spans. From the 1940s until the 1970s, global temperatures were falling, and mainstream experts were telling the mass media that an Ice Age was threatening (4) — at the same time as CO2 levels were continuing their merry rise with fossil fuels being burnt at an ever-increasing rate (5):

“1945 to 1977 cool period with soaring CO2 emissions. Global temperatures began to cool in the mid-1940s at the point when CO2 emissions began to soar … . Global temperatures in the Northern Hemisphere dropped about 0.5°C (0.9°F) from the mid-1940s until 1977 and temperatures globally cooled about 0.2°C (0.4°F) …. Many of the world’s glaciers advanced during this time and recovered a good deal of the ice lost during the 1915–1945 warm period.”

Furthermore (5):

Global cooling from 1999 to 2009. No global warming has occurred above the 1998 level. In 1998, the PDO [Pacific Decadal Oscillation] was in its warm mode. In 1999, the PDO flipped from its warm mode into its cool mode and satellite imagery confirms that the cool mode has become firmly entrenched since then and global cooling has deepened significantly in the past few years.”

In short:
–> Global temperatures have often been high while CO2 levels were low, and vice versa
–> CO2 levels rise or fall after temperatures have risen or fallen
–> CO2 levels have risen steadily, but temperatures decreased between the 1940s and 1970s, and since about 1998 there has been a pause in warming, perhaps even cooling

Quite clearly, CO2 is not the prime driver of global temperature. Data — facts — about temperature and CO2 demonstrate that something else has far outweighed the influence of CO2 levels in determining temperatures throughout Earth’s history, including since the Industrial Revolution. That “something else” can only be natural forces. And indeed there are a number of known natural forces that affect Earth’s temperature, many of which vary cyclically over time. The amount of energy radiated to Earth by the Sun varies in correlation with the 11-year cycle of sun-spots, which is fairly widely known; but there are many other cycles known only to specialists — say, the 9-year Lunisolar Precession cycle — and these natural forces have periodically warmed and cooled the Earth in cycles of glaciation and warmth at intervals of roughly 100,000–120,000 years (the Milankovitch cycles), with a number of other cycles superposed on those (6).
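As a toy numerical illustration of “cycles superposed” — emphatically not a climate model; the two long periods echo the Milankovitch time-scales and the 11-year sunspot cycle mentioned above, while the amplitudes are invented purely for illustration:

```python
# Toy illustration (not a climate model): superposing a few cycles of
# different periods yields irregular-looking warming and cooling spells.
import math

def toy_temperature(year: float) -> float:
    # Hypothetical (period in years, amplitude in degrees C) pairs.
    cycles = [(100_000, 3.0), (41_000, 1.0), (11, 0.1)]
    return sum(amp * math.sin(2 * math.pi * year / period) for period, amp in cycles)

for year in range(0, 200_001, 25_000):
    print(f"year {year:>7}: {toy_temperature(year):+.2f} C relative to baseline")
```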

So the contemporary mainstream view, the so-called “scientific consensus”, is at odds with the evidence, the facts.

That will seem incredible to many people, who might well ask how that could be possible. How could “science” be so wrong?

In brief: because of facts about science that are not much known outside the ranks of historians and philosophers and sociologists of science (7): that the scientific consensus at any given time on any given matter has been wrong quite often over the years and centuries (8); and that science nowadays has become quite different from our traditional view of it (9).

____________________________________

(1)    Daniel H. Rothman, Proceedings of the National Academy of Sciences of the United States of America, 99 (2002) 4167-71, doi: 10.1073/pnas.022055499
(2)    Nahle Nasif, “Cycles of Global Climate Change”, Biology Cabinet Journal Online, #295 (2007); primary sources of data are listed there
(3)    The 800 year lag in CO2 after temperature – graphed; primary sources are cited there
(4)    History rewritten, Global Cooling from 1940 – 1970, an 83% consensus, 285 papers being “erased”; 1970s Global Cooling Scare; 1970s Global Cooling Alarmism
(5)    Don Easterbrook, “Global warming and CO2 during the past century”
(6)    David Dilley, Natural Climate Pulse, January 2012
(7)    For example:
What everyone knows is usually wrong (about science, say)
Scientific literacy in one easy lesson
The culture and the cult of science
(8)    For example:
Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596–602;
Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84-93;
Ernest B. Hook (ed.), Prematurity in Scientific Discovery: On Resistance and Neglect, University of California Press, 2002;
Science: A Danger for Public Policy?!
(9)   For example:
How Science Has Changed — notably since World War II
The Science Bubble
The business of for-profit “science”
From Dawn to Decadence: The Three Ages of Modern Science

Posted in consensus, global warming, resistance to discovery, science is not truth, science policy, scientific culture, the scientific method, unwarranted dogmatism in science | 2 Comments »

The banality of evil — Psychiatry and ADHD

Posted by Henry Bauer on 2017/04/25

“The banality of evil” is a phrase coined by Hannah Arendt when writing about the trial of Adolf Eichmann who had supervised much of the Holocaust. The phrase has been much misinterpreted and misunderstood. Arendt was pointing to the banality of Eichmann, who “had no motives at all” other than “an extraordinary diligence in looking out for his personal advancement”; he “never realized what he was doing … sheer thoughtlessness … [which] can wreak more havoc than all the evil instincts” (1). There was nothing interesting about Eichmann. Applying Wolfgang Pauli’s phrase, Eichmann was “not even wrong”: one can learn nothing from him other than that evil can result from banality, from thoughtlessness. As Edmund Burke put it, “The only thing necessary for the triumph of evil is for good men to do nothing” — and not thinking is a way of doing nothing.

That train of thought becomes quite uncomfortable with the realization that sheer thoughtlessness nowadays pervades so much of the everyday practices of science, medicine, psychiatry. Research simply — thoughtlessly — accepts contemporary theory as true, and pundits, practitioners, teachers, policy makers all accept the results of research without stopping to think about fundamental issues, about whether the pertinent contemporary theories or paradigms make sense.

Psychiatrists, for example, prescribe Ritalin and other stimulants as treatment for ADHD — Attention-Deficit/Hyperactivity Disorder — without stopping to think about whether ADHD is even “a thing” that can be defined and diagnosed unambiguously (or even at all).

The official manual, which one presumes psychiatrists and psychologists consult when assigning diagnoses, is the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, now (since 2013) in its 5th edition (DSM-5). DSM-5 has been quite widely criticized, including by such prominent psychiatrists as Allen Frances, who led the task force for the previous (fourth) edition (2).

Even casual acquaintance with the contents of this supposedly authoritative DSM-5 makes it obvious that criticism is more than called for. In DSM-5, the Diagnostic Criteria for ADHD are set down in five sections, A-E.

A: “A persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development, as characterized by (1) and/or (2):
     1.   Inattention: Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
           Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.     Often fails to give close attention to details or makes careless mistakes in schoolwork, at work, or during other activities (e.g., overlooks or misses details, work is inaccurate)
b.     Often has difficulty sustaining attention in tasks or play activities (e.g., has difficulty remaining focused during lectures, conversations, or lengthy reading).”
and so on through c-i, for a total of nine asserted characteristics of inattention.

Paying even cursory attention to these “criteria” makes plain that they are anything but definitive. Why, for example, are six symptoms required up to age 16 when five are sufficient at 17 years and older? There is nothing clear-cut about “inconsistent with developmental level”, which depends on personal judgment about both the consistency and the level of development. Different people, even different psychiatrists no matter how trained, are likely to judge inconsistently in any given case whether the attention paid (point “a”) is “close” or not. So too with “careless”, “often”, “difficulty”; and so on.

It is if anything even worse with Criteria A(2):

“2.    Hyperactivity and Impulsivity:
Six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that negatively impacts directly on social and academic/occupational activities
       Note: The symptoms are not solely a manifestation of oppositional behavior, defiance, hostility, or failure to understand tasks or instructions. For older adolescents and adults (age 17 and older), at least five symptoms are required.
a.    Often fidgets with or taps hands or feet or squirms in seat.”
and so on through b-i, for again a total of nine supposed characteristics, this time of hyperactivity and impulsivity. There is no need to cite any of those, since “a” amply reveals the absurdity of designating as the symptom of a mental disorder a type of behavior that is perfectly normal for the majority of young boys. This “criterion” makes self-explanatory the reported finding that boys are three times more likely than girls to be diagnosed with ADHD, though experts make heavier weather of it by suggesting that sex hormones may be among the unknown causes of ADHD (3).

A(1) and (2) are followed by
“B. Several inattentive or hyperactivity-impulsivity symptoms were present prior to age 12 years.
C. Several inattentive or hyperactivity-impulsivity symptoms are present in two or more settings (e.g., at home, school, or work; with friends or relatives; in other activities).
D. There is clear evidence that the symptoms interfere with, or reduce the quality of, social, academic, or occupational functioning.
E. The symptoms do not occur exclusively during the course of schizophrenia or another psychotic disorder and are not better explained by another mental disorder (e.g., mood disorder, anxiety disorder, dissociative disorder, personality disorder, substance intoxication or withdrawal).”

It should be plain enough that this set of so-called criteria is not based on any definitive empirical data, as a simple thought experiment shows: What clinical (or any other sort of) trial could establish by observation that six symptoms are diagnostic up to age 17 whereas five can be decisive from that age on? What if the decisive symptoms were apparent for only five months rather than six, or for five-and-three-quarters months? How remarkable, too, that “inattention” and “hyperactivity and impulsivity” are both characterized by exactly nine possible symptoms.

Leaving aside the deplorable thoughtlessness of the substantive content of DSM-5, it is also saddening that something published by an authoritative medical society should reflect such carelessness or thoughtlessness in presentation. Competent copy-editing would have helped, for example by eliminating the many instances of “and/or”: “this ungraceful phrase … has no right to intrude in ordinary prose” (4) since just “or” would do nicely; if, for instance, I tell you that I’ll be happy with A or with B, obviously I’ll be perfectly happy also if I get both.
Good writing and proper syntax are not mere niceties; their absence indicates a lack of clear substantive thought about what is being written, as Richard Mitchell (“The Underground Grammarian”) liked to illustrate by quoting Ben Jonson: “Neither can his Mind be thought to be in Tune, whose words do jarre; nor his reason in frame, whose sentence is preposterous”.

At any rate, ADHD is obviously an invented condition that has no clearly measurable characteristics. Assigning that diagnosis to any given individual is an entirely subjective, personal judgment. That this has been done for some large number of individuals strikes me as an illustration of the banality of evil. Countless parents have been told that their children have a mental illness when they are behaving just as children naturally do. Countless children have been fed mind-altering drugs as a consequence of such a diagnosis. Some number have been sent to special schools like Eagle Hill, where annual tuition and fees can add up to $80,000 or more.

Websites present as information claims that are patently unfounded or wrong, for example:

“Researchers still don’t know the exact cause, but they do know that genes, differences in brain development and some outside factors like prenatal exposure to smoking might play a role. … Researchers looking into the role of genetics in ADHD say it can run in families. If your biological child has ADHD, there’s a one in four chance you have ADHD too, whether it’s been diagnosed or not. … Some external factors affecting brain development have also been linked to ADHD. Prenatal exposure to smoke may increase your child’s risk of developing ADHD. Exposure to high levels of lead as a toddler and preschooler is another possible contributor. … . It’s a brain-based biological condition”.

Those who establish such websites simply follow thoughtlessly, banally, what the professional literature says; and some number of academics strive assiduously to ensure the persistence of this misguided parent-scaring and children-harming. For example, by claiming that certain portions of the brains of ADHD individuals are characteristically smaller:

“Subcortical brain volume differences in participants with attention deficit hyperactivity disorder in children and adults: a cross-sectional mega-analysis” by Martine Hoogman et al., published in Lancet Psychiatry (2017, vol. 4, pp. 310–19). The “et al.” stands for 81 co-authors, 11 of whom declared conflicts of interest with pharmaceutical companies. The conclusions are stated dogmatically: “The data from our highly powered analysis confirm that patients with ADHD do have altered brains and therefore that ADHD is a disorder of the brain. This message is clear for clinicians to convey to parents and patients, which can help to reduce the stigma that ADHD is just a label for difficult children and caused by incompetent parenting. We hope this work will contribute to a better understanding of ADHD in the general public”.

An extensive detailed critique of this article has been submitted to the journal as a basis for retracting it: “Lancet Psychiatry Needs to Retract the ADHD-Enigma Study” by Michael Corrigan & Robert Whitaker. The critique points to a large number of failings in methodology, including that the data were accumulated from a variety of other studies with no evidence that diagnoses of ADHD were consistent or that controls were properly chosen or available — which ought in itself to have been sufficient reason to refuse publication.

Perhaps worst of all: Nowhere in the article is IQ mentioned; yet the Supplementary Material contains a table revealing that the “ADHD” subjects had on average higher IQ scores than the “normal” controls. “Now the usual assumption is that ADHD children, suffering from a ‘brain disorder,’ are less able to concentrate and focus in school, and thus are cognitively impaired in some way. …. But if the mean IQ score of the ADHD cohort is higher than the mean score for the controls, doesn’t this basic assumption need to be reassessed? If the participants with ADHD have smaller brains that are riddled with ‘altered structures,’ then how come they are just as smart as, or even smarter than, the participants in the control group?”

[The Hoogman et al. article in many places refers to “(appendix)” for details, but the article — which costs $31.50 — does not include an appendix; one must get it separately from the author or the journal.]

As usual, the popular media simply parroted the study’s claims, as illustrated by the headlines cited in the critique.

And so the thoughtless acceptance by the media of anything published in an established, peer-reviewed journal contributes to making this particular evil a banality. The public, including parents of children, are further confirmed in the misguided, unproven notion that something is wrong with the brains of children who have been given a diagnosis that is no more than a highly subjective opinion.

The deficiencies of this article also illustrate why those of us who have published in peer-reviewed journals know how absurd it is to regard “peer review” as any sort of guarantee of quality, or even of minimal standards of competence and honesty. As Richard Horton, himself editor of The Lancet, has noted, “Peer review … is simply a way to collect opinions from experts in the field. Peer review tells us about the acceptability, not the credibility, of a new finding” (5).

The critique of the Hoogman article is just one of the valuable pieces at the Mad in America website. I also recommend highly Robert Whitaker’s books, Anatomy of an Epidemic and Mad in America.


(1) Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Viking Press, 1964 (rev. & enlarged ed.). Quotes are at p. 134 of the PDF available at https://platypus1917.org/wp-content/uploads/2014/01/arendt_eichmanninjerusalem.pdf
(2) Henry H. Bauer, “The Troubles With Psychiatry — essay review of Saving Normal by Allen Frances and The Book of Woe by Gary Greenberg”, Journal of Scientific Exploration, 29 (2015) 124–30
(3) Donald W. Pfaff, Man and Woman: An Inside Story, Oxford University Press, 2010, p. 147
(4) Modern American Usage (edited & completed by Jacques Barzun et al. from the work of Wilson Follett), Hill & Wang, 1966
(5) Richard Horton, Health Wars: On the Global Front Lines of Modern Medicine, New York Review Books, 2003, p. 306


Posted in conflicts of interest, consensus, media flaws, medical practices, peer review, prescription drugs, science is not truth, unwarranted dogmatism in science

Predatory publishers and fake rankings of journals

Posted by Henry Bauer on 2017/04/06

I get invitations to submit articles to a variety of journals that I have never heard of and whose supposed field of interest may bear little relation to my intellectual biography. The invitations come about as often as the notifications of winning lottery tickets or other windfalls.

More than a decade ago, librarian Jeffrey Beall began to compile lists of the “publishers” that put out these “journals” to profit from the fees authors pay to get published, exploiting the widespread pressure on academics to “publish or perish”. Beall no longer updates the lists and the website is no longer operative, but a late version is available courtesy of the Wayback Machine.

One recent invitation came to me from the International Organization of Scientific Research (IOSR), which is on Beall’s list and offers some 22 “journals”. I tried sampling them and was not able to access articles from all of them, but did get to the IOSR Journal of Applied Physics and savored the article “Of void (vacuum) energy and quantum field: – a abstraction-subtraction model” [sic] by “Dr K N Prasanna Kumar || Prof B S Kiranagi || Prof C S Bagewadi”. At 63 pages, this is quite an article, featuring such insights as “Vacuum energy arises naturally in quantum mechanics due to the uncertainty principle”:
Abstract: A system of quantum field dissipating void and a parallel system of quantum field and void system that contribute to the dissipation of the velocity of void is investigated. It is shown that the time independence of the contributions portrays another system by itself and constitutes the equilibrium solution of the original time independent system. Methodology reinforced with the explanations, we write the governing equations with the nomenclature for the systems in the foregoing. Further papers extensively draw inferences upon such concatenation process, ipsofacto. Significantly consummation and consolidation of this model with that of the Grand Unified Theory is the one that results in the Quantum field giving rise to the basic forces which is purported to have been combined at the high temperatures at the Big Bang Vacuum energy is reported to be the reason for the consummation of the four forces at the scintillatingly high temperature.

It may be obvious why this reminds me of the hoax that Alan Sokal perpetrated on the journal Social Text.

The invitation from IOSR included a wrinkle I had not come across before: it flaunted the journal’s high ranking by something called the African Quality Center for Journals (e-mail, 5 April 2017, from JPBS.journal JOURNALS <jpbs.journal@mail4iosr.org>).

I was tickled by the concept of the African Quality Center for Journals and Googled for it, half expecting that it did not exist. But it does, though its self-description did fit my suspicions:

The academic community has long been demanding more transparency, choice and accuracy in journal assessment. Currently, the majority of academic output is evaluated based on a single ranking of journal impact. African Quality centre [sic] for Journals (AQCJ) perform this job as precisely as possible.

Impact Factor is a measure reflecting the average number of citations to articles published in journals, books, patent document, thesis, project reports, newspapers, conference/ seminar proceedings, documents published in internet, notes and any other approved documents. It is measure the relative importance of a journal within its field, with journals of higher journal impact factors deemed to be more important than those with lower ones.

Evaluation Methodology
AQCJ consider following parameters for calculation Impact factor (AQCJ)
• Citation : The impact factor for a journal is calculated based on a three-year period, and can be considered to be the average number of times published papers are cited up to two years after publication.
• Originality : AQCJ checks random selects published article’s originality and quality. Only citation is not perfect way of Impact factor calculation.
• Time publication : Periodicity of publication should be uniform. If it is not uniform, the quality of particular publication cannot impressible.
• Geographical coverage : Only particular small area based publication cannot get good marks as it is not covering all around world research.
• Editorial Quality : Editor Board of particular Journal gives the direction to any Journal. So it must be good and considerable for evaluation.
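
For contrast with that word salad: the conventional two-year impact factor, the measure AQCJ is loosely imitating, is a simple and well-defined ratio. Here is a minimal sketch, with invented numbers for illustration:

```python
def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """Conventional two-year journal impact factor for year Y:
    citations received in year Y by items the journal published in
    years Y-1 and Y-2, divided by the number of citable items the
    journal published in years Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# Invented example: 300 citations received in 2016 by articles published
# in 2014-15, out of 150 citable items published in those two years.
print(impact_factor(300, 150))  # 2.0
```

None of AQCJ’s additional “parameters” enters the standard calculation, which is one more reason to doubt that its “Impact factor (AQCJ)” measures anything at all.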

The AQCJ site also offers a list of “Top 20 Publishers” (not reproduced here).

At a cursory glance, noting Nature, Cambridge, Springer, Wiley, this might briefly pass muster as plausible, until one looks more closely and decides to check up on, say, “Barker Deane Publishing” in Australia, which is ranked above Taylor & Francis; Barker Deane Publishing specializes in “Self publishing and publishing for health, spirituality, positive living, new age”.

The website of the African Quality Center for Journals did not offer me a way to discover its possible connection to IOSR, but I suspect quite strongly that there is one: the fractured syntax on the two websites is strikingly similar.


Posted in fraud in science, media flaws