Skepticism about science and medicine

In search of disinterested science


21st century science: Group-Thinking Elites and Fanatical Groupies

Posted by Henry Bauer on 2018/08/11

Science has been a reliable resource for official policies and actions for much of the era of modern science, which is usually regarded as having begun around the 17th century.

It is almost without precedent that a mistaken scientific consensus should lead to undesirable and damaging public actions, yet that is now the case in two instances: the belief that carbon dioxide generated by the burning of fossil fuels is primarily responsible for global warming and climate change; and the belief that HIV is the cause of AIDS.

Both those beliefs gained hegemony during the last two or three decades. That these beliefs are mistaken seems incredible to most people, in part because of the lack of any well-known precedent and in part because the nature of science is widely misunderstood; in particular, it is not yet widely recognized how much science has changed since the middle of the 20th century.

The circumstances of modern science that conspire to make it possible for mistaken theories to bring misguided public policies have been described in my recent book, Science Is Not What You Think [1]. The salient points are these:

•  Science has become dysfunctionally large

•  It is hyper-competitive

•  It is not effectively self-correcting

•  It is at the mercy of multiple external interests and influences.

A similar analysis was offered by Judson in The Great Betrayal [2]. That title reflects the book’s opening theme of the prevalence of fraud in modern science (as well as in contemporary culture). It assigns blame to the huge expansion in the number of scientists and to the crisis the world of science faces as it finds itself in something of a steady state so far as resources are concerned, after some three centuries of largely unfettered expansion: about 80% of all the scientists who have ever lived are alive today; US federal expenditure on R&D increased 4-fold (inflation-adjusted!) over the half-century up to 2002, and US industry increased its R&D spending by a factor of 26 over the same period! Judson also notes the seminal work of John Ziman explicating the significance of the change from continual expansion to what Ziman called a dynamic steady state [3].

Remarkably enough, President Eisenhower had foreseen this possibility and warned against it in his farewell address to the nation: “in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite”. The proponents of human-caused-climate-change theory and of HIV/AIDS theory are examples of such elites.

A crucial factor is that elites, like all other groups, may be dysfunctionally affected by the phenomenon of Groupthink.

Janis [4] showed in detail several decades ago how that phenomenon of Groupthink had produced disastrously bad policy actions by the United States. The same phenomenon of Groupthink can cause bad things to happen in social sectors other than government. Recently, Booker [5] has shown how Groupthink has been responsible for making it a worldwide belief, a shibboleth, a cliché, that humankind’s use of fossil fuels is causing global warming and climate change through the release of carbon dioxide.

Commonly held ideas about science do not envisage the possibility that a scientific consensus could bring misguided policies and actions on a global scale. What most people know — think they know — about science is that its conclusions are based on solid evidence, that the scientific method safeguards against getting things wrong, and that science has been primarily responsible for civilization’s advances over the last few centuries.

Those things that most people know are also largely mistaken [1, 6]. Science is a human activity, subject to all the frailties and fallibilities of any human activity. The scientific method, as popularly described, does not accurately portray how science is actually done.

While much of the intellectual progress in understanding how the world works does indeed stand to the credit of science, what remains to be commonly realized is that since about the middle of the 20th century, science has become too big for its own good. The huge expansion of scientific activity since the Second World War has changed science in crucial ways. The number of people engaged in scientific activity has far outstripped the available resources, leading to hyper-competition and associated sloppiness and outright dishonesty. Scientists nowadays are in no way exceptional individuals; people doing scientific work are as common as teachers, doctors, or engineers. It is in this environment that Groupthink has become significantly and damagingly important.

Booker [5] described this in relation to the hysteria over the use of fossil fuels. A comparable situation concerns the belief that HIV is the cause of AIDS [7]. The overall similarities in these two cases are that a quite small number of researchers arrived initially at more or less tentative conclusions; but those conclusions seemed of such great import to society at large that they were immediately seized upon and broadcast by the media as breaking news. Political actors became involved, accepting those conclusions quickly became politically correct, and those who then questioned and now question the conclusions are vigorously opposed, often maligned as unscientific and motivated by non-scientific agendas.

 

At any rate, contemporary science has become a group activity rather than an activity of independent intellectual entrepreneurs, and it is in this environment that Groupthink affects the elites in any given field — the acknowledged leading researchers whose influence is entrenched by editors and administrators and other bureaucrats inside and outside the scientific community.

A concomitant phenomenon is that of fanatical groupies. Concerning both human-caused climate change and the theory that HIV causes AIDS, there are quite large social groups that have taken up the cause with fanatical vigor and that attack quite unscrupulously anyone who differs from the conventional wisdom. These groupies are chiefly people with little or no scientific background, or whose scientific ambitions are unrequited (which includes students). As with activist groups in general, groupie organizations are often supported by (and indeed often founded by) commercial or political interests. Non-profit organizations which purportedly represent patients and other concerned citizens and which campaign for funds to fight against cancer, multiple sclerosis, etc., are usually funded by Big Pharma, as are HIV/AIDS activist groups.

__________________________________

[1]  Henry H. Bauer, Science Is Not What You Think — how it has changed, why we can’t trust it, how it can be fixed, McFarland 2017

[2] Horace Freeland Judson, The Great Betrayal, Harcourt 2004

[3]  John Ziman, Prometheus Bound, Cambridge University Press 1994

[4]  I. L. Janis, Victims of Groupthink, 1972; Groupthink, 1982, Houghton Mifflin.

[5]  Christopher Booker, GLOBAL WARMING: A case study in groupthink, Global Warming Policy Foundation, Report 28; Human-caused global warming as Groupthink

[6]  Henry H. Bauer, Scientific Literacy and Myth of the Scientific Method, University of Illinois Press 1992

[7]  Henry H. Bauer, The Origin, Persistence and Failings of HIV/AIDS Theory, McFarland 2007



Who guards the guardians? Who guards science?

Posted by Henry Bauer on 2018/06/24

Quis custodiet ipsos custodes? (“Who will guard the guardians themselves?”) This question, attributed to Juvenal, describes the inescapable dilemma of how societies can be governed.

Today’s guardian of reliable knowledge is science. It is the acknowledged authority on the natural world, on what exists in the world and on how those things behave. Most governments accept as reliable, as true for all practical purposes, whatever the current scientific consensus is: on matters of health, the environment, the solar system, the universe. The mass media, too, accept that scientific consensus; and that largely determines what the general public believes, “what everyone knows”.

Nowadays in that category of “what everyone knows” there are literally innumerable things; among them that the universe began with a Big Bang; that ghosts and Loch Ness Monsters do not exist; that HIV causes AIDS; that hypertension causes heart attacks and strokes; that carbon dioxide released by burning fossil fuels is causing climate change and bringing more frequent and more extreme and more damaging events like hurricanes; etc., etc.

But what guards against the scientific consensus being wrong?

Nothing and nobody.

That really matters, because the history of science is crystal clear that contemporary science, the contemporary scientific consensus, has almost invariably been wrong until further progress superseded and replaced it.

That steady improvement over the centuries gave rise to a comforting shibboleth, that “science is self-correcting”. At any given moment, however, the scientific consensus stands possibly uncorrected and awaiting future “self”-correction. One cannot justifiably assert, therefore, that any contemporary scientific consensus is known to be unquestionably true. It is not known with absolute certainty that the universe began with a Big Bang; that ghosts and Loch Ness Monsters do not exist; that HIV causes AIDS; that hypertension causes heart attacks and strokes; that carbon dioxide released by burning fossil fuels is causing climate change and bringing more frequent and more extreme and more damaging events like hurricanes; etc., etc.

Nevertheless, contemporary society treats these and other contemporary scientific consensuses as true. This amounts to what President Eisenhower warned against: that “public policy could itself become the captive of a scientific-technological elite” [1]. Science can indeed mislead public policy, as when tens of thousands of Americans were forcibly sterilized in the misguided belief that this improved the genetic stock [2]. Science is far from automatically or immediately self-correcting [3].

I’ve wondered how Eisenhower could have been so prescient in 1961, because the conditions that conduce to public policies being misled by science were then just beginning to become prominent: the massive governmental stimulation of scientific activity that has produced today’s dysfunctional hyper-competitiveness, with far too many would-be researchers competing for far too few reliably permanent positions and far too little support for the resources that modern research needs [4]. Moreover, the scientific consensus is guarded not only by the scientists who generated it but also by powerful societal institutions vested in its correctness [4]: it is virtually inconceivable, for instance, that official bodies like the National Institutes of Health, the Food and Drug Administration, the Centers for Disease Control & Prevention, the World Health Organization, and the like would admit to error in the views they have promulgated; try to imagine, for example, how it could ever be officially admitted that HIV does not cause AIDS [5].

SUGGESTION TO THE READER:
Reflect on how you formed an opinion about — Big-Bang theory? Loch Ness Monsters? Ghosts? Climate change? … etc. etc. Almost always it will not have been by looking into the evidence but rather by trusting someone’s assertion.

Who has the interest, time, and energy to study all those things? Obviously we must take our beliefs on many matters from trusted authorities; and for a couple of centuries the scientific consensus has been a better guide than most others. But that is no longer the case. The circumstances of 21st-century science mean that society needs guardians to check that what the scientific consensus recommends for public policy corresponds to the best available evidence. On many issues, a minority of experts differs from the scientific consensus, and it would be valuable to have something like a Science Court to assess the arguments and evidence pro and con [6].

I’ve had the luxury of being able to look into quite a few topics because that was appropriate to the second phase of my academic career, in Science & Technology Studies (STS). Through having made a specialty of studying unorthodoxy in science, I stumbled on copious examples, in recent times, of the scientific consensus treating competent minority opinions well within the scientific community with the same disdain, or even worse, as that traditionally directed towards would-be science, fringe science — Loch Ness Monsters, ghosts, UFOs, and the like.

In Dogmatism in Science and Medicine [7], I pointed to the evidence that the contemporary scientific consensus is wrong about Big-Bang theory, global warming and climate change, HIV/AIDS, extinction of the dinosaurs, and more, including what modern medicine says about prescription drugs. The failings of the scientific consensus in modern medicine have been detailed recently by Richard Harris [8] as well as in many works of the last several decades [9]. That the scientific consensus is wrong about HIV and AIDS is documented more fully in The Origin, Persistence and Failings of HIV/AIDS Theory (McFarland, 2007). Why science has become less believable is discussed in [4], which also describes many misconceptions about science and about statistics, the latter bearing a large part of the blame for what’s wrong with today’s medical practices.

But my favorite obsession over where the scientific consensus is wrong remains the existence of Loch Ness “Monsters”, Nessies. It was my continuing curiosity about this that led to my career change from chemistry to STS, which brought many unforeseeable and beneficial side-effects. My 1986 book, The Enigma of Loch Ness: Making Sense of a Mystery [10], showed how the then-available evidence could be interpreted to support belief in the reality of Nessies but could also be plausibly enlisted to reject the reality of Nessies. However, the book’s chief purpose was to explain why seeking to “discover” Nessies was not a sensible task for organized science.

Now in 2018 quite proper science, in the guise of “environmental DNA”, has offered a good chance that my belief in the reality of Loch Ness “Monsters” may be vindicated within a year or so by mainstream science. I plan to say more about that soon.

—————————————————————–

[1]  Farewell Address to the Nation, 17 January 1961
[2]  “Bauer: Could science mislead public policy?”
[3]  Science is NOT self-correcting (How science has changed — VII)
[4]  Science Is Not What You Think — how it has changed, why we can’t trust it, how it can be fixed (McFarland, 2017)
[5]   “OFFICIAL!   HIV does not cause AIDS!”
[6]  For a detailed history and analysis of the concept of a Science Court, see chapter 12 in [4]
[7]    Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (McFarland, 2012)
[8]    Richard Harris, Rigor Mortis — How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions (Basic Books, 2017)
[9]    What’s Wrong with Present-Day Medicine, a bibliography last updated 17 April 2017
[10]  The Enigma of Loch Ness: Making Sense of a Mystery, University of Illinois Press, 1986;
in Cassette Book format, RC 25592, narrated by Richard Dorf, 1988;
U.K. edition, Stirling (Scotland): Johnston & Bacon 1991;
re-issued by Wipf & Stock, 2012


How science has changed — VI. The influences of groups

Posted by Henry Bauer on 2018/04/26

Popular stereotypes of scientists picture them as strikingly individual, whether admirably so (Galileo, Darwin, Einstein) or the opposite (Dr. Frankenstein and other mad or evil scientists [1]). That is one of the most significant ways in which the folklore about science differs from today’s reality: Science nowadays is by and large a group activity, and that has many far-reaching corollaries. This is not to deny that scientists see themselves as individuals and act as individuals, but they are also influenced to varying degrees by group memberships and associated loyalties, and that can interfere with truth-seeking.

Membership in groups, and the associated loyalties, is a common human experience. First comes the family group; then there is the extended family or clan, and perhaps subgroups of the clan. Other groups cut across different lines, defined by religion, by ethnicity, by nationality; and also, very much pertinent to the circumstances of science, there are groups associated with the way in which we earn a living; we are influenced by our memberships in professional guilds or trade unions.

Under some circumstances it becomes necessary to set priorities with respect to loyalty to the various groups to which we belong. In most circumstances the highest priority is on loyalty to the family, though some individuals have placed a higher priority on religion or some other ideology. Among professional researchers, the most important thing is the current research project and the associated paradigm and scientific consensus: going along with this group is the way to further a career, whereas dissenting from it can blight a career.

The group affiliations of scientists are among the most significant aspects of scientific activity, and they have changed fundamentally in recent times, since about WWII.
In the earlier stages of modern science, what we by hindsight describe as scientists were individuals who, for a variety of reasons, were interested in learning to understand the way the natural world works. One of the most crucial foundations of modern science came when groups of such inquiring minds got together, at first informally but soon formally; the Royal Society of London is generally cited as iconic. Those people came together explicitly and solely to share and discuss their findings and their interpretations. At that stage, scientists belonged effectively to just one science-related group, concerned with seeking true understanding of the workings of the world. Since this was a voluntary activity engaged in by amateurs, in other words by people who were not deriving a living or profit from this activity, these early pre-scientists were not much hindered from practicing loyalty simply to truth-seeking; it did not conflict with or interfere with their loyalties to their families or to their religion or to their other social groups.

As the numbers of proto-scientists grew, their associations were influenced by geography and therefore by nationality, so there came occasions when loyalty to truth-seeking was interfered with by questions of who should get credit for particular advances and discoveries. Even in retrospect, British and French sources may differ over whether the calculations for the discovery of Neptune should be credited most to the Englishman John Couch Adams or the Frenchman Urbain Le Verrier — and German sources might assert that the first physical observation of the planet was made by Johann Gottfried Galle; again, British and German sources may still differ by hindsight over whether Isaac Newton or Gottfried Wilhelm Leibniz invented the calculus.

Still, for the first two or three centuries of modern science, the explicit ideal or ethos of science was the unfettered pursuit of genuine truth about how the world works. Then, in the 1930s in Nazi Germany and decades later in the Soviet Union, authoritarian regimes insisted that science had to bend to ideology. In Nazi Germany, scientists had to abstain from relativity and other so-called “Jewish” science; in the Soviet Union, chemists had to abstain from the rest of the world’s theories about chemical combination, and biologists had to abstain from what biologists everywhere else knew about evolution. In democratic societies, a few individual scientists were disloyal to their own nations in sharing secrets with scientists in unfriendly other nations, sometimes giving as reason or excuse their overarching loyalty to science, which should not be subject to national boundaries.

By and large, then, up to about the time of WWII, scientific activity was not unlike how Merton had described it [2], which remains the view most people seem to have of it today: scientists as truth-seeking individuals, smarter and more knowledgeable than ordinary people, dedicated to science and unaffected by crass self-interest or by conflicts of interest.

That view does not describe today’s reality, as pointed out in earlier posts in this series [2, 3]. The present essay discusses the consequences of the fact that scientists are anything but isolated individuals freely pursuing truth; rather, they are ordinary human beings subject to the pressures of belonging to a variety of groups. Under those conditions, the search for truth can be hindered and distorted.

Chemists (say) admittedly do work individually toward a particular goal, but that goal is not freely self-selected: either it is set by an employer or by a source of funding that considered the proposed work and decided to support it. Quite often, chemists nowadays work in teams, with different individuals focusing on specific aspects of some overall project. They are aware of, and accommodate in various ways, other chemists who happen to be working toward the same or similar goals, be it in the same institution or elsewhere; and they also share some group interests with other chemists in their own institution who may be working on other projects. Chemists everywhere share group interests through national and international organizations and publications. Beyond that, chemists share with biologists, biochemists, physicists, and others the group interest of being scientists, having a professional as well as personal interest in the overall prestige and status of science as a whole in the wider society. At the same time, each discipline regards itself as just a bit “better” than the others: chemists call theirs “the central science” because it builds on physics and biologists need it; physicists have long known that their discipline is the most fundamental, “the queen of the sciences”, without which there could be no chemistry or any other science; biologists know that their field matters much more to human societies than the physical sciences, since it is the basis of understanding living things and is indispensable for effective medicine; and so on.

So scientists differ among themselves in a number of ways. All feel loyalty to science by comparison to other human endeavors, but especial loyalty to their own discipline; and within that to their particular specialty — among chemists, to analytical or inorganic or organic or physical chemistry; and within each of those to experimental approaches or to theoretical ones. Ultimately, all researchers are obsessed with and loyal to the very specific work they are engaged in every day, and that may be intensely specialized.

For example, researchers working to perfect computer models to mimic global temperatures and climate do just that; they do not have time to work themselves at estimating past temperatures by, for instance, doing isotope analyses of sea-shells. Since such ultra-specialization is necessary, researchers need to rely on and trust those who are working in related areas. So those who are computer-modeling climate take on trust what they are told by geologists about historical temperature and climate changes, and what the meteorologists can tell them about relatively recent weather and climate, and what physicists tell them about heat exchange and the absorption of heat by different materials, and so on.

With all that, despite the fact that research is done within highly organized and even bureaucratic environments, there is actually no overarching authority to monitor and assess what is happening in science, let alone to ensure that things are being done appropriately. In particular, there is no mechanism for deciding that any given research project may have gone off the rails in the sense of drawing unwarranted conclusions or ignoring significant evidence. There is no mechanism to ensure that proper consideration is being given to the views of all competent and informed scientists working on a particular topic.

A consequence is that on quite a range of matters, the so-called scientific consensus, the view accepted as valid by society’s conventional wisdom and by the policy makers, may actually be at odds with inescapable evidence. That circumstance has been documented for example as to the Big-Bang theory in cosmology, the mechanism of smell, the cause of Alzheimer’s disease, the cause of the extinction of the dinosaurs, and more [4].

Of course, the scientific consensus was very often wrong on particular matters throughout the era of modern science. Moreover, the scientific consensus defends itself quite vigorously against the mavericks who point out its errors [5], until eventually the contrary evidence becomes so overwhelming that the old views simply have to give way, in what Thomas Kuhn [6] described as a scientific revolution.

Defense of the consensus illustrates how strong the group influence is on the leading voices in the scientific community; indeed, it has been described as Groupthink [7]. The success of careers, the gaining of eminence and leadership roles hinge on being right, in other words being in line with the contemporary consensus; thus admitting to error can be tantamount to loss of prestige and status and destruction of a career. That is why Max Planck, in the early years of the 20th century, observed that “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it” [8]; a paraphrase popular among those of us who question an established view is that “Science progresses funeral by funeral”.

At the same time as the history of science teaches that any contemporary scientific consensus is quite fallible and may well be wrong, it also records that — up to quite recent times — science has been able to correct itself, albeit it could take quite a long time: several decades in the case of Mendel’s laws of heredity, or Wegener’s continental drift, or the causes of mad-cow disease or of gastritis and stomach ulcers.
Unfortunately it seems as though science’s self-correction does not always come in time to forestall society’s policy-makers from making decisions that spell tangible harm to individuals and to societies as a whole, illustrating what President Eisenhower warned against, that “public policy could itself become the captive of a scientific-technological elite” [9].

More about that in future blog posts.

======================================

[1]   Roslynn D. Haynes, From Faust to Strangelove: Representations of the Scientist in Western Literature, Johns Hopkins University Press, 1994; David J. Skal, Screams of Reason: Mad Science and Modern Culture, W. W. Norton, 1998
[2]    How science has changed— II. Standards of Truth and of Behavior
[3]    How science has changed: Who are the scientists?
How science changed — III. DNA: disinterest loses, competition wins
How science changed — IV. Cutthroat competition and outright fraud
[4]  Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, McFarland, 2012
[5]    Bernard Barber, “Resistance by scientists to scientific discovery”, Science, 134 (1961) 596–602
[6]  Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, 1970 (2nd ed., enlarged; 1st ed. 1962)
[7]    I. L. Janis, Victims of Groupthink, 1972; Groupthink, 1982, Houghton Mifflin
[8]    Max Planck, Scientific Autobiography and Other Papers (1949); translated from German by Frank Gaynor, Greenwood Press, 1968
[9]    Dwight D. Eisenhower, Farewell speech, 17 January 1961


How science changed — V. And changed academe

Posted by Henry Bauer on 2018/04/19

After WWII, lavish support for science made it a cash cow that academe used to change itself; a change abetted by the corruption of collegiate sport.

*               *               *              *                 *                *              *               *

Science began as an informal cottage industry; nowadays it is a highly organized bureaucratic behemoth that is pervasively intertwined with other sectors of human society.

Science began as a disinterested quest to understand how the world works; practical applications were an incidental though welcome byproduct. Nowadays, society values science for its byproducts more than for the truths it reveals about Nature.

Teaching institutions, colleges and universities, were founded to educate (albeit sometimes indoctrinate) future generations. Nowadays much of academe has become a self-serving enterprise in which institutions seek status and prestige from what used to be incidental byproducts: research in academe now has an immediate eye out for patents and potential commercial applications, and intercollegiate sports, once played for local enjoyment, have become mass entertainment generating lucrative revenue. A research university will have many dozens of administrators engaged in managing grant-related matters, intellectual-property matters, compliance with regulations, the status of research staff, and so on. Almost every university also has many dozens of administrative staff managing its intercollegiate sports programs, as well as coaches (whose salaries often exceed that of the university president) and assistant coaches (whose salaries are comparable to or exceed those of full professors).

*                  *                 *                *                *                *                *                *

Scientific activity changed from a cottage industry quite slowly at first, and in fits and starts. Already in the 19th century science had been important in the commercial dye-stuff industry. During the First World War, the German war effort was supported by the chemist Fritz Haber’s discovery of how to synthesize ammonia from atmospheric nitrogen, the raw material for both fertilizers and explosives. During the 1930s, medical practice began to have genuinely curative capabilities with the discovery of bacteria-killing sulfonamides. But, by and large, up to the Second World War scientific activity remained something of a cottage industry, and basic scientific research was largely an academic ivory-tower activity.

World War II demonstrated the powerful capabilities of applied scientific understanding: not only the war-ending atomic bombs but, earlier, the sonar that was such an invaluable weapon against submarines and the radar that was invaluable to Great Britain in staving off the German Blitzkrieg bombers, as well as all sorts of developments and improvements in weaponry and in techniques of communication and navigation.

Vannevar Bush had been director of the U.S. Office of Scientific Research and Development, seeing at first hand what science could accomplish. Shortly after the end of World War II he presented the president of the United States with a report, Science, the Endless Frontier, which suggested that scientific research and development could be as valuable to peacetime society as science had proved to be in warfare.

Bush’s initiative is generally credited for the subsequent enormous, unprecedented resources directed into the expansion of scientific activity. The federal support of science came in part as grants to support research activity in the form of specific proposed projects, but also in large part through scholarships and fellowships to stimulate more students to go into science as a career.

That influx of funds led to truly far-reaching changes in academe.

Traditionally, the role of universities was to provide tertiary education, preparing people for the professions. A small proportion of academe comprised so-called “research universities” where the faculty were as much concerned with extending the boundaries of scholarship and of science as they were with the education and training of students; yet the research and scholarship were designed to serve the aim of educating students to become independent professionals. However, the emphasis on scientific research and on training more scientists led eventually to the contemporary circumstances where the primary aim is determined by the demands of the research project rather than by whether the work is best suited for the students to learn how to do independent research. Graduate students came to be seen as cheap technical help rather than as apprentices to be nurtured; science faculty among themselves could be heard referring to the graduate students they were mentoring as “pairs of hands”. In earlier days, prospective graduate students in the sciences would choose their mentors to fit with the students’ specific research interests; nowadays graduate students in the sciences sign on to mentors who have the research grants to support them and they work as cogs in the mentor’s long-term research program [1].

The overt aim of supporting and enhancing science had the corollary effect, no doubt unforeseen and unintended, of making science more prestigious than other intellectual fields within colleges and universities. In time, that tempted some of those other fields to distort themselves in trying to mimic science and thereby gain comparable status and prestige. And not only intellectual prestige: science (and engineering and medical) faculty had higher salaries than faculty in the humanities and the social sciences, and moreover scientists could augment their academic "9-month" salaries with an extra 20–30% from their research grants as summer-time stipends.

In the humanities, for example — philosophy, history, to some degree psychology — scholarship traditionally focused on critical analysis of traditional classical insights gained by earlier scholars, with comparatively little expectation that entirely novel, ground-breaking insights could be attained. Scholars in the humanities would occasionally publish critiques and analyses and perhaps eventually scholarly monographs. By contrast, in the sciences the emphasis was on novelty, on going beyond what was already known. As other parts of academe developed the ambition to be as well-supported fiscally and thereby as highly regarded as the sciences, they also came to emphasize originality and publication. Graduate students working towards doctoral degrees in history or psychology or sociology are nowadays supposed to generate stuff that deserves publication, often as a monograph. The sciences have become an inappropriate role model for other intellectual disciplines.

The pots of gold available for science-related activities also tempted whole institutions, four-year colleges and teachers’ colleges in particular, to seek prestige and status by transforming themselves into “research” universities. By hiring scientists, grants could be obtained whose amounts were calculated not only to cover the actual costs of the research but also “overhead” costs to reimburse the whole institution for the use of its infrastructure pertinent to the research (“indirect costs” became a popular euphemism for “overhead”). Those indirect costs could be as high as a 50% surcharge on the actual costs of research, and that provided a pool of money that upper-level administrators could draw on for all sorts of things. In the 1940s, the United States had 107 doctorate-granting research universities; by 1950–54 there were 142, by 1960–64 there were 208, and by 1970–74 the number had grown to 307 [2]; since then the rate of growth has been much less, with a count of 334 in 2016 [3].
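The arithmetic of the overhead surcharge is simple; a minimal sketch follows (the 50% rate and the dollar figures are illustrative only, and real indirect-cost rates are negotiated per institution and applied to specified cost categories):

```python
def total_award(direct_costs: float, indirect_rate: float) -> float:
    """Total grant award: the direct research costs plus the
    institution's "overhead" (indirect-cost) surcharge on them."""
    return direct_costs * (1 + indirect_rate)

# At a 50% rate, a project needing $200,000 of actual research money
# yields a $300,000 award, leaving the institution a $100,000 pool.
award = total_award(200_000, 0.50)   # 300000.0
overhead = award - 200_000           # 100000.0
print(award, overhead)
```

That overhead pool, multiplied across dozens or hundreds of funded projects, is what made grant-active scientists so attractive to administrators.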


The influx of science-related money may have stimulated academe to change in inappropriate and undesirable ways, but science cannot be held responsible for all of today’s ills of academe. Like science, like sports, like so much else, academe has been corrupted by the love of money. One of the most serious consequences is the progressive elimination of tenure-track faculty, replaced by teachers on fixed-term contracts. Academic freedom cannot exist in the absence of tenure, and genuine freedom of thought, expression, and criticism cannot exist in the absence of academic freedom.

Perhaps the most fundamental problem is that both academe and science should be venues for unfettered truth-seeking. But truth-seeking is inevitably subversive, and it is never supported for its own sake by the powers that be. The corruption and distortion of science and academe make it easier for non-truths to spread, which is dangerous for the long-term health of society.

=========================================

[1]    Now-graduated Jorge Cham has described life as a graduate student by means of comic strips: see Sara Coelho, “Piled Higher and Deeper: The everyday life of a grad student”, Science, 323 (2009) 1668–9.
[2]    A Century of Doctorates: Data Analyses of Growth and Change, National Academies Press, 1978.
[3]    According to the Carnegie Classification of Institutions of Higher Education

***************************************************************************

Categories: funding research, science policy, scientific culture
Tags: science changed academe, corruption of academe


How science changed — IV. Cutthroat competition and outright fraud

Posted by Henry Bauer on 2018/04/15

The discovery of the structure of DNA was a metaphorical “canary in the coal mine”, warning of the intensely competitive environment that was coming to scientific activity. The episode illustrates in microcosm the seismic shift in the circumstances of scientific activity that started around the middle of the 20th century [1], the replacement of one set of unwritten rules by another set [2].
The structure itself was discovered by Watson and Crick in 1953, but it was only in 1968, with the publication of Watson’s personal recollections, that attention was focused on how Watson’s approach and behavior marked a break from the traditional unwritten rules of scientific activity.
It took even longer for science writers and journalists to realize just how cutthroat the competition had become in scientific and medical research. Starting around 1980 there appeared a spate of books describing fierce fights for priority on a variety of specific topics:
Ø    The role of the brain in the release of hormones; Guillemin vs. Schally — Nicholas Wade, The Nobel Duel: Two Scientists’ 21-year Race to Win the World’s Most Coveted Research Prize, Anchor Press/Doubleday, 1981.
Ø    The nature and significance of a peculiar star-like object — David H. Clark, The Quest for SS433, Viking, 1985.
Ø    “‘Mentor chains’, characterized by camaraderie and envy, for example in neuroscience and neuropharmacology” — Robert Kanigel, Apprentice to Genius: The Making of a Scientific Dynasty, Macmillan, 1986.
Ø    High-energy particle physics, atom-smashers — Gary Taubes, Nobel Dreams: Power, Deceit, and the Ultimate Experiment, Random House, 1986.
Ø    “Soul-searching, petty rivalries, ridiculous mistakes, false results as rivals compete to understand oncogenes” — Natalie Angier, Natural Obsessions: The Search for the Oncogene, Houghton Mifflin, 1987.
Ø    “The brutal intellectual darwinism that dominates the high-stakes world of molecular genetics research” — Stephen S. Hall, Invisible Frontiers: The Race to Synthesize a Human Gene, Atlantic Monthly Press, 1987.
Ø    “How the biases and preconceptions of paleoanthropologists shaped their work” — Roger Lewin, Bones of Contention: Controversies in the Search for Human Origins, Simon & Schuster, 1987.
Ø    “The quirks of . . . brilliant . . . geniuses working at the extremes of thought” — Ed Regis, Who Got Einstein’s Office?: Eccentricity and Genius at the Institute for Advanced Study, Addison-Wesley, 1987.
Ø    High-energy particle physics — Sheldon Glashow with Ben Bova, Interactions: A Journey Through the Mind of a Particle Physicist and the Matter of the World, Warner, 1988.
Ø    Discovery of endorphins — Jeff Goldberg, Anatomy of a Scientific Discovery, Bantam, 1988.
Ø    “Intense competition . . . to discover superconductors that work at practical temperatures” — Robert M. Hazen, The Breakthrough: The Race for the Superconductor, Summit, 1988.
Ø    Science is done by human beings — David L. Hull, Science as a Process, University of Chicago Press, 1988.
Ø    Competition to get there first — Charles E. Levinthal, Messengers of Paradise: Opiates and the Brain, Anchor/Doubleday 1988.
Ø    “Political machinations, grantsmanship, competitiveness” — Solomon H. Snyder, Brainstorming: The Science and Politics of Opiate Research, Harvard University Press, 1989.
Ø    Commercial ambitions in biotechnology — Robert Teitelman, Gene Dreams: Wall Street, Academia, and the Rise of Biotechnology, Basic Books, 1989.
Ø    Superconductivity, intense competition — Bruce Schechter, The Path of No Resistance: The Story of the Revolution in Superconductivity, Touchstone (Simon & Schuster), 1990.
Ø    Sociological drivers behind scientific progress, and a failed hypothesis — David M. Raup, The Nemesis Affair: A Story of the Death of Dinosaurs and the Ways of Science, Norton 1999.

These titles illustrate that observers found intense competitiveness wherever they looked in science, though mostly in medical or biological science, with physics (including astronomy) the next most frequently mentioned field of research.
Watson’s memoir had not only featured competition most prominently, it had also revealed that older notions of ethical behavior no longer applied: Watson was determined to get access to competitors’ results even if those competitors were not yet ready to reveal everything to him [3]. It was not only competitiveness that increased steadily over the years; so too did the willingness to engage in behavior that not so long before had been regarded as improper.
Amid the spate of books about how competitive research had become, there also appeared Betrayers of the Truth: Fraud and Deceit in the Halls of Science by science journalists William Broad and Nicholas Wade (Simon & Schuster, 1982). This book argued that dishonesty has always been present in science, citing in an appendix 33 “known or suspected” cases of scientific fraud from 1981 back to the 2nd century BC. Those few data could not support the book’s sweeping generalizations [4], but Broad and Wade were very early in drawing attention to the fact that dishonesty in science was a significant problem. What they failed to appreciate was why: not that there had always been a notable frequency of fraud in science, but that scientific activity was changing in ways that were making it a different kind of thing than it had been during the halcyon centuries of modern science, from the 17th century to the middle of the 20th.
Research misconduct had featured in Congressional Hearings as early as 1981. Soon the Department of Health and Human Services established an Office of Scientific Integrity, now the Office of Research Integrity. Its mission is to instruct research institutions about preventing fraud and dealing with allegations of it. Scientific periodicals began to ask authors to disclose conflicts of interest, and co-authors to state specifically what portions of the work were their individual responsibility.
Centers for research ethics and medical ethics have proliferated in academe [5], and there are now periodicals entirely devoted to such matters [6]. Courses in research ethics have become increasingly common; institutions that receive research funds from federal agencies are even required to make such courses available.
In 1989, the Committee on the Conduct of Science of the National Academy of Sciences issued the booklet On Being a Scientist, which describes proper behavior; its 3rd edition, subtitled A Guide to Responsible Conduct in Research, makes even clearer that the problem of scientific misconduct is now widely seen as serious.
Another indication that dishonesty has increased is the now quite frequent retraction of published research reports: Retraction Watch estimates that 500–600 published articles are retracted annually. John Ioannidis has made a specialty of reviewing the research literature for consistency and reliability, reporting bluntly on “Why most published research findings are false” [7]. Nature maintains an archive devoted to this phenomenon [8].
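Ioannidis’s argument [7] reduces to a positive-predictive-value calculation. The sketch below follows the paper’s notation (R = prior odds that a probed relationship is true, α = the significance threshold, β = the type II error rate); the example numbers here are chosen purely for illustration:

```python
def ppv(R: float, alpha: float = 0.05, beta: float = 0.20) -> float:
    """Positive predictive value: the probability that a claimed
    ("statistically significant") finding is actually true.
    PPV = (1 - beta) * R / (R - beta * R + alpha)."""
    return (1 - beta) * R / (R - beta * R + alpha)

# In a long-shot field (1 true relationship per 10 probed) with
# conventional power, a claimed finding is more likely true than not:
print(round(ppv(0.1, 0.05, 0.20), 2))   # 0.62
# But with low power (beta = 0.8), most claimed findings are false:
print(round(ppv(0.1, 0.05, 0.80), 2))   # 0.29
```

Bias and multiple competing teams, which Ioannidis also models, push the predictive value lower still.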

Researchers half a century ago would have been aghast and disbelieving at all this, that science could have become so untrustworthy. It has happened because science changed from an amateur avocation to a career that can bring fame and wealth [9]; and scientific activity changed from a cottage industry to a highly bureaucratic corporate industry, with pervasive institutional as well as individual conflicts of interest; and researchers’ demands for support have far exceeded the available supply.

And as science changed, it drew academe along with it. More about that later.

===============================================

[1]    How science changed — III. DNA: disinterest loses, competition wins
[2]    How science has changed— II. Standards of Truth and of Behavior
[3]    The individuals Watson mentioned as getting him access corrected his recollections: they shared with him nothing that was confidential. The significant point remains that Watson had no such scruples.
[4]    See my review, “Betrayers of the truth: a fraudulent and deceitful title from the journalists of science”, 4S Review, 1 (#3, Fall) 17–23.
[5]   There is an Online Ethics Center for Engineering and Science. Physical Centers have been established at: University of California, San Diego (Center for Ethics in Science and Technology); University of Delaware (Center for Science, Ethics and Public Policy); Michigan State University (Center for Ethics and Humanities in the Life Sciences); University of Notre Dame (John J. Reilly Center for Science, Technology, and Values).
[6]    Accountability in Research (founded 1989); Science and Engineering Ethics (1997); Ethics and Information Technology (1999); BMC Medical Ethics (2000); Ethics in Science and Environmental Politics (2001).
[7]    John P. A. Ioannidis, “Why Most Published Research Findings Are False”, PLoS Medicine, 2 (2005): e124. 
[8]    “Challenges in irreproducible research”
[9]    How science has changed: Who are the scientists?

Posted in conflicts of interest, fraud in medicine, fraud in science, funding research, media flaws, science is not truth, scientific culture, scientists are human

How science has changed — II. Standards of Truth and of Behavior

Posted by Henry Bauer on 2018/04/08

The scientific knowledge inherited from ancient Babylon and Greece and from medieval Islam was gained by individuals or by groups isolated from one another in time as well as geography. Perhaps the most consequential feature of the “modern” science that we date from the 17th-century Scientific Revolution is the global interaction of the people who are doing science, and especially the continuity over time of their collective endeavors.
These interactions among scientists began in quite informal and individual ways. An important step was the formation of academies and societies, among which the Royal Society of London is usually acknowledged to be the earliest (founded 1660) that has remained active up to the present time — though it was not the earliest such institution and even the claim of “longest continually active” has been challenged [1].
Even nowadays, the global community of scientists remains in many ways informal despite the host of scientific organizations and institutions, national and international: the global scientific community is not governed by any formal structure that lays down how science should be done and how scientists should behave.
However, observing the actualities of scientific activity indicates that there had evolved some agreed-on standards generally seen within the community of scientists as proper behavior. Around the time of the Second World War, sociologist Robert Merton described those informal standards, and they came to be known as the “Mertonian Norms” of science [2]. They comprise:

Ø    Communality or communalism (Merton had said “communism”): Science is an activity of the whole scientific community and it is a public good — findings are shared freely and openly.
Ø    Universalism: Knowledge about the natural world is universally valid and applicable. There are no separations or distinctions by nationality, religion, race, or anything of that sort.
Ø    Disinterestedness: Science is done for the public good and not for personal benefit; scientists seek to be impartial, objective, unbiased, and not self-serving.
Ø    Skepticism: Claims and reported findings are subject to critical appraisal and testing throughout the scientific community before they can be accepted as proper scientific knowledge.

Note that honesty is not mentioned; it was simply taken for granted.
These norms clearly make sense for a cottage industry, as ideal behavior for individuals to aim at; but they are not appropriate for a corporate environment, and they cannot guide the behavior of individuals who are parts of some hierarchical enterprise.
In the late 1990s, John Ziman [3] discussed the change in scientific activity as it had morphed from the activities of an informal, voluntary collection of individuals seeking to understand how the world works to a highly organized activity with assigned levels of responsibility and authority and where sources of research funding have a say in what gets done, and which often expect to get something useful in return for their investments, something profitable.
The early cottage industry of science had been essentially self-supporting. Much could be done without expensive equipment. People studied what was conveniently at hand, so there was little need for funds to support travel. Interested patrons and local benefactors could provide the small resources needed for occasional meetings and the publication of findings.
Up to about the middle of the 20th century, universities were able to provide the funds needed for basic research in chemistry and biology and physics. The first sign that exceptional resources could be needed had come around 1930, when Ernest Lawrence constructed the first large “atom-smashing machine”; but that and the need for expensive astronomical telescopes remained outliers in the requirements for the support of scientific research overall.
From about the time of the Second World War, however, research going beyond what had already been accomplished began to require ever more expensive and specialized equipment as well as considerable infrastructure: technicians to support the equipment, glass-blowers and secretaries and book-keepers and librarians, and managers of such ancillary staff; so researchers increasingly came to need support beyond that available from individual patrons or universities. Academic research came to rely increasingly on getting grants for specific research projects from public agencies or from wealthy private foundations.
Although those sources of research funds typically claim that they want to support simply “the best science”, their view of what the best science is does not necessarily jibe with the judgments of the individual researchers [4].
At the same time as research in universities was calling on outside sources of funding, an increasing number of industries were setting up their own laboratories for research specifically toward creating and improving their products and services. Such product-specific “R&D” (research and development) sometimes turned up novel basic knowledge, or revealed the need for such fundamentally new understanding. One consequence has been that some really striking scientific advances have come from such famous industrial laboratories as Bell Telephone Laboratories or the Research Laboratory of General Electric. Researchers employed in industry have received a considerable number of Nobel Prizes, often jointly with academics [5].
Under these new circumstances, as Ziman [3] pointed out, the traditional distinction between “applied” research and “pure” or “basic” research lost its meaning.
Ziman rephrased the Mertonian norms as the nice acronym CUDOS, adding the “O” for originality, quite appropriately since within the scientific community credit was and is given for the most innovative, original contributions; CUDOS, or preferably “kudos”, being the Greek term for acclaim of exceptional accomplishment. By contrast, for the norms that obtain in a corporate scientific enterprise, be it government or private, Ziman proposed the acronym PLACE: researchers nowadays get their rewards not by adhering to the Mertonian norms but by producing Proprietary findings whose significance may be purely Local rather than universal, the subject of research having been chosen under the Authority of an employer or patron and not by the individual researcher, who is Commissioned to do the work as an Expert employee.

Ziman too did not mention honesty; like Merton he simply took it for granted.
Ziman had made an outstanding career in solid-state physics before, in his middle years, he began to publish, starting in 1968 [6], highly insightful works about how science functions, in particular what makes it reliable. In the late 1960s it had still been reasonable to take honesty in science for granted; but by the time Ziman published Prometheus Bound it no longer could be, and Ziman had failed to notice some of what was happening in scientific activity. Competition for resources and for career advancement had increased to a quite disturbing extent, presumably the impetus for the increasing frequency with which scientists were found to have cheated in some way. Even published, supposedly peer-reviewed research often failed later attempts at confirmation, and all too often it was revealed as simply false, faked [7].
More about that in a following blog post.

==========================================

[1]    “The Royal Societies [sic] claim to be the oldest is based on the fact that they developed out of a group that started meeting in Gresham College in 1645 but unlike the Leopoldina this group was informal and even ceased to meet for two years between 1658 and 1660” — according to The Renaissance Mathematicus, “It wasn’t the first but…”
[2]    Robert K. Merton, “The normative structure of science” (1942); most readily accessible as pp. 267–78 in The Sociology of Science (ed. N. Storer, University of Chicago Press, 1973), a collection of Merton’s work
[3]    John Ziman, Prometheus Bound: Science in a Dynamic Steady State, Cambridge University Press, 1994
[4]    Richard Muller, awarded a prize by the National Science Foundation, pointed out that truly innovative studies are unlikely to be funded and need to be carried out more or less surreptitiously; and Charles Townes, who developed masers and lasers, testified to his difficulty in getting research support for that ground-breaking work, or even encouragement from some of his distinguished older colleagues:
Richard A. Muller, “Innovation and scientific funding”, Science, 209 (1980) 880–3;
Charles Townes, How the Laser Happened: Adventures of a Scientist, Oxford University Press, 1999
[5]    Karina Cummings, “Nobel Science Prizes in industry”;
Nobel Laureates and Research Affiliations
[6]    John Ziman, Public Knowledge (1968); followed by The Force of Knowledge (1976); Reliable Knowledge (1978); An Introduction to Science Studies (1984); Prometheus Bound (1994); Real Science (2000); all published by Cambridge University Press
[7]    John P. A. Ioannidis, “Why most published research findings are false”, PLoS Medicine, 2 (2005) e124;
Daniele Fanelli, “How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data”, PLoS ONE, 4 (#5, 2009): e5738

Posted in conflicts of interest, fraud in medicine, fraud in science, funding research, peer review, resistance to discovery, science is not truth, scientific culture, scientists are human

How science has changed: Who are the scientists?

Posted by Henry Bauer on 2018/04/07

Scientists are people who do science. Nowadays scientists are people who work at science as a full-time occupation and who earn their living at it.
Science means studying and learning about the natural world, and human beings have been doing that since time immemorial; indeed, in a sense all animals do that, but humans have developed efficient means to transmit gained knowledge to later generations.
At any rate, there was science long before [1] there were scientists, full-time professional students of Nature. Our present-day store of scientific knowledge includes things that have been known for thousands of years. For example, from more than 6,000 years ago in Mesopotamia (Babylon, Sumer) we still use base-60 mathematics for the number of degrees in the arcs of a circle (360), the number of seconds in a minute, and the number of minutes in an hour. We still cry “Eureka” (“I have found it!”) for a new discovery, as supposedly Archimedes did more than 2000 years ago when he recognized that immersing an object in water is an easy way to measure its volume (by the rise in the water level) and that a floating object’s weight equals the weight of the water it displaces. The Islamic science of the Middle Ages has left its mark in language with, for instance, algebra or alchemy.
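The Babylonian legacy shows up whenever a decimal quantity of degrees or hours is split into minutes and seconds; a small illustrative sketch (the function name is mine):

```python
def to_base60(value: float) -> tuple[int, int, float]:
    """Split a decimal quantity (degrees or hours) into whole units,
    minutes (1/60) and seconds (1/3600) -- the Mesopotamian legacy."""
    units = int(value)
    rem = (value - units) * 60
    minutes = int(rem)
    seconds = (rem - minutes) * 60
    return units, minutes, seconds

u, m, s = to_base60(12.5125)
print(u, m, round(s))   # 12 30 45
```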
Despite those early pieces of science that are still with us today, most of what the conventional wisdom thinks it knows about science is based on what historians call “modern” science, which is generally agreed to have emerged around the 17th century in what is usually called The Scientific Revolution.
The most widely known bits of science are surely the most significant advances. Those are typically associated with the names of people who either originated them or made them popular [2]; so many school-children hear about Archimedes and perhaps Euclid and Ptolemy; and for modern science, even non-science college students are likely to hear of Galileo and Newton and Darwin and Einstein. Chemistry students will certainly hear about Lavoisier and Priestley and Wöhler and Haber; and so on, just as most of us have learned about general history in terms of the names of important individuals. So far as science is concerned, most people are likely to gain the general impression that it has been done and is being done by a relatively small number of outstanding individuals, geniuses in fact. That impression could only be entrenched by the common thought-bite that “science” overthrew “religion” sometime in the 19th century, leading to the contemporary role of science as society’s ultimate arbiter of true knowledge.
The way in which scientists in modern times have been featured in books and in films also gives the impression that scientists are somehow special, that they are by no means ordinary people. Roslynn Haynes [3] identified several stereotypes of scientists, for example “adventurer” or “the noble scientist as hero or savior of society”, with most stereotypes however being less than favorable — “mad, bad, dangerous scientist, unscrupulous in the exercise of power”. But no matter whether good or bad in terms of morals or ethics, society’s stereotype of “scientist” is “far from an ordinary person”.
That is accurate enough for the founders of modern science, but it became progressively less true as more and more people came to take part in some sort of scientific activity. Real change began in the early decades of the 19th century, when the term “scientist” seems to have been used for the first time [4].
By the end of the 19th century it had become possible to earn a living as a scientist, through teaching or through doing research that led to commercially useful results (as in the dye-stuff industry), or through doing both in what are nowadays called research universities. By the early 20th century, scientists no longer deserved to be seen as outstanding individual geniuses, but they were still a comparatively elite group of people with quite special talents and interests. Nowadays, however, there is nothing distinctly elite about being a scientist. In terms of numbers (in the USA, as of ~2001), scientists at roughly 2.7 million are comparable to engineers at 2.1 million; they are less elite than lawyers (~1 million) or doctors (~800,000); and teachers, at ~3.5 million, are almost as elite as scientists.
Nevertheless, so far as the general public and the conventional wisdom are concerned, there is still an aura of being special and distinctly elite associated with science and being a scientist, no doubt because science is so widely acknowledged as the ultimate authority on what is true about the workings of the natural world; and because “scientist” brings to most minds someone like Darwin or Einstein or Galileo or Newton.
So the popular image of scientists is wildly wrong about today’s world. Scientists today are unexceptional white-collar workers. Certainly a few of them could still be properly described as geniuses, just as a few engineers or doctors could be — or those at the high tail-end of any distribution of human talent; but by and large, there is nothing exceptional about scientists nowadays. That is an enormous change from times past, and the conventional wisdom has not begun to be aware of that change.
One aspect of that change is that the first scientists were amateurs seeking to satisfy their curiosity about how the world works, whereas nowadays scientists are technicians or technical experts who do what they are told to do by employers or enabled to do by patrons. A very consequential corollary is that the early scientists had nothing to gain by being untruthful, whereas nowadays the rewards potentially available to prominent scientists have tempted a significant number to practice varying degrees of dishonesty.
Another way of viewing the change that science and scientists have undergone is that science used to be a cottage industry largely self-supported by independent entrepreneurial workers, whereas nowadays science is a corporate behemoth whose workers are apparatchiks, cogs in bureaucratic machinery; and in that environment, individual scientists are subject to conflicts of interest and a variety of pressures owing to their membership in a variety of groups.

Science today is not a straightforward seeking of truth about how the world works; and claims emerging from the scientific community are not necessarily made honestly; and even when made honestly, they are not necessarily true. More about those things in future posts.

=======================================

[1]    For intriguing tidbits about pre-scientific developments, see “Timeline Outline View”
[2]    In reality, most discoveries hinge on quite a lot of work and learning that prefigured them and made them possible, as discussed for instance by Tony Rothman in Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). That what matters most is not the act of discovery but the making widely known is the insight embodied in Stigler’s Law, that discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II: 39 [1980] 147–58)
[3]    Roslynn D. Haynes, From Faust to Strangelove: Representations of the Scientist in Western Literature, Johns Hopkins University Press, 1994; also “Literature has shaped the public perception of science”, The Scientist, 12 June 1989, pp. 9, 11
[4]    William Whewell is usually credited with coining the term “scientist” in the early 1830s

Posted in conflicts of interest, fraud in science, funding research, media flaws, peer review, science is not truth, scientific culture, scientists are human | Tagged: , , | 4 Comments »

Dangerous knowledge IV: The vicious cycle of wrong knowledge

Posted by Henry Bauer on 2018/02/03

Peter Duesberg, universally admired scientist, cancer researcher, and leading virologist, member of the National Academy of Sciences, recipient of a seven-year Outstanding Investigator Grant from the National Institutes of Health, was astounded when the world turned against him because he pointed to the clear fact that HIV had never been proven to cause AIDS and to the strong evidence that, indeed, no retrovirus could behave in the postulated manner.

Frederick Seitz, at one time President of the National Academy of Sciences and for some time President of Rockefeller University, became similarly non grata for pointing out that parts of an official report contradicted one another about whether human activities had been proven to be the prime cause of global warming (“A major deception on global warming”, Wall Street Journal, 12 June 1996).

A group of eminent astronomers and astrophysicists (among them Halton Arp, Hermann Bondi, Amitabha Ghosh, Thomas Gold, Jayant Narlikar) had their letter pointing to flaws in Big-Bang theory rejected by Nature.

These distinguished scientists illustrate (among many other instances involving less prominent scientists) that the scientific establishment routinely refuses to acknowledge evidence that contradicts contemporary theory, even evidence proffered by previously lauded fellow members of the elite establishment.

Society’s dangerous wrong knowledge about science includes the mistaken belief that science hews earnestly to evidence and that peer review — the behavior of scientists — includes considering new evidence as it comes in.

Not so. Refusal to consider disconfirming facts has been documented on a host of topics less prominent than AIDS or global warming: prescription drugs, Alzheimer’s disease, extinction of the dinosaurs, mechanism of smell, human settlement of the Americas, the provenance of Earth’s oil deposits, the nature of ball lightning, the evidence for cold nuclear fusion, the dangers from second-hand tobacco smoke, continental-drift theory, risks from adjuvants and preservatives in vaccines, and many more topics; see for instance Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth, Jefferson (NC): McFarland 2012. And of course society’s officialdom, the conventional wisdom, the mass media, all take their cue from the scientific establishment.

The virtually universal dismissal of contradictory evidence stems from the nature of contemporary science and its role in society as the supreme arbiter of knowledge, and from the fact of widespread ignorance about the history of science, as discussed in earlier posts in this series (Dangerous knowledge; Dangerous knowledge II: Wrong knowledge about the history of science; Dangerous knowledge III: Wrong knowledge about science).

The upshot is a vicious cycle. Ignorance of history makes it seem incredible that “science” would ignore evidence, so claims to that effect on any given topic are brushed aside — because it is not known that science has routinely ignored contrary evidence. But that routine can be recognized only by noting the accumulation of individual topics on which evidence has been ignored, and each such topic is dismissed in isolation. That is the vicious cycle.

Wrong knowledge about science and the history of science impedes recognizing that evidence is being ignored in any given actual case. Radical progress is thereby greatly hindered nowadays, and public policies are misled by flawed interpretations enshrined as the scientific consensus. Society has succumbed to what President Eisenhower warned against (Farewell speech, 17 January 1961):

in holding scientific research and discovery in respect, as we should,
we must also be alert to the equal and opposite danger
that public policy could itself become the captive
of a scientific-technological elite.

The vigorous defending of established theories and the refusal to consider contradictory evidence mean that once theories have been widely enough accepted, they soon become knowledge monopolies, and the funding of research establishes the contemporary theory as a research cartel (“Science in the 21st Century: Knowledge Monopolies and Research Cartels”).

The presently dysfunctional circumstances have been recognized only by two quite small groups of people:

  1. Observers and critics (historians, philosophers, sociologists of science, scholars of Science & Technology Studies)
  2. Researchers whose own experiences and interests happened to cause them to come across facts that disprove generally accepted ideas — for example Duesberg, Seitz, the astronomers cited above, etc. But these researchers only recognize the unwarranted dismissal of evidence in their own specialty, not that it is a general phenomenon (see my talk, “HIV/AIDS blunder is far from unique in the annals of science and medicine” at the 2009 Oakland Conference of Rethinking AIDS; mov file can be downloaded at http://ra2009.org/program.html, but streaming from there does not work).

Such dissenting researchers find themselves progressively excluded from mainstream discourse, and that exclusion makes it increasingly unlikely that their arguments and documentation will gain attention. Moreover, frustrated by a lack of attention from mainstream entities, dissenters from a scientific consensus find themselves listened to and appreciated increasingly only by people outside the mainstream scientific community to whom the conventional wisdom also pays no attention, for instance the parapsychologists, ufologists, and cryptozoologists. Such associations, and the conventional wisdom’s consequent assigning of guilt by association, then entrench further the vicious cycle of dangerous knowledge that rests on the acceptance of contemporary scientific consensuses as not to be questioned — see chapter 2 in Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth and “Good Company and Bad Company”, pp. 118-9 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017).

Posted in conflicts of interest, consensus, denialism, funding research, global warming, media flaws, peer review, resistance to discovery, science is not truth, science policy, scientific culture, scientism, scientists are human, unwarranted dogmatism in science | Tagged: , | 2 Comments »

Science is broken

Posted by Henry Bauer on 2017/11/21

Science is broken: Perverse incentives and the misuse of quantitative metrics have undermined the integrity of scientific research is the full title of an article published in the on-line journal AEON. I learned of it through a friend who was interested in part because the authors are at the university from which I retired some 17 years ago.

The article focuses on the demands on researchers to get grants and publish, and that their achievements are assessed quantitatively rather than qualitatively, through computerized scoring of such things as Journal Impact Factor and numbers of citations of an individual’s work.

I agree that those things are factors in what has gone wrong, but there are others as well.

The AEON piece is an abbreviated version of the full article in Environmental Engineering Science (34 [2017] 51-61; DOI: 10.1089/ees.2016.0223). I found it intriguing that the literature cited in it overlaps very little with the literature with which I’ve been familiar. That illustrates how over-specialized academe has become, and with that the intellectual life of society as a whole. There is no longer a “natural philosophy” that strives to integrate knowledge across the board, from all fields and specializations; and there are not the polymath public intellectuals who could guide society through the jungle of ultra-specialization. So it is possible, as in this case of “science is broken”, for different folk to reach essentially the same conclusion by extrapolating from quite different sets of sources and quite independently of one another.

I would add more factors, or perhaps context, to what Edwards and Roy emphasized:

The character of research activity has changed out of sight since the era of “modern science” began; for example, the number of wannabe “research universities” in the USA has tripled or quadrupled since WWII — see “Three stages of modern science”; “The science bubble”; chapter 1 in Science Is Not What You Think [McFarland 2017].

This historical context shows how the perverse incentives noted by Edwards and Roy came about. Honesty and integrity, dedication to truth-seeking above all, were notable aspects of scientific activity when research was something of an ivory-tower avocation; nowadays research is so integrated with government and industry that researchers face much the same difficulties as professionals who seek to practice honesty and integrity while working in the political realm or the financial realm: the system makes conflicts of interest, institutional as well as personal, inevitable. John Ziman (Prometheus Bound, Cambridge University Press) pointed out how the norms of scientific practice nowadays differ from those traditionally associated with science “in the good old days” (the “Mertonian” norms of communality, universality, disinterestedness, skepticism).

My special interest has long been in the role of unorthodoxies and minority views in the development of science. The mainstream, the scientific consensus, has always resisted drastic change (Barber, “Resistance by scientists to scientific discovery”, Science, 134 [1961] 596–602), but nowadays that resistance can amount to suppression; see “Science in the 21st century” and Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth [McFarland, 2012]. Radical dissent from mainstream views is nowadays expressed openly almost only by long-tenured full professors or by retired people.

I’m in sympathy with the suggestions at the end of the formal Edwards and Roy paper, but I doubt that even those could really fix things, since the problem is so thoroughly systemic. Many institutions and people are vested in the status quo. Thus PhD programs will not change in the desired direction so long as the mentoring faculty are under pressure to produce more publications and grants, which leads to treating graduate students as cheap hired hands pushing the mentor’s research program instead of designing PhD research to be optimal for neophytes learning to do independent research. University leaders show the same obsession with institutional prestige and status and rankings, and they seek those not by excelling in “higher education” but by winning at football and basketball and by getting and spending lots of grant money on “research”. How to change that obsession with numbers: dollars for research, games won in sports?

That attitude is not unique to science or to academe. In society as a whole there has been increasing pressure to find “objective” criteria to avoid the biases inevitably inherent in human judgments. Society judges academe by numbers — of students, of research expenditures, of patents, of magnitude of endowment, etc. — and we compare nations by GDP rather than by the level of satisfaction among their citizens. In schools we create “objective” and preferably quantifiable criteria like “standards of learning” (SOLs) that supersede the judgments of the teachers who are in actual contact with actual students. Edwards and Roy cite Goodhart’s Law, which states that “when a measure becomes a target, it ceases to be a good measure”; the law was new to me, and it encapsulates nicely much of what has gone wrong. For instance, in less competitive times the award of a research grant tended to attest the quality of the applicant’s work; but as everything increased in size, and the amount of grant money brought in became the criterion of quality of applicant and institution, the aim of research became getting more grants rather than doing the most promising work — work that, if successful, would bring real progress as well as further funding. SOLs induced teachers to cheat by sharing answers with their students before giving the test. And so on and on. The cart before the horse. The letter of every law becomes the basis for action instead of the human judgment that could put into practice the spirit of the law.

Posted in conflicts of interest, consensus, fraud in science, funding research, politics and science, resistance to discovery, science is not truth, scientific culture | Tagged: , | Leave a Comment »

Has all academic publishing become predatory? Or just useless? Or just vanity publishing?

Posted by Henry Bauer on 2017/06/14

A pingback to my post “Predatory publishers and fake rankings of journals” led me to “Where to publish and not to publish in bioethics – the 2017 list”.

That essay brings home just how pervasive for-profit publishing of purportedly scholarly material has become. The sheer volume of the supposedly scholarly literature is such as to raise the question: who looks at any part of this literature?

One of the essay’s links leads to a listing by the Kennedy Center for Ethics of 44 journals in the field of bioethics. Another link leads to a list of the “Top 100 Bioethics Journals in the World, 2015” by the author of the earlier “Top 50 Bioethics Journals and Top 250 Most Cited Bioethics Articles Published 2011-2015”.

What, I wonder, does any given bioethicist actually read? How many of these journals have even their Table of Contents scanned by most bioethicists?

Beyond that: Surely the potential value of scholarly work in bioethics is to improve the ethical practices of individuals and institutions in the real world. How does this spate of published material contribute to that potential value?

Those questions are purely rhetorical, of course. I suggest that the overwhelming mass of this stuff has no influence whatever on actual practices by doctors, researchers, clinics and other institutions.

This literature does, however, support the existence of a body of bioethicists whose careers are tied in some way to the publication of articles about bioethics.

The same sort of thing applies nowadays in every field of scholarship and science. The essay’s link to Key Journals in The Philosopher’s Index brings up a 79-page list, 10 items per page, of key [!] journals in philosophy.

This profusion of scholarly journals not only supports communities of publishing scholars in each field; it also nurtures an expanding community of meta-scholars whose publications deal with the profusion of publication itself. The earliest work in this genre was the Science Citation Index, which capitalized on information technology to compile indexes through which all researchers could discover which of their published work had been cited and where.

That was unquestionably useful, including by making it possible to discover people working in one’s own specialty. But misuse became abuse, as administrators and bureaucrats began simply to count how often an individual’s work had been cited and to equate that number with quality.

No matter how often it has been pointed out that this equation is so wrong as to be beyond rescuing, the attraction of supposedly objective numbers and the ease of obtaining them have made citation-counting an apparently permanent feature of scholarly evaluation.

Not only that. The practice has been extended to judging the influence a journal has by counting how often the articles in it have been cited, yielding a “journal impact factor” that, again, is typically conflated with quality, no matter how often or how learnedly the meta-scholars point out the fallacies in that equation — for example different citing practices in different fields, different editorial practices that sometimes limit number of permitted citations, the frequent citation of work that had been thought important but that turned out to be wrong.
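Since the fallacy turns on how the metric is computed, a minimal sketch may help. The two-year journal impact factor is mechanically just an average of citation counts, which is one reason conflating it with quality fails; all journal figures below are hypothetical, invented purely to illustrate the skew problem:

```python
# Hypothetical illustration: the two-year journal impact factor is an
# average of citation counts, so a single heavily cited paper can make a
# journal look equivalent to one with uniformly solid papers.

def impact_factor(citations_to_recent_items, citable_items):
    """Citations received this year to items published in the previous
    two years, divided by the number of citable items from those years."""
    return citations_to_recent_items / citable_items

# Journal A: 100 citable items; one "hit" paper cited 200 times, the
# other 99 items cited once each.
journal_a = impact_factor(200 + 99 * 1, 100)
# Journal B: 100 citable items, each cited 3 times.
journal_b = impact_factor(100 * 3, 100)

print(journal_a)  # 2.99
print(journal_b)  # 3.0
```

By this single number the two journals are indistinguishable, even though their citation profiles differ radically — exactly the kind of conflation the meta-scholars keep pointing out, to no apparent effect.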

The scholarly literature had become absurdly voluminous even before the advent of on-line publishing. Meta-scholars had already learned several decades ago that most published articles are never cited by anyone other than the original author(s): see for instance J. R. Cole & S. Cole, Social Stratification in Science (University of Chicago Press, 1973); Henry W. Menard, Science: Growth and Change (Harvard University Press, 1971); Derek de Solla Price, Little Science, Big Science … And Beyond (Columbia University Press, 1986).

Derek Price (Science Since Babylon, Yale University Press, 1975) had also pointed out that the growth of science at an exponential rate since the 17th century had to cease in the latter half of the 20th century since science was by then consuming several percent of the GDP of developed countries. And indeed there has been cessation of growth in research funds; but the advent of the internet has made it possible for publication to continue to grow exponentially.
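Price’s argument is straightforward arithmetic that anyone can reproduce. The sketch below assumes round numbers: a doubling time of roughly 15 years is the figure usually associated with Price’s growth curves, and the 2% starting share of GDP is illustrative, not a quoted statistic:

```python
# Illustrative arithmetic: anything that keeps doubling while the economy
# funding it grows far more slowly must stop growing within decades.

def years_to_exceed(gdp_share, ceiling=1.0, doubling_time=15):
    """Years until an exponentially doubling share of GDP exceeds the
    ceiling (1.0 = all of GDP)."""
    years = 0
    while gdp_share < ceiling:
        gdp_share *= 2
        years += doubling_time
    return years

# If research consumed 2% of GDP and kept doubling every 15 years,
# it would exceed the entire GDP in under a century:
print(years_to_exceed(0.02))  # 90
```

Long before reaching that absurd ceiling, of course, the share collides with every other claim on national resources, which is why the steady state Price predicted duly arrived in the latter half of the 20th century.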

Purely predatory publishing has added more useless material to what was already unmanageably voluminous, with only rare needles in these haystacks that could be of any actual practical use to the wider society.

Since almost all of this publication has to be paid for by the authors or their research grants or patrons, one could also characterize present-day scholarly and scientific publication as vanity publishing, serving to the benefit only of the author(s) — except that this glut of publishing now supports yet another publishing community, the scholars of citation indexes and journal impact factors, who concern themselves for example with “Google h5 vs Thomson Impact Factor” or who offer advice for potential authors and evaluators and administrators about “publishing or perishing”.

To my mind, the most damaging aspect of all this is not the waste of time and material resources on producing useless stuff; it is that judgment of quality by informed, thoughtful individuals is being steadily displaced by reliance on numbers generated by information technology through procedures that all thinking people understand to be invalid substitutes for informed, thoughtful human judgment.

 

Posted in conflicts of interest, funding research, media flaws, scientific culture | Tagged: , , | 3 Comments »

 