Skepticism about science and medicine

In search of disinterested science


How science changed — V. And changed academe

Posted by Henry Bauer on 2018/04/19

After WWII, lavish support for science made it a cash cow that academe used to change itself, a change abetted by the corruption of collegiate sport.

*               *               *              *                 *                *              *               *

Science began as an informal cottage industry; nowadays it is a highly organized bureaucratic behemoth that is pervasively intertwined with other sectors of human society.

Science began as a disinterested quest to understand how the world works; practical applications were an incidental though welcome byproduct. Nowadays, society values science for its byproducts more than for the truths it reveals about Nature.

Teaching institutions, colleges and universities, were founded to educate (albeit sometimes to indoctrinate) future generations. Nowadays much of academe has become a self-serving enterprise in which institutions seek status and prestige from what used to be incidental byproducts: research in academe now has an immediate eye out for patents and potential commercial applications, and intercollegiate sports, once played for local enjoyment, have become mass entertainment and a source of lucrative revenue. A research university will have many dozens of administrators engaged in managing grant-related matters, intellectual-property matters, compliance with regulations, the status of research staff, and so on. Almost every university also has many dozens of administrative staff managing its intercollegiate sports programs, as well as coaches (whose salaries often exceed that of the university president) and assistant coaches (whose salaries are comparable to or exceed those of full professors).

*                  *                 *                *                *                *                *                *

Scientific activity changed from a cottage industry quite slowly at first, and in fits and starts. Already in the 19th century science had been important in the commercial dye-stuff industry. During the First World War, the German war effort was supported by the chemist Fritz Haber’s discovery of how to synthesize fertilizers and explosives using the nitrogen in the air. During the 1930s, medical practice began to have genuinely curative capabilities with the discovery of bacteria-killing sulfonamides. But, by and large, up to the Second World War scientific activity remained something of a cottage industry, and basic scientific research was largely an academic ivory-tower activity.

World War II demonstrated the powerful capabilities of applications of scientific understanding: not only the war-ending atomic bombs but also, earlier, the sonar that was such an invaluable weapon against submarines and the radar that was invaluable to Great Britain in staving off the German bombers of the Blitz, as well as all sorts of developments and improvements in weaponry and in techniques of communication and navigation.

Vannevar Bush had been director of the U.S. Office of Scientific Research and Development, seeing at first hand what science could accomplish. Shortly after the end of World War II he presented the president of the United States with a report entitled Science: The Endless Frontier, which suggested that scientific research and development could be as valuable to peacetime society as science had proved to be in warfare.

Bush’s initiative is generally credited with the subsequent enormous, unprecedented resources directed into the expansion of scientific activity. The federal support of science came in part as grants to support research activity in the form of specific proposed projects, but also in large part through scholarships and fellowships to stimulate more students to go into science as a career.

That influx of funds led to truly far-reaching changes in academe.

Traditionally, the role of universities was to provide tertiary education, preparing people for the professions. A small proportion of academe comprised so-called “research universities” where the faculty were as much concerned with extending the boundaries of scholarship and of science as they were with the education and training of students; yet the research and scholarship were designed to serve the aim of educating students to become independent professionals. However, the emphasis on scientific research and on training more scientists led eventually to the contemporary circumstances where the primary aim is determined by the demands of the research project rather than by whether the work is best suited for the students to learn how to do independent research. Graduate students came to be seen as cheap technical help rather than as apprentices to be nurtured; science faculty among themselves could be heard referring to the graduate students they were mentoring as “pairs of hands”. In earlier days, prospective graduate students in the sciences would choose their mentors to fit with the students’ specific research interests; nowadays graduate students in the sciences sign on to mentors who have the research grants to support them and they work as cogs in the mentor’s long-term research program [1].

The overt aim of supporting and enhancing science had the corollary effect, no doubt unforeseen and unintended, of making science more prestigious than other intellectual fields within colleges and universities. In time, that tempted some of those other fields to distort themselves in trying to mimic science and gain comparable status and prestige thereby. And not only intellectual prestige: science (and engineering and medical) faculty had higher salaries than faculty in the humanities and the social sciences, and moreover scientists could augment their academic “9-month” salaries with an extra 20–30% from their research grants as summer-time stipends.

In the humanities, for example — philosophy, history, to some degree psychology — scholarship traditionally focused on critical analysis of the classic insights of earlier scholars, with comparatively little expectation that entirely novel, ground-breaking insights could be attained. Scholars in the humanities would occasionally publish critiques and analyses and perhaps eventually scholarly monographs. By contrast, in the sciences the emphasis was on novelty, on going beyond what was already known. As other parts of academe developed the ambition to be as well supported fiscally, and thereby as highly regarded, as the sciences, they too came to emphasize originality and publication. Graduate students working towards doctoral degrees in history or psychology or sociology are nowadays supposed to generate stuff that deserves publication, often as a monograph. The sciences have become an inappropriate role model for other intellectual disciplines.

The pots of gold available for science-related activities also tempted whole institutions, four-year colleges and teachers’ colleges in particular, to seek prestige and status by transforming themselves into “research” universities. By hiring scientists, such institutions could obtain grants whose amounts were calculated not only to cover the actual costs of the research but also “overhead” costs to reimburse the whole institution for the use of its infrastructure pertinent to the research (“indirect costs” became a popular euphemism for “overhead”). Those indirect costs could be as high as a 50% surcharge on the actual costs of research, and that provided a pool of money that upper-level administrators could draw on for all sorts of things; the arithmetic is sketched below. In the 1940s, the United States had 107 doctorate-granting research universities; by 1950–54 there were 142, by 1960–64 there were 208, and by 1970–74 the number had grown to 307 [2]; since then the rate of growth has been much less, with a count of 334 in 2016 [3].
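To make that institutional arithmetic concrete, here is a minimal sketch in Python. The 50% overhead rate is the figure cited above; the budget categories and dollar amounts are invented purely for illustration.

```python
# Hypothetical grant budget showing how "indirect costs" (overhead) become
# discretionary income for the institution. The 50% rate is the surcharge
# mentioned above; all the direct-cost figures are invented for illustration.

OVERHEAD_RATE = 0.50  # surcharge applied to the direct costs of research

direct_costs = {
    "salaries and stipends": 220_000,
    "equipment and supplies": 60_000,
    "travel": 20_000,
}

direct_total = sum(direct_costs.values())
overhead = direct_total * OVERHEAD_RATE  # paid to the institution, not the lab
award_total = direct_total + overhead    # what the funding agency pays out

print(f"Direct costs of research:  ${direct_total:,}")      # $300,000
print(f"Overhead (indirect costs): ${overhead:,.0f}")       # $150,000
print(f"Total award:               ${award_total:,.0f}")    # $450,000
```

On a $300,000 project, in other words, the institution collects an extra $150,000 that its upper-level administrators can draw on.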


The influx of science-related money may have stimulated academe to change in inappropriate and undesirable ways, but science cannot be held responsible for all of today’s ills of academe. Like science, like sports, like so much else, academe has been corrupted by the love of money. One of the most serious consequences is the progressive elimination of tenure-track faculty, replaced by teachers on fixed-term contracts. Academic freedom cannot exist in the absence of tenure, and genuine freedom of thought, expression, and criticism cannot exist in the absence of academic freedom.

Perhaps the most fundamental problem is that academe and science both should be venues for unfettered truth-seeking activities. But truth-seeking is inevitably subversive, and it is never supported for its own sake by the powers that be. The corruption and distortion of science and academe make it easier for non-truths to spread, which is dangerous for the long-term health of society.

=========================================

[1]    Now-graduated Jorge Cham has described life as a graduate student by means of comic strips: see Sara Coelho, “Piled Higher and Deeper: The everyday life of a grad student”, Science, 323 (2009) 1668–9.
[2]    A Century of Doctorates: Data Analyses of Growth and Change, National Academies Press, 1978.
[3]    According to the Carnegie Classification of Institutions of Higher Education


***************************************************************************



Posted in funding research, science policy, scientific culture | Tagged: science changed academe, corruption of academe | Leave a Comment »

How science changed — IV. Cutthroat competition and outright fraud

Posted by Henry Bauer on 2018/04/15

The discovery of the structure of DNA was a metaphorical “canary in the coal mine”, warning of the intensely competitive environment that was coming to scientific activity. The episode illustrates in microcosm the seismic shift in the circumstances of scientific activity that started around the middle of the 20th century [1], the replacement of one set of unwritten rules by another set [2].
The structure itself was discovered by Watson and Crick in 1953, but it was only in 1968, with the publication of Watson’s personal recollections, that attention was focused on how Watson’s approach and behavior marked a break from the traditional unwritten rules of scientific activity.
It took even longer for science writers and journalists to realize just how cutthroat the competition had become in scientific and medical research. Starting around 1980 there appeared a spate of books describing fierce fights for priority on a variety of specific topics:
Ø    The role of the brain in the release of hormones; Guillemin vs. Schally — Nicholas Wade, The Nobel Duel: Two Scientists’ 21-year Race to Win the World’s Most Coveted Research Prize, Anchor Press/Doubleday, 1981.
Ø    The nature and significance of a peculiar star-like object — David H. Clark, The Quest for SS433, Viking, 1985.
Ø    “‘Mentor chains’, characterized by camaraderie and envy, for example in neuroscience and neuropharmacology” — Robert Kanigel, Apprentice to Genius: The Making of a Scientific Dynasty, Macmillan, 1986.
Ø    High-energy particle physics, atom-smashers — Gary Taubes, Nobel Dreams: Power, Deceit, and the Ultimate Experiment, Random House, 1986.
Ø    “Soul-searching, petty rivalries, ridiculous mistakes, false results as rivals compete to understand oncogenes” — Natalie Angier, Natural Obsessions: The Search for the Oncogene, Houghton Mifflin, 1987.
Ø    “The brutal intellectual darwinism that dominates the high-stakes world of molecular genetics research” — Stephen S. Hall, Invisible Frontiers: The Race to Synthesize a Human Gene, Atlantic Monthly Press, 1987.
Ø    “How the biases and preconceptions of paleoanthropologists shaped their work” — Roger Lewin, Bones of Contention: Controversies in the Search for Human Origins, Simon & Schuster, 1987.
Ø    “The quirks of . . . brilliant . . . geniuses working at the extremes of thought” — Ed Regis, Who Got Einstein’s Office? Eccentricity and Genius at the Institute for Advanced Study, Addison-Wesley, 1987.
Ø    High-energy particle physics — Sheldon Glashow with Ben Bova, Interactions: A Journey Through the Mind of a Particle Physicist and the Matter of the World, Warner, 1988.
Ø    Discovery of endorphins — Jeff Goldberg, Anatomy of a Scientific Discovery, Bantam, 1988.
Ø    “Intense competition . . . to discover superconductors that work at practical temperatures” — Robert M. Hazen, The Breakthrough: The Race for the Superconductor, Summit, 1988.
Ø    Science is done by human beings — David L. Hull, Science as a Process, University of Chicago Press, 1988.
Ø    Competition to get there first — Charles E. Levinthal, Messengers of Paradise: Opiates and the Brain, Anchor/Doubleday 1988.
Ø    “Political machinations, grantsmanship, competitiveness” — Solomon H. Snyder, Brainstorming: The Science and Politics of Opiate Research, Harvard University Press, 1989.
Ø    Commercial ambitions in biotechnology — Robert Teitelman, Gene Dreams: Wall Street, Academia, and the Rise of Biotechnology, Basic Books, 1989.
Ø    Superconductivity, intense competition — Bruce Schechter, The Path of No Resistance: The Story of the Revolution in Superconductivity, Touchstone (Simon & Schuster), 1990.
Ø    Sociological drivers behind scientific progress, and a failed hypothesis — David M. Raup, The Nemesis Affair: A Story of the Death of Dinosaurs and the Ways of Science, Norton 1999.

These titles illustrate that observers were able to find intense competitiveness wherever they looked in science, though mostly in medical or biological science, with physics (including astronomy) the next most frequently mentioned field of research.
Watson’s memoir had not only featured competition most prominently, it had also revealed that older notions of ethical behavior no longer applied: Watson was determined to get access to competitors’ results even if those competitors were not yet anxious to reveal all to him [3]. It was not only competitiveness that increased steadily over the years; so too did the willingness to engage in behavior that not so long before had been regarded as improper.
Amid the spate of books about how competitive research had become, there was also published Betrayers of the Truth: Fraud and Deceit in the Halls of Science, by science journalists William Broad and Nicholas Wade (Simon & Schuster, 1982). This book argued that dishonesty has always been present in science, citing in an appendix 33 “known or suspected” cases of scientific fraud from 1981 back to the 2nd century BC. These actual data could not support the book’s sweeping generalizations [4], but Broad and Wade had been very early to draw attention to the fact that dishonesty in science was a significant problem. What they failed to appreciate was why: not that there had always been a notable frequency of fraud in science but that scientific activity was changing in ways that were making it a different kind of thing than it had been during the halcyon few centuries of modern science, from the 17th century to the middle of the 20th.
Research misconduct had featured in Congressional Hearings as early as 1981. Soon the Department of Health and Human Services established an Office of Scientific Integrity, now the Office of Research Integrity. Its mission is to instruct research institutions about preventing fraud and dealing with allegations of it. Scientific periodicals began to ask authors to disclose conflicts of interest, and co-authors to state specifically what portions of the work were their individual responsibility.
Centers for Research Ethics and for Medical Ethics have proliferated in academe [5], and there are now periodicals entirely devoted to such matters [6]. Courses in research ethics have become increasingly common; it is even required that such courses be available at institutions that receive research funds from federal agencies.
In 1989, the Committee on the Conduct of Science of the National Academy of Sciences issued the booklet On Being a Scientist, which describes proper behavior; that booklet’s 3rd edition, subtitled A Guide to Responsible Conduct in Research, makes even clearer that the problem of scientific misconduct is now widely seen as serious.
Another indication that dishonesty has increased is the quite frequent retraction of published research reports: Retraction Watch estimates that 500–600 published articles are retracted annually. John Ioannidis has made a specialty of reviewing the literature for consistency, and reported “Why most published research findings are false” [7]; the gist of his argument is sketched below. Nature has an archive devoted to this phenomenon [8].
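The heart of Ioannidis’s argument is simple arithmetic about false positives. Here is a minimal sketch in Python: the formula follows his paper [7] (ignoring his additional terms for bias and multiple competing teams), while the prior-odds and power values plugged in below are merely illustrative.

```python
# Sketch of the argument in Ioannidis [7]: when only a small fraction of the
# hypotheses a field tests are true, "statistically significant" findings can
# be more often false than true. The paper's bias terms are omitted here.

def positive_predictive_value(prior_odds, power=0.8, alpha=0.05):
    """Fraction of significant findings that are actually true.

    prior_odds: Ioannidis's R, the ratio of true to false hypotheses tested.
    power:      chance a true effect reaches significance (1 - beta).
    alpha:      chance a false hypothesis reaches significance anyway.
    """
    true_positives = power * prior_odds   # per false hypothesis tested
    false_positives = alpha * 1.0
    return true_positives / (true_positives + false_positives)

# An exploratory field testing one true hypothesis per twenty false ones:
print(positive_predictive_value(prior_odds=1 / 20))             # ~0.44
# The same field with the low power typical of small studies:
print(positive_predictive_value(prior_odds=1 / 20, power=0.2))  # ~0.17
```

With plausible values for exploratory research, fewer than half of the “significant” findings are true, which is the sense of the paper’s title.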

Researchers half a century ago would have been aghast at all this, disbelieving that science could have become so untrustworthy. It has happened because science changed from an amateur avocation to a career that can bring fame and wealth [9]; because scientific activity changed from a cottage industry to a highly bureaucratic corporate industry, with pervasive institutional as well as individual conflicts of interest; and because researchers’ demands for support have far exceeded the available supply.

And as science changed, it drew academe along with it. More about that later.

===============================================

[1]    How science changed — III. DNA: disinterest loses, competition wins
[2]    How science has changed — II. Standards of Truth and of Behavior
[3]    The individuals Watson mentioned as getting him access corrected his recollections: they shared with him nothing that was confidential. The significant point remains that Watson had no such scruples.
[4]    See my review, “Betrayers of the truth: a fraudulent and deceitful title from the journalists of science”, 4S Review, 1 (#3, Fall) 17–23.
[5]   There is an Online Ethics Center for Engineering and Science. Physical Centers have been established at: University of California, San Diego (Center for Ethics in Science and Technology); University of Delaware (Center for Science, Ethics and Public Policy); Michigan State University (Center for Ethics and Humanities in the Life Sciences); University of Notre Dame (John J. Reilly Center for Science, Technology, and Values).
[6]    Accountability in Research (founded 1989); Science and Engineering Ethics (1997); Ethics and Information Technology (1999); BMC Medical Ethics (2000); Ethics in Science and Environmental Politics (2001).
[7]    John P. A. Ioannidis, “Why Most Published Research Findings Are False”, PLoS Medicine, 2 (2005): e124. 
[8]    “Challenges in irreproducible research”
[9]    How science has changed: Who are the scientists?

Posted in conflicts of interest, fraud in medicine, fraud in science, funding research, media flaws, science is not truth, scientific culture, scientists are human | Leave a Comment »

How science changed — III. DNA: disinterest loses, competition wins

Posted by Henry Bauer on 2018/04/10

The Second World War marked a shift of economic and political power from Europe to the United States, with associated changes in the manner and style with which those powers are deployed. Science began to change at about the same time and in somewhat analogous and perhaps associated ways.

The change in the norms of science, from CUDOS to PLACE, that Ziman had described (How science has changed — II. Standards of Truth and of Behavior) began with what happened in the middle of the 20th century. The first of the Mertonian norms to fade away was disinterestedness: Science came to be like other spheres of human activity in that some people chose to pursue it as an avenue for satisfying personal ambition rather than as an opportunity to serve the public good.

My cohort of science students in Australia in the early 1950s had been notably idealistic about science. We could imagine no finer future than the opportunity to earn a living doing science. The relative absence of excessive personal ambition may have stemmed in large part from the fact that Australia was at that time a profoundly egalitarian society; no one should imagine himself to be “better” than anyone else [1].

Our ideals about science included taking honesty for granted, as Merton had.

Our ranking of desirable occupations had doing research in a university setting at the top. Those who were not good enough to do innovative self-directed research would still be able to have a place in science by working in industry. If one were not talented enough even for that, one would have to make do with teaching science. And if one could not even do that, then it would have to be some sort of administrative job. I still recall the minor functionary at the University of Sydney who represented a living lesson for us in the wages of sin: As a graduate student in chemistry, he had faked some of his results, and so he had been condemned to lifelong labor as a paper pusher.

The sea change in science around the middle of the 20th century is illustrated in microcosm by the circumstances of the discovery of the structure of DNA by James Watson and Francis Crick. Watson’s description of that discovery in his memoir, The Double Helix (Atheneum, 1968), and the reactions to that book in the scientific community, illustrate the profound changes in scientific activity beginning to take place around that time. Gunther Stent’s annotated edition of The Double Helix [2] provides a ready source for appreciating how the DNA discovery touches on many aspects of how scientific activity changed profoundly, beginning in the middle of the 20th century; the edition includes the original text of the book, commentaries, many of the original book reviews, and pertinent articles.

Watson himself, as portrayed in his own memoir, exemplifies the brash, personally ambitious American ignorant of or simply ignoring the traditional ways of doing things, in personal behavior as well as in doing science [3].

In Watson’s memoir, traditional ways including disinterestedness are exemplified by the Europeans Max Perutz and Erwin Chargaff. Perutz had been working diligently for a decade or so, gradually refining what could be learned about the structure of proteins through the technique of X-ray crystallography. With similar diligence Erwin Chargaff had been analyzing the chemical constitutions of DNA from a variety of different sources. Both those research approaches comported with traditional experience that carefully accumulating sufficient pertinent information would eventually be rewarded by important new understanding. In Britain, since Maurice Wilkins and Rosalind Franklin were working on DNA structure via X-ray crystallography, no other British lab would trespass onto that research project.

Watson of course had no such scruples, nor was he prepared to wait for the traditional ways to pay off; Watson’s own words make it appear that his prime motivation was to make a name for himself — any advance in human understanding, for the public good, would be a byproduct.

To short-circuit old-fashioned laborious approaches, he and his co-worker Francis Crick looked to what had been pioneered by another American, Linus Pauling, who is often still regarded as the outstanding chemist of the 20th century. Pauling did also use X-ray crystallography, but only as a secondary adjunct. He had laid the foundations for an understanding of chemical bonding and had been interested from the beginning in the three-dimensional structures of molecules; applying his insights to the study of macromolecules, he succeeded in elucidating the configuration of protein molecules in part by constructing feasible molecular models.

Traditional cosmopolitan European culture could be disdainful and snobbish toward the parvenu, nouveau-riche American ways that were taking over the world, including the world of science. Erwin Chargaff provides an apposite, rather sad illustration. Not only did he dislike Watson’s personality and actions, he also led himself to believe that his own diligent traditional work on the chemical composition of DNA should have been rewarded by a share of the Nobel Prize. Chargaff’s review [4] of The Double Helix flaunts his cultured erudition and also reveals his personal disappointment; later he refused Gunther Stent permission to reprint his review, in company with all the others, in Stent’s annotated edition.

The technical point at issue is that Chargaff had been content to allow results to accumulate until insight revealed itself rather than to take a gamble on some premature interpretation: he had merely remarked on an apparently consistent ratio of purines to pyrimidines in the DNA from a variety of sources [5]: “It is . . . noteworthy — whether this is more than accidental cannot yet be said — that in all deoxypentose nucleic acids examined thus far the molar ratios of total purines to total pyrimidines, and also of adenine to thymine and of guanine to cytosine, were not far from 1”.

The important insight, however, is that the numbers are exactly equal: adenine faces thymine, and guanine faces cytosine, in the molecular structure of DNA, and that is the central and crucial feature of the double helix. In hindsight, Chargaff wanted his tentative statement of approximate equality to be construed as “the discovery of the base-pairing regularities” [4].
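A few lines of Python make the distinction concrete. The base counts below are invented for illustration (Chargaff’s measured values are in [5]); the point is only that measured ratios “not far from 1” hint at the exact equalities that base pairing enforces.

```python
# Illustrative check of Chargaff's ratios; the counts are invented, not data.
# In double-stranded DNA, base pairing forces A = T and G = C exactly.

base_counts = {"A": 302_000, "T": 301_500, "G": 198_200, "C": 198_900}

purines = base_counts["A"] + base_counts["G"]
pyrimidines = base_counts["T"] + base_counts["C"]

print(f"A/T ratio:           {base_counts['A'] / base_counts['T']:.3f}")  # ~1.002
print(f"G/C ratio:           {base_counts['G'] / base_counts['C']:.3f}")  # ~0.996
print(f"purines/pyrimidines: {purines / pyrimidines:.3f}")                # ~1.000
```

Chargaff reported such near-unity ratios; the double-helix model explains why they should be exactly 1.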

Erwin Chargaff may have been acerbic and ungenerous in his book review, but he will also have spoken for generations of scientists in his regret for the passing of the more idealistic, disinterested, traditional order and distaste for what was replacing it: “in our time a successful cancer researcher is not one who ‘solves the riddle,’ but rather one who gets a lot of money to do so” [6]; “Watson’s book may contribute to the much-needed demythologization of modern science”; “with very few exceptions, it is not the men that make science; it is science that makes the men” [4].

That disappearing idealistic traditional order might be exemplified in Sinclair Lewis’s Arrowsmith. First published in 1925 by Harcourt, Brace, it has, according to amazon.com, appeared in more than 80 later editions, including a 2008 paperback. Evidently the yearning remains strong for disinterested science for the public good. The book’s protagonist, after some early mis-steps and yieldings to commercial temptations, opts for pure research for the good of humankind. Even a couple of decades ago, an academic of my generation (a biochemist) told me that he still gave his graduate students Arrowsmith to read as a guide to the proper ethos of science.

That occasion for being reminded of Arrowsmith was a series of seminars I was then holding on our campus about ethics in research [7], a topic that was just becoming prominent as instances of dishonesty in scientific work were beginning to be noted with increasing frequency.

More about that in a future blog post.

========================================

[1]    A widely shared view was that “tall poppies” should be decapitated. A highly educated Labor-Party leader was careful to adopt a working-class accent in public to hide his normal “educated”, British-BBC-type dialect. I personally saw fisticuffs occasioned by one party feeling that the other had thought themselves better in some way.
[2]    Gunther S. Stent (ed.), The Double Helix — Text, Commentary, Reviews, Original Papers, W. W. Norton, 1980
[3]    I had begun to sense the new self-serving ethos in science in the late 1960s, after a career move from Australia to the USA. I encountered ambitious young go-getters who luxuriated in the [then!] largesse of research support, inserting personal pleasures into publicly funded research travel, for example studying aspects of marine environments in ways that made possible scuba-diving and general cavorting in the Caribbean. I participated in the WETS, one of the informal associations of young up-and-comers who used to sample fleshly diversions as part of research-grant-paid trips to professional conferences.
[4]    Erwin Chargaff, “A quick climb up Mount Olympus”, Science, 159 (1968) 1448-9
[5]    Erwin Chargaff, “Chemical specificity of nucleic acids and mechanism of their enzymatic degradation”, Experientia, 6 (1950) 201-40
[6]    Erwin Chargaff, Voices in the Labyrinth, Seabury, 1977, p. 89
[7]    For instance, “Ethics in Science” under “Current topics in analytical chemistry: critical analysis of the literature”, 15 & 17 March 1994; reprinted at pp. 169–182 in Against the Tide, ed. Martín López Corredoira & Carlos Castro Perelman, Universal Publishers, 2008.


Posted in peer review, scientific culture, scientists are human | Leave a Comment »

How science has changed — II. Standards of Truth and of Behavior

Posted by Henry Bauer on 2018/04/08

The scientific knowledge inherited from ancient Babylon and Greece and from medieval Islam was gained by individuals or by groups isolated from one another in time as well as geography. Perhaps the most consequential feature of the “modern” science that we date from the 17th-century Scientific Revolution is the global interaction of the people who are doing science, and especially the continuity over time of their collective endeavors.
These interactions among scientists began in quite informal and individual ways. An important step was the formation of academies and societies, among which the Royal Society of London is usually acknowledged to be the earliest (founded 1660) that has remained active up to the present time — though it was not the earliest such institution and even the claim of “longest continually active” has been challenged [1].
Even nowadays, the global community of scientists remains in many ways informal despite the host of scientific organizations and institutions, national and international: the global scientific community is not governed by any formal structure that lays down how science should be done and how scientists should behave.
However, observing the actualities of scientific activity indicates that there had evolved some agreed-on standards generally seen within the community of scientists as proper behavior. Around the time of the Second World War, sociologist Robert Merton described those informal standards, and they came to be known as the “Mertonian Norms” of science [2]. They comprise:

Ø    Communality or communalism (Merton had said “communism”): Science is an activity of the whole scientific community and it is a public good — findings are shared freely and openly.
Ø    Universalism: Knowledge about the natural world is universally valid and applicable. There are no separations or distinctions by nationality, religion, race, or anything of that sort.
Ø    Disinterestedness: Science is done for the public good and not for personal benefit; scientists seek to be impartial, objective, unbiased, and not self-serving.
Ø    Skepticism: Claims and reported findings are subject to critical appraisal and testing throughout the scientific community before they can be accepted as proper scientific knowledge.

Note that honesty is not mentioned; it was simply taken for granted.
These norms clearly make sense for a cottage industry, as ideal behavior that individuals should aim for; but they are not appropriate for a corporate environment: they cannot guide the behavior of individuals who are part of some hierarchical enterprise.
In the late 1990s, John Ziman [3] discussed the change in scientific activity as it morphed from the work of an informal, voluntary collection of individuals seeking to understand how the world works into a highly organized activity with assigned levels of responsibility and authority, in which the sources of research funding have a say in what gets done and often expect to get something useful in return for their investments, something profitable.
The early cottage industry of science had been essentially self-supporting. Much could be done without expensive equipment. People studied what was conveniently at hand, so there was little need for funds to support travel. Interested patrons and local benefactors could provide the small resources needed for occasional meetings and the publication of findings.
Up to about the middle of the 20th century, universities were able to provide the funds needed for basic research in chemistry and biology and physics. The first sign that exceptional resources could be needed had come in the early 1930s, when Lawrence constructed the first large “atom-smashing machine”; but that and the need for expensive astronomical telescopes remained outliers in the requirements for the support of scientific research overall.
From about the time of the Second World War, however, research going beyond what had already been accomplished began to require ever more expensive and specialized equipment as well as considerable infrastructure: technicians to support the equipment, glass-blowers and secretaries and book-keepers and librarians, and managers of such ancillary staff; so researchers increasingly came to need support beyond that available from individual patrons or universities. Academic research came to rely increasingly on getting grants for specific research projects from public agencies or from wealthy private foundations.
Although those sources of research funds typically claim that they want to support simply “the best science”, their view of what the best science is does not necessarily jibe with the judgments of the individual researchers [4].
At the same time as research in universities was calling on outside sources of funding, an increasing number of industries were setting up their own laboratories for research specifically toward creating and improving their products and services. Such product-specific “R&D” (research and development) sometimes turned up novel basic knowledge, or revealed the need for such fundamentally new understanding. One consequence has been that some really striking scientific advances have come from such famous industrial laboratories as Bell Telephone Laboratories or the Research Laboratory of General Electric. Researchers employed in industry have received a considerable number of Nobel Prizes, often jointly with academics [5].
Under these new circumstances, as Ziman [3] pointed out, the traditional distinction between “applied” research and “pure” or “basic” research lost its meaning.
Ziman rephrased the Mertonian norms as the nice acronym CUDOS, adding the “O” for originality, quite appropriately since within the scientific community credit was and is given for the most innovative, original contributions; CUDOS, or preferably “kudos”, being the Greek term for acclaim of exceptional accomplishment. By contrast, Ziman proposed for the norms that obtain in a corporate scientific enterprise, be it governmental or private, the acronym PLACE: Researchers nowadays get their rewards not by adhering to the Mertonian norms but by producing Proprietary findings whose significance may be purely Local rather than universal, the subject of research having been chosen under the Authority of an employer or patron and not by the individual researcher, who is Commissioned to do the work as an Expert employee.

Ziman, too, did not mention honesty; like Merton, he simply took it for granted.
Ziman had made an outstanding career in solid-state physics before he began, in his middle years, to publish, starting in 1968 [6], highly insightful works about how science functions, in particular what makes it reliable. In the late 1960s it had still been reasonable to take honesty in science for granted; but by the time Ziman published Prometheus Bound, honesty in science could no longer be taken for granted: Ziman had failed to notice some of what was happening in scientific activity. Competition for resources and for career advancement had increased to a quite disturbing extent, presumably the impetus for the increasing frequency with which scientists were found to have cheated in some way. In many cases even published, supposedly peer-reviewed research failed later attempts at confirmation, and all too often it was revealed as simply false, faked [7].
More about that in a following blog post.

==========================================

[1]    “The Royal Societies [sic] claim to be the oldest is based on the fact that they developed out of a group that started meeting in Gresham College in 1645 but unlike the Leopoldina this group was informal and even ceased to meet for two years between 1658 and 1660” — according to The Renaissance Mathematicus, “It wasn’t the first but…”
[2]    Robert K. Merton, “The normative structure of science” (1942); most readily accessible as pp. 267–78 in The Sociology of Science (ed. N. Storer, University of Chicago Press, 1973) a collection of Merton’s work
[3]    John Ziman, Prometheus Bound: Science in a Dynamic Steady State, Cambridge University Press, 1994
[4]    Richard Muller, awarded a prize by the National Science Foundation, pointed out that truly innovative studies are unlikely to be funded and need to be carried out more or less surreptitiously; and Charles Townes, who developed masers and lasers, testified to his difficulty in getting research support for that ground-breaking work, or even encouragement from some of his distinguished older colleagues —
Richard A. Muller, “Innovation and scientific funding”, Science, 209 (1980) 880–3
Charles Townes, How the Laser Happened: Adventures of a Scientist, Oxford University Press, 1999
[5]    Karina Cummings, “Nobel Science Prizes in industry”;
Nobel Laureates and Research Affiliations
[6]    John Ziman, Public Knowledge (1968); followed by The Force of Knowledge (1976); Reliable Knowledge (1978); An Introduction to Science Studies (1984); Prometheus Bound (1994); Real Science (2000); all published by Cambridge University Press
[7]    John P. A. Ioannidis, “Why most published research findings are false”, PLoS Medicine, 2 (2005): e124;
Daniele Fanelli, “How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data”, PLoS ONE, 4 (#5, 2009): e5738

Posted in conflicts of interest, fraud in medicine, fraud in science, funding research, peer review, resistance to discovery, science is not truth, scientific culture, scientists are human | Leave a Comment »

How science has changed: Who are the scientists?

Posted by Henry Bauer on 2018/04/07

Scientists are people who do science. Nowadays scientists are people who work at science as a full-time occupation and who earn their living at it.
Science means studying and learning about the natural world, and human beings have been doing that since time immemorial; indeed, in a sense all animals do that, but humans have developed efficient means to transmit gained knowledge to later generations.
At any rate, there was science long before [1] there were scientists, full-time professional students of Nature. Our present-day store of scientific knowledge includes things that have been known for thousands of years. For example, from ancient Mesopotamia (Babylon, Sumer) we still use base-60 mathematics for the number of degrees in the arcs of a circle (360), the number of seconds in a minute, and the number of minutes in an hour. We still cry “Eureka” (found!) for a new discovery, as supposedly Archimedes did more than 2000 years ago when he recognized that submerging an object in water is an easy way to measure its volume (by the rise in the height of the water) and that an object is buoyed up by a force equal to the weight of the water it displaces. The Islamic science of the Middle Ages has left its mark in language with, for instance, algebra or alchemy.
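Archimedes’ displacement method is easily put into numbers; here is a minimal sketch in Python, with invented dimensions.

```python
import math

# Invented example of measuring volume by displacement: submerge an irregular
# object in a cylindrical vessel and read its volume off the rise in water level.

vessel_radius_cm = 10.0
water_rise_cm = 2.5  # measured rise once the object is fully under water

object_volume_cm3 = math.pi * vessel_radius_cm**2 * water_rise_cm
print(f"Object volume: {object_volume_cm3:.0f} cm^3")  # ~785 cm^3

# Archimedes' principle: the buoyant force equals the weight of the displaced
# water -- about 1 gram per cm^3 -- whatever the object itself is made of.
displaced_water_g = object_volume_cm3 * 1.0
print(f"Weight of displaced water: {displaced_water_g:.0f} g")
```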
Despite those early pieces of science that are still with us today, most of what the conventional wisdom thinks it knows about science is based on what historians call “modern” science, which is generally agreed to have emerged around the 17th century in what is usually called The Scientific Revolution.
The most widely known bits of science are surely the most significant advances. Those are typically associated with the names of people who either originated them or made them popular [2]; so many school-children hear about Archimedes and perhaps Euclid and Ptolemy; and for modern science, even non-science college students are likely to hear of Galileo and Newton and Darwin and Einstein. Chemistry students will certainly hear about Lavoisier and Priestley and Wöhler and Haber; and so on, just as most of us have learned about general history in terms of the names of important individuals. So far as science is concerned, most people are likely to gain the general impression that it has been done and is being done by a relatively small number of outstanding individuals, geniuses in fact. That impression could only be entrenched by the common thought-bite that “science” overthrew “religion” sometime in the 19th century, leading to the contemporary role of science as society’s ultimate arbiter of true knowledge.
The way in which scientists in modern times have been featured in books and in films also gives the impression that scientists are somehow special, that they are by no means ordinary people. Roslynn Haynes [3] identified several stereotypes of scientists, for example “adventurer” or “the noble scientist as hero or savior of society”, with most stereotypes however being less than favorable — “mad, bad, dangerous scientist, unscrupulous in the exercise of power”. But no matter whether good or bad in terms of morals or ethics, society’s stereotype of “scientist” is “far from an ordinary person”.
That is accurate enough for the founders of modern science, but it became progressively less true as more and more people came to take part in some sort of scientific activity. Real change began in the early decades of the 19th century, when the term “scientist” seems to have been used for the first time [4].
By the end of the 19th century it had become possible to earn a living through being a scientist, through teaching or through doing research that led to commercially useful results (as in the dye-stuff industry) or through doing both in what nowadays are called research universities. By the early 20th century, scientists no longer deserved to be seen as outstanding individual geniuses, but they were still a comparatively elite group of people with quite special talents and interests. Nowadays, however, there is nothing distinctly elite about being a scientist. In terms of numbers (in the USA), scientists at roughly 2.7 million are comparable to engineers at 2.1 million (in ~2001), less elite than lawyers (~1 million) or doctors (~800,000); and teachers, at ~3.5 million, are almost as elite as scientists.
Nevertheless, so far as the general public and the conventional wisdom are concerned, there is still an aura of being special and distinctly elite associated with science and being a scientist, no doubt because science is so widely acknowledged as the ultimate authority on what is true about the workings of the natural world; and because “scientist” brings to most minds someone like Darwin or Einstein or Galileo or Newton.
So the popular image of scientists is wildly wrong about today’s world. Scientists today are unexceptional white-collar workers. Certainly a few of them could still be properly described as geniuses, just as a few engineers or doctors could be — or those at the high tail-end of any distribution of human talent; but by and large, there is nothing exceptional about scientists nowadays. That is an enormous change from times past, and the conventional wisdom has not begun to be aware of that change.
One aspect of that change is that the first scientists were amateurs seeking to satisfy their curiosity about how the world works, whereas nowadays scientists are technicians or technical experts who do what they are told to do by employers or enabled to do by patrons. A very consequential corollary is that the early scientists had nothing to gain by being untruthful, whereas nowadays the rewards potentially available to prominent scientists have tempted a significant number to practice varying degrees of dishonesty.
Another way of viewing the change that science and scientists have undergone is that science used to be a cottage industry largely self-supported by independent entrepreneurial workers, whereas nowadays science is a corporate behemoth whose workers are apparatchiks, cogs in bureaucratic machinery; and in that environment, individual scientists are subject to conflicts of interest and a variety of pressures owing to their membership in a variety of groups.

Science today is not a straightforward seeking of truth about how the world works; and claims emerging from the scientific community are not necessarily made honestly; and even when made honestly, they are not necessarily true. More about those things in future posts.

=======================================

[1]    For intriguing tidbits about pre-scientific developments, see “Timeline Outline View”
[2]    In reality, most discoveries hinge on quite a lot of work and learning that prefigured them and made them possible, as discussed for instance by Tony Rothman in Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). That what matters most is not the act of discovery but the making widely known is the insight embodied in Stigler’s Law, that discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II: 39 [1980] 147–58)
[3]    Roslynn D. Haynes, From Faust to Strangelove: Representations of the Scientist in Western Literature, Johns Hopkins University Press, 1994; also “Literature has shaped the public perception of science”, The Scientist, 12 June 1989, pp. 9, 11
[4]    William Whewell is usually credited with coining the term “scientist” in the early 1830s

Posted in conflicts of interest, fraud in science, funding research, media flaws, peer review, science is not truth, scientific culture, scientists are human | 4 Comments »

Denialism and pseudo-science

Posted by Henry Bauer on 2018/03/31

Nowadays, questioning whether HIV causes AIDS, or whether carbon dioxide causes global warming, is often deplored and attacked as “denialism” or pseudo-science. Yet questioning those theories is perfectly good, normal science.

Science is many things, including a human activity, an institution, an authority, but most centrally science means knowledge and understanding. Pseudo-science correspondingly means false claims dressed up as though they were reliable, genuine science. Denialism means refusing to believe what is unquestionably known to be true.

Knowledge means facts; understanding means theories or interpretations; and an essential adjunct to both is methodology, the means by which facts can be gathered.

There is an important connection not only between methods and facts but also between facts and theories: Un-interpreted facts carry no meaning. They are made meaningful only when connected to a conceptual framework, which is inevitably subjective. That is typically illustrated by diagrams where the facts consist of black and white lines and areas whose meaning depends on interpretations by the viewer. Different observers offer different interpretations.

The meanings of these facts — black-and-white lines and areas — are supplied by the viewer:
A young lady with extravagant hair treatment facing left — OR an old crone looking downwards;
A duck facing left OR a rabbit facing right;
Twin black profiles looking at one another OR a white vase.

In science, researchers often differ over the interpretation of the evidence: the facts are not disputed but different theories are offered to explain them.

At any rate, in considering what science can tell us we need to consider the three facets of science: facts, methods, and theories [1]. Normal scientific activity is guided by established theories and applies established methods to enlarge the range of factual knowledge.
Every now and again, something unconventional and unforeseen turns up in one of those three facets of science. It might be a new interpretation of existing facts, as in the theory of relativity; or it might be the application of a novel method, as in radio-astronomy; or it might be the observation of previously unsuspected happenings, facts, for instance that atoms are not eternally stable and sometimes decompose spontaneously. When something of that sort happens, it is often referred to later as having been a scientific revolution, overturning what had been taken for granted in one facet of science while remaining content with what had been taken for granted in the other two facets.
The progress of science can be viewed as revolutions in facts, or revolutions in method followed by the gaining of possibly revolutionary facts, followed eventually by minor or major revisions of theory. Over a sufficiently long time — say, the several centuries of modern (post-17th-century) science — the impression by hindsight is of continual accumulation of facts and improvement of methods; the periodic changes in theoretical perspective are all that tends to be remembered by any but specialist historians of science.

(from “Why minority views should be listened to”)

The history of science also records episodes in which researchers proposed something novel simultaneously in two facets of science, for example when Gregor Mendel applied simple arithmetic to observations of plant breeding, an unprecedented methodology in biology that thereby uncovered entirely new facts. Another example might be the suggestion by Alfred Wegener in the early decades of the 20th century that the Earth’s continents must have moved, since the flora and fauna and geological formations are so alike on continents that are now far apart; making comparisons across oceans was an entirely novel methodology, and there was no theory to accommodate the possibility of continents moving. Episodes of that sort, where two of the three facets of science are unorthodox, have been labeled “premature science” by Gunther Stent [2]; the scientific community did not accept these suggestions for periods of several decades, until something more conventional showed that those unorthodox proposals had been sound.

When claims are made that do not fit with established theory or established methods or established facts, then those claims are typically dismissed out of hand and labeled pseudo-science. For example, claims of the existence of Loch Ness “monsters” involve unorthodox facts obtained by methods that are unorthodox in biology, namely eyewitness accounts, sonar echoes, photographs, and films, instead of the established way of certifying the existence of a species through the examination of an actual specimen; and the theory of evolution and the accepted fossil record have no place for the sort of creature that eyewitnesses describe.

In recent years it has been quite common to see dissent from established scientific theories referred to as “denialism”. The connotation of that term is not only that something is wrong but that it is reprehensibly wrong, that those who question the established view should know better, that it would be damaging to pay attention to them; moreover, that denying (for example) that HIV causes AIDS is as morally distasteful as denying the fact of the Holocaust, in which millions of Jews, Gypsies, and others were killed.

As Google N-grams for “denialism” indicate, until the last couple of decades “denialism” meant denying historical facts of genocide or something like it.

In the 1930s, “denialism” was applied to the refusal to acknowledge the millions of deaths in the Soviet Union caused by enforcement of collectivized agriculture and associated political purges, for example the 1932–33 Ukraine famine [3]. Holocaust denial was prominent for a while around 1970 but then faded from mention in books until it re-appeared in the late 1980s [4]. But soon “denialism” directed at questioning of HIV/AIDS theory and of the theory of carbon-dioxide-induced global warming swamped all other applications of the term.


This recent usage of “denialism” is consciously and specifically intended to arouse the moral outrage associated with denial of genocides, as admitted (for example) by the South African jurist Edwin Cameron [5]. But those genocides are facts, proved beyond doubt by the records of deaths as well as remains and various artefacts at concentration camps. By contrast, so-called “AIDS denialism” and so-called “climate-change denialism” or “global warming denialism” are the questioning or disputing of theories, not facts.

That questioning, moreover, is perfectly consonant with normal science:
⇒⇒   On the matter of whether HIV causes AIDS, dissidents do not question anything about established methods of virology, and they do not claim that HIV tests fail to measure proteins, antibodies, and bits of genetic material; they merely assert that the results of HIV tests do not fit the theory that HIV is an infectious agent, and that the methods used in HIV/AIDS research are not sound methods for studying viruses, since they have not been verified against experiments with authentic pure HIV virions derived directly from HIV+ individuals or from AIDS patients (The Case against HIV).
⇒⇒   On the matter of whether the liberation of carbon dioxide by the burning of fossil fuels is the primary cause of global warming and climate change (AGW, Anthropogenic Global Warming, and ACC, Anthropogenic Climate Change), those who question that theory do not question the facts about amounts of carbon dioxide present over time, and they do not question the changes that have taken place in temperatures; they merely point out that the known and accepted facts show that there have been periods during which carbon-dioxide levels were very high while temperatures were very low, and that during several periods when carbon-dioxide levels were increasing the Earth’s temperature was not increasing or was perhaps even cooling [6]. Furthermore, those who question AGW point out that the prime evidence offered for the theory is no evidence at all, merely the outputs of computer models that are supposed to take into account all the important variables — even as it is obvious that they do not, since those computer models do not reproduce the actual temperature changes that have been observed over many centuries.

Denialism means to deny something that is unquestionably true, but theories, interpretations, can never be known to be unquestionably true. Labeling as denialists those who question whether HIV causes AIDS, or those who question whether human-caused generation of carbon dioxide is the prime cause of global warming and climate change, is an attempt to finesse having to properly demonstrate the validity of those theories. Another attempt at such evasion is the oft-heard assertion that there is an “overwhelming consensus” on those matters. As Michael Crichton put it:
the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. . . . Consensus is invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E=mc2. Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way [7].

When the assertion of consensus does not suffice, then the ad hominem tactic of crying “denialism” is invoked: the last refuge of intellectual scoundrels who cannot prove their case by evidence and logic.

=================================================
[1]    I first suggested this in “Velikovsky and the Loch Ness Monster: Attempts at demarcation in two controversies”, in a symposium on “The Demarcation between Science and Pseudo-Science” (ed. Rachel Laudan), published as Working Papers of the Center for the Study of Science in Society (VPI&SU), 2 (#1, April 1983) 87-106. The idea was developed further in The Enigma of Loch Ness: Making Sense of a Mystery (University of Illinois Press, 1986/88; reprint, Wipf & Stock, 2012; pp. 152-3); see also Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other Heterodoxies (University of Illinois Press, 2001); Science Is Not What You Think (McFarland, 2017)
[2]    Gunther Stent, “Prematurity and uniqueness in scientific discovery”, Scientific American, December 1972, pp. 84–93
[3]    Described as the Holodomor
[4]    Holocaust Denial Timeline
[5]    Edwin Cameron, Witness to AIDS, I. B. Tauris, 2005; see book review in Journal of Scientific Exploration, 20 (2006) 436-444
[6]    Climate-change facts: Temperature is not determined by carbon dioxide
[7]    Michael Crichton, “Aliens cause global warming”, Caltech Michelin Lecture, 17 January 2003

 

Posted in consensus, denialism, global warming, media flaws, politics and science, science is not truth, science policy, scientific culture, scientific literacy, scientism, unwarranted dogmatism in science

Where to turn for disinterested scientific knowledge and insight?

Posted by Henry Bauer on 2018/02/11

The “vicious cycle of wrong knowledge” illustrates the dilemma we face nowadays: Where to turn for disinterested scientific knowledge and insight?

In centuries past in the intellectual West, religious authorities had offered unquestionable truth. In many parts of the world, religious authorities or political authorities still do. But in relatively emancipated, socially and politically open societies, the dilemma is inescapable. We accept that religion doesn’t have final answers on everything about the natural world, even if we accept the value of religious teachings about how we should behave as human beings. Science, it seemed, knew what religion didn’t, about the age of the Earth, about the evolution of living things, about all sorts of physical, material things. So “science” became the place to turn for reliable knowledge. We entered the Age of Science (Knight, 1986). But we (most of us) recognize that scientific knowledge cannot be absolutely and finally true because, ultimately, it rests on experience, on induction from observations, which can never be a complete reflection of the natural world; there remain always the known unknowns and the unknown unknowns.

Nevertheless, for practical purposes we want to be guided by the best current understanding that science can afford. The problem becomes: how is one to glean the best current understanding that science can offer?

Society’s knee-jerk response is to consult the scientific community: scientific associations, lauded scientists, government agencies, the scientific literature. What society hears, however, is not a disinterested analysis or filtering of what those sources say, because all of them conform to whatever the contemporary “scientific consensus” happens to be. And, as earlier discussed (Dangerous knowledge II: Wrong knowledge about the history of science), that consensus is inevitably fallible; yet the conventional wisdom is not on guard against that, largely because of misconceptions stemming from wholesale ignorance of the history of science.

The crux of the problem is that scientific knowledge and ideas that do not conform to the scientific consensus are essentially invisible in the public sphere. In any case, society has no mechanism for ensuring that what the scientific consensus holds at any given time is the most faithful, authoritative reflection of the available evidence and its logical interpretation. That represents a clear and present danger as “science” is increasingly turned to for advice on public policies, in an environment replete with claims of truth from many sides: people or organizations claiming to speak for religion or for science, as well as sophisticated advertisements by commercial and political groups.

In less politically partisan times, Congress and the administration had the benefit of the Office of Technology Assessment (OTA), founded in 1972 to provide policy makers with advice, as objective and up-to-date as possible, about technical issues; but OTA was disbanded in 1995 for reasons of partisan politics, and no substitute has been established. Society badly needs some authoritative, disinterested, non-partisan mechanism for analyzing, filtering, and interpreting scientific claims.

The only candidate so far on offer for that task is a Science Court, apparently first mooted half a century ago by Arthur Kantrowitz (1967) in the form of an “institution for scientific judgment”, soon named a Science Court by others (Cavicchi 1993; Field 1993; Mazur 1993; Task Force 1993). Such a Court’s sole mission would be to assess the validity of conflicting contemporary scientific and technical claims and advice.

The need for such a Court is most obvious in the context of impassioned controversy in the public arena, where political and ideological interests confuse and obfuscate the purely technical points, as happens nowadays over global warming (A politically liberal global-warming skeptic?). Accordingly, a Science Court would need complete independence, for which the best available model is the United States Supreme Court. Indeed, perhaps a Science Court could be managed and supervised by the Supreme Court.

Many knotty issues besides independence present themselves in considering how a Science Court might function: the choice of judges or panels or juries; the choice of issues to take on; the possibilities for appealing findings. For an extended discussion of such matters, see chapter 12 of Science Is Not What You Think and the further sources given there. But the salient point is this:

Society needs but lacks an authoritative, disinterested, non-partisan mechanism for adjudicating conflicting scientific advice. A Science Court seems the only conceivable possibility.

———————————————————–

Jon R. Cavicchi, “The Science Court: A Bibliography”, RISK — Issues in Health and Safety, 4 [1993] 171–8.

Thomas G. Field, Jr., “The Science Court Is Dead; Long Live the Science Court!” RISK — Issues in Health and Safety, 4 [1993] 95–100.

Arthur Kantrowitz, “Proposal for an Institution for Scientific Judgment”, Science, 156 [1967] 763–4.

David Knight, The Age of Science, Basil Blackwell, 1986.

Allan Mazur, “The Science Court: Reminiscence and Retrospective”, RISK — Issues in Health and Safety, 4 [1993] 161–70.

Task Force of the Presidential Advisory Group on Anticipated Advances in Science and Technology, “The Science Court Experiment: An Interim Report”, RISK — Issues in Health and Safety, 4 [1993] 179–88.

Posted in consensus, legal considerations, media flaws, politics and science, science is not truth, science policy, scientific culture, unwarranted dogmatism in science

Dangerous knowledge IV: The vicious cycle of wrong knowledge

Posted by Henry Bauer on 2018/02/03

Peter Duesberg, universally admired scientist, cancer researcher, and leading virologist, member of the National Academy of Sciences, recipient of a seven-year Outstanding Investigator Grant from the National Institutes of Health, was astounded when the world turned against him because he pointed to the clear fact that HIV had never been proven to cause AIDS and to the strong evidence that, indeed, no retrovirus could behave in the postulated manner.

Frederick Seitz, at one time President of the National Academy of Sciences and for some time President of Rockefeller University, became similarly persona non grata for pointing out that parts of an official report contradicted one another about whether human activities had been proven to be the prime cause of global warming (“A major deception on global warming”, Wall Street Journal, 12 June 1996).

A group of eminent astronomers and astrophysicists (among them Halton Arp, Hermann Bondi, Amitabha Ghosh, Thomas Gold, Jayant Narlikar) had their letter pointing to flaws in Big-Bang theory rejected by Nature.

These distinguished scientists illustrate (among many other instances involving less prominent scientists) that the scientific establishment routinely refuses to acknowledge evidence that contradicts contemporary theory, even evidence proffered by previously lauded fellow members of the elite establishment.

Society’s dangerous wrong knowledge about science includes the mistaken belief that science hews earnestly to evidence, and that peer review — the self-checking behavior of scientists — ensures that new evidence is considered as it comes in.

Not so. Refusal to consider disconfirming facts has been documented on a host of topics less prominent than AIDS or global warming: prescription drugs, Alzheimer’s disease, extinction of the dinosaurs, the mechanism of smell, human settlement of the Americas, the provenance of Earth’s oil deposits, the nature of ball lightning, the evidence for cold nuclear fusion, the dangers from second-hand tobacco smoke, continental-drift theory, risks from adjuvants and preservatives in vaccines, and many more; see for instance Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (Jefferson, NC: McFarland, 2012). And of course society’s officialdom, the conventional wisdom, and the mass media all take their cue from the scientific establishment.

The virtually universal dismissal of contradictory evidence stems from the nature of contemporary science and its role in society as the supreme arbiter of knowledge, and from the fact of widespread ignorance about the history of science, as discussed in earlier posts in this series (Dangerous knowledge; Dangerous knowledge II: Wrong knowledge about the history of science; Dangerous knowledge III: Wrong knowledge about science).

The upshot is a vicious cycle. Ignorance of history makes it seem incredible that “science” would ignore evidence, so claims to that effect on any given topic are brushed aside, precisely because it is not known that science has routinely ignored contrary evidence. Yet that fact can be recognized only by noting the accumulation of individual topics on which evidence has been ignored. That is the vicious cycle.

Wrong knowledge about science and the history of science impedes recognizing that evidence is being ignored in any given actual case. Thereby radical progress is nowadays greatly hindered, and public policies are misled by flawed interpretations enshrined as the scientific consensus. Society has succumbed to what President Eisenhower warned against (Farewell speech, 17 January 1961):

in holding scientific research and discovery in respect, as we should,
we must also be alert to the equal and opposite danger
that public policy could itself become the captive
of a scientific-technological elite.

The vigorous defense of established theories and the refusal to consider contradictory evidence mean that once theories have been widely enough accepted, they soon become knowledge monopolies, and support for research establishes the contemporary theory as a research cartel (“Science in the 21st Century: Knowledge Monopolies and Research Cartels”).

The presently dysfunctional circumstances have been recognized only by two quite small groups of people:

  1. Observers and critics (historians, philosophers, sociologists of science, scholars of Science & Technology Studies)
  2. Researchers whose own experiences and interests happened to cause them to come across facts that disprove generally accepted ideas — for example, Duesberg, Seitz, and the astronomers cited above. But these researchers recognize the unwarranted dismissal of evidence only in their own specialty, not as a general phenomenon (see my talk, “HIV/AIDS blunder is far from unique in the annals of science and medicine”, at the 2009 Oakland Conference of Rethinking AIDS; the mov file can be downloaded at http://ra2009.org/program.html, though streaming from there does not work).

Such dissenting researchers find themselves progressively excluded from mainstream discourse, and that exclusion makes it increasingly unlikely that their arguments and documentation will gain attention. Moreover, frustrated by the lack of attention from mainstream entities, dissenters from a scientific consensus find themselves increasingly listened to and appreciated only by people outside the mainstream scientific community, to whom the conventional wisdom also pays no attention: for instance parapsychologists, ufologists, and cryptozoologists. Such associations, and the conventional wisdom’s consequent assigning of guilt by association, further entrench the vicious cycle of dangerous knowledge that rests on accepting contemporary scientific consensuses as not to be questioned; see chapter 2 in Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth and “Good Company and Bad Company”, pp. 118-9 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland, 2017).

Posted in conflicts of interest, consensus, denialism, funding research, global warming, media flaws, peer review, resistance to discovery, science is not truth, science policy, scientific culture, scientism, scientists are human, unwarranted dogmatism in science

Dangerous knowledge II: Wrong knowledge about the history of science

Posted by Henry Bauer on 2018/01/27

Knowledge of history among most people is rarely more than superficial; and the history of science is even less well known than general (political, social) history. Consequently, what many people believe they know about science is typically wrong and dangerously misleading.

General knowledge about history, the conventional wisdom about historical matters, depends on what society as a whole has gleaned from historians, the people who have devoted enormous time and effort to assemble and assess the available evidence about what happened in the past.

Society on the whole does not learn about history from the specialists, the primary research historians. Rather, teachers of general national and world histories in schools and colleges have assembled some sort of whole story from all the specialist bits, perforce taking on trust what the specialist cadres have concluded. The interpretations and conclusions of the primary specialists are thus filtered and modified by second-level scholars and teachers, so that what society as a whole learns about history is a sort of third-hand impression of what the specialists have concluded.

History is a hugely demanding pursuit. Its mission is so vast that historians have increasingly had to specialize. There are specialist historians of economics, of mathematics, and of other aspects of human culture; and there are historians who specialize in particular eras in particular places, say Victorian Britain. Written material still extant is an important resource, of course, but it cannot be taken at face value; it has to be evaluated for the author’s identity and for clues as to bias and ignorance. Artefacts provide clues, and various techniques from chemistry and physics help to discover dates or to test putative dates. What further makes doing history so demanding is the need to capture the spirit of a different time and place, an holistic sense of it; on top of which the historian needs a deep, authentic understanding of the particular aspect of society under scrutiny. Doing economic history, for example, calls not only for a good sense of general political history but also for a good understanding of the whole subject of economics itself in its various stages of development.

The history of science is a sorely neglected specialty within history. There are History Departments in colleges and universities without a single specialist in the history of science, which entails that many of the people who teach general history, or political or social or economic history, or the history of particular eras or places, at both school and college levels, have never themselves learned much about the history of science, not even as it impinges on their own specialty. One reason for the incongruous place, or lack of a place, of the history of science within the discipline of history as a whole is the need for historians to command an authentic understanding of the particular aspect of history that is their special concern. Few if any people whose career ambition was to become historians have the needed familiarity with any science; so a considerable proportion of historians of science are people whose careers began in a science and who later turned to history.

Most of the academic research in the history of science has been carried on in separate Departments of History of Science, or Departments of History and Philosophy of Science, or Departments of History and Sociology of Science, or in the relatively new (founded within the last half-century) Departments of Science & Technology Studies (STS).

Before there were historian specialists in the history of science, some historical aspects were typically mentioned within courses in the sciences. Physicists might hear bits about Galileo, Newton, Einstein. Chemists would be introduced to thought-bites about alchemy, Priestley and oxygen, Haber and nitrogen fixation, atomic theory and the Greeks. Such anecdotes were what filtered into general knowledge about the history of science; and the resulting impressions are grossly misleading. Within science courses, the chief interest is in the contemporary state of known facts and established theories; historical aspects are mentioned only insofar as they illustrate progress toward ever better understanding, yielding an overall sense that science has been unswervingly progressive and increasingly trustworthy. In other words, science courses judge the past in terms of what the present knows, an approach that the discipline of history recognizes as unwarranted, since the purpose of history is to understand earlier periods fully, to know the people and events in their own terms, under their own values.

*                   *                   *                  *                    *                   *

How to explain that science, unlike other human ventures, has managed to get better all the time? It must be that there is some “scientific method” that ensures faithful adherence to the realities of Nature. Hence the formulaic “scientific method” taught in schools, and in college courses in the behavioral and social sciences (though not in the natural sciences).

Specialist historians of science, philosophers and sociologists of science, and scholars of Science & Technology Studies all know that science is not done by any such formulaic scientific method, and that the development of modern science owes as much to the precursors and ground-preparers as to such individual geniuses as Newton and Galileo; Newton, by the way, was so fully aware of that as to have offered the modest “If I have seen further it is by standing on the shoulders of giants” mentioned in my previous post (Dangerous knowledge).

*                     *                   *                   *                   *                   *

Modern science cannot be understood, cannot be appreciated, without an authentic sense of the actual history of science. Unfortunately, for the reasons outlined above, contemporary culture is pervaded partly by ignorance of and partly by wrong knowledge about the history of science. In elementary schools, in high schools, and in college textbooks in the social sciences, students are mis-taught that science is characterized, defined, by use of “the scientific method”. That is simply not so: see chapter 2 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland 2017) and the sources cited there. The so-called scientific method is an invention of philosophical speculation by would-be interpreters of the successes of science; working scientists never subscribed to this fallacy. See for instance Reflections of a Physicist (P. W. Bridgman, Philosophical Library, 1955), or the physicist David Goodstein in 1992: “I would strongly recommend this book to anyone who hasn’t yet heard that the scientific method is a myth. Apparently there are still lots of those folks around” (“this book” being my Scientific Literacy and the Myth of the Scientific Method).

The widespread misconception about the scientific method is compounded by the misconception that the progress of science has been owing to individual acts of genius by the people whose names are common currency — Galileo, Newton, Darwin, Einstein, etc. — whereas in reality those unquestionably outstanding individuals were not creating out of the blue but rather placing keystones, putting final touches, synthesizing; see for instance Tony Rothman’s Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). The same insight is expressed in Stigler’s Law: discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II, 39 [1980] 147–58).

That misconception, that science progresses by lauded leaps of applauded geniuses, is highly damaging, since it hides the crucially important lesson that the acts of genius we praise in hindsight were vigorously, often even viciously, resisted by their contemporaries, by the contemporary scientific establishment and scientific consensus; see “Resistance by scientists to scientific discovery” (Bernard Barber, Science, 134 [1961] 596–602); “Prematurity and uniqueness in scientific discovery” (Gunther Stent, Scientific American, December 1972, 84–93); and Prematurity in Scientific Discovery: On Resistance and Neglect (Ernest B. Hook (ed.), University of California Press, 2002).

What is perhaps most needed nowadays, as the authority of science is invoked in so many aspects of everyday affairs and official policies, is clarity that any contemporary scientific consensus is inherently and inevitably fallible; and that the scientific establishment will nevertheless defend it zealously, often unscrupulously, even when it is demonstrably wrong.

 

Recommended reading: The historiography of the history of science, its relation to general history, and related issues, as well as synopses of such special topics as evolution or relativity, are treated authoritatively in Companion to the History of Modern Science (eds.: Cantor, Christie, Hodge, Olby; Routledge, 1996) [not to be confused with the encyclopedia titled Oxford Companion to the History of Modern Science, ed. Heilbron, Oxford University Press, 2003].

Posted in consensus, media flaws, resistance to discovery, science is not truth, scientific culture, scientific literacy, scientism, scientists are human, the scientific method, unwarranted dogmatism in science

Science is broken: Illustrations from Retraction Watch

Posted by Henry Bauer on 2017/12/21

I commented before about Science is broken: Perverse incentives and the misuse of quantitative metrics have undermined the integrity of scientific research. On 18 December the magazine The Scientist published “Top 10 Retractions of 2017 — Making the list: a journal breaks a retraction record, Nobel laureates Do the Right Thing, and Seinfeld characters write a paper”, compiled by Retraction Watch. It should be widely read and digested for an understanding of the jungle of unreliable stuff nowadays put out under the rubric of “science”.

See also “Has all academic publishing become predatory? Or just useless? Or just vanity publishing?”

 

Posted in conflicts of interest, fraud in medicine, fraud in science, media flaws, science is not truth, scientific culture, scientists are human

 