Skepticism about science and medicine

In search of disinterested science


Dangerous knowledge II: Wrong knowledge about the history of science

Posted by Henry Bauer on 2018/01/27

Knowledge of history among most people is rarely more than superficial; the history of science is much less known even than is general (political, social) history. Consequently, what many people believe they know about science is typically wrong and dangerously misleading.

General knowledge about history, the conventional wisdom about historical matters, depends on what society as a whole has gleaned from historians, the people who have devoted enormous time and effort to assemble and assess the available evidence about what happened in the past.

Society on the whole does not learn about history from the specialists, the primary research historians. Rather, teachers of general national and world histories in schools and colleges have assembled some sort of whole story from all the specialist bits, perforce taking on trust what the specialist cadres have concluded. The interpretations and conclusions of the primary specialists are filtered and modified by second-level scholars and teachers. So what society as a whole learns about history as a whole is a sort of third-hand impression of what the specialists have concluded.

History is a hugely demanding pursuit. Its mission is so vast that historians have increasingly had to specialize. There are specialist historians of economics, of mathematics, and of other aspects of human cultures; and there are historians who specialize in particular eras in particular places, say Victorian Britain. Written material still extant is an important resource, of course, but it cannot be taken literally; it has to be evaluated in light of the author’s identity and for clues to bias and ignorance. Artefacts provide clues, and various techniques from chemistry and physics help to discover dates or to test putative dates. What further makes doing history so demanding is the need to capture the spirit of a different time and place, a holistic sense of it; on top of which the historian needs a deep, authentic understanding of the particular aspect of society under scrutiny. So doing economic history, for example, calls not only for a good sense of general political history but also for a good understanding of the whole subject of economics itself in its various stages of development.

The history of science is a sorely neglected specialty within history. Many History Departments in colleges and universities have no specialist in the history of science, which entails that many of the people who teach general, political, social, or economic history, or the history of particular eras or places, at both school and college levels, have never themselves learned much about the history of science, not even as to how it impinges on their own specialty. One reason for the incongruous place, or lack of a place, of the history of science with respect to the discipline of history as a whole is the need for historians to command an authentic understanding of the particular aspect of history that is their special concern. Few if any people whose career ambition is to become historians have the needed familiarity with any science, so a considerable proportion of historians of science are people whose careers began in a science and who later turned to history.

Most of the academic research in the history of science has been carried on in separate Departments of History of Science, or Departments of History and Philosophy of Science, or Departments of History and Sociology of Science, or in the relatively new (founded within the last half a century) Departments of Science & Technology Studies (STS).

Before there were specialist historians of science, some historical aspects were typically mentioned within courses in the sciences. Physicists might hear bits about Galileo, Newton, Einstein. Chemists would be introduced to thought-bites about alchemy, Priestley and oxygen, Haber and nitrogen fixation, atomic theory and the Greeks. Such anecdotes were what filtered into general knowledge about the history of science, and the resulting impressions are grossly misleading. Within science courses, the chief interest is in the contemporary state of known facts and established theories, and historical aspects are mentioned only insofar as they illustrate progress toward ever better understanding, yielding an overall sense that science has been unswervingly progressive and increasingly trustworthy. In other words, science courses judge the past in terms of what the present knows, an approach that the discipline of history recognizes as unwarranted, since the purpose of history is to understand earlier periods fully, to know about the people and events in their own terms, under their own values.

*          *          *          *          *          *

How to explain that science, unlike other human ventures, has managed to get better all the time? It must be that there is some “scientific method” that ensures faithful adherence to the realities of Nature. Hence the formulaic “scientific method” taught in schools, and in college courses in the behavioral and social sciences (though not in the natural sciences).

Specialist historians of science, philosophers and sociologists of science, and scholars of Science & Technology Studies all know that science is not done by any such formulaic scientific method, and that the development of modern science owes as much to the precursors and ground-preparers as to such individual geniuses as Newton and Galileo. Newton, by the way, was so fully aware of that as to have used the modest acknowledgment “If I have seen further it is by standing on the shoulders of giants” mentioned in my previous post (Dangerous knowledge).

*          *          *          *          *          *

Modern science cannot be understood, cannot be appreciated, without an authentic sense of the actual history of science. Unfortunately, for the reasons outlined above, contemporary culture is pervaded partly by ignorance of and partly by wrong knowledge about the history of science. In elementary and high schools, and in college textbooks in the social sciences, students are mis-taught that science is characterized, indeed defined, by use of “the scientific method”. That is simply not so: see Chapter 2 in Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland, 2017) and sources cited there. The so-called scientific method is an invention of philosophical speculation by would-be interpreters of the successes of science; working scientists have never subscribed to this fallacy. See for instance Reflections of a Physicist (P. W. Bridgman, Philosophical Library, 1955), or the physicist David Goodstein’s 1992 remark, “I would strongly recommend this book to anyone who hasn’t yet heard that the scientific method is a myth. Apparently there are still lots of those folks around” (“this book” being my Scientific Literacy and the Myth of the Scientific Method).

The widespread misconception about the scientific method is compounded by the misconception that the progress of science has been owing to individual acts of genius by the people whose names are common currency — Galileo, Newton, Darwin, Einstein, etc. — whereas in reality those unquestionably outstanding individuals were not creating out of the blue but rather placing keystones, putting final touches, synthesizing; see for instance Tony Rothman’s Everything’s Relative: And Other Fables from Science and Technology (Wiley, 2003). The same insight is expressed in Stigler’s Law, that discoveries are typically named after the last person who discovered them, not the first (S. M. Stigler, “Stigler’s Law of Eponymy”, Transactions of the N.Y. Academy of Science, II, 39 [1980] 147–58).

That misconception about science progressing by lauded leaps by applauded geniuses is highly damaging since it hides the crucially important lesson that the acts of genius that we praise in hindsight were vigorously, often even viciously, resisted by their contemporaries, their contemporary scientific establishment and scientific consensus; see “Resistance by scientists to scientific discovery” (Bernard Barber, Science, 134 [1961] 596–602); “Prematurity and uniqueness in scientific discovery” (Gunther Stent, Scientific American, December 1972, 84–93); Prematurity in Scientific Discovery: On Resistance and Neglect (Ernest B. Hook (ed.), University of California Press, 2002).

What is perhaps most needed nowadays, as the authority of science is invoked in so many aspects of everyday affairs and official policies, is clarity that any contemporary scientific consensus is inherently and inevitably fallible; and that the scientific establishment will nevertheless defend it zealously, often unscrupulously, even when it is demonstrably wrong.

 

Recommended reading: The historiography of the history of science, its relation to general history, and related issues, as well as synopses of such special topics as evolution or relativity, are treated authoritatively in Companion to the History of Modern Science (eds. Cantor, Christie, Hodge, Olby; Routledge, 1996) [not to be confused with the encyclopedia titled Oxford Companion to the History of Modern Science (ed. Heilbron; Oxford University Press, 2003)].



Dangerous knowledge

Posted by Henry Bauer on 2018/01/24

It ain’t what you don’t know that gets you into trouble.
It’s what you know for sure that just ain’t so.

That’s very true.

In a mild way, the quote also illustrates itself since it is so often attributed wrongly; perhaps most often to Mark Twain but also to other humorists — Will Rogers, Artemus Ward, Kin Hubbard — as well as to inventor Charles Kettering, pianist Eubie Blake, baseball player Yogi Berra, and more (“Bloopers: Quote didn’t really originate with Will Rogers”).

Such mis-attributions of insightful sayings are perhaps the rule rather than any exception; sociologist Robert Merton even wrote a whole book (On the Shoulders of Giants, Free Press 1965 & several later editions) about mis-attributions over many centuries of the modest acknowledgment that “If I have seen further it is by standing on the shoulders of giants”.

No great harm comes from mis-attributing words of wisdom. Great harm is being done nowadays, however, by accepting as sound much widely believed, supposedly scientific medical knowledge that just isn’t so; for example about hypertension, cholesterol, prescription drugs, and more (see works listed in What’s Wrong with Present-Day Medicine).

The trouble is that “science” was so spectacularly successful in elucidating so much about the natural world and contributing to so many useful technologies that it has come to be regarded as virtually infallible.

Historians and other specialist observers of scientific activity — philosophers, sociologists, political scientists, various others — of course know that science, no less than all other human activities, is inherently and unavoidably fallible.

Until the middle of the 20th century, science was pretty much an academic vocation not venturing very much outside the ivory towers. Consequently and fortunately, the innumerable things on which science went wrong in past decades and centuries did no significant damage to society as a whole; the errors mattered only within science and were corrected as time went by.

Nowadays, however, science has come to pervade much of everyday life through its influences on industry, medicine, and official policies on much of what governments are concerned with: agriculture, public health, environmental matters, technologies of transport and of warfare, and so on. Official regulations deal with what is permitted to be in water and in the air and in innumerable man-made products; propellants in spray cans and refrigerants in cooling machinery have been banned, globally, because science (primarily chemists) persuaded the world that those substances were reaching the upper atmosphere and destroying the natural “layer” of ozone that absorbs some of the ultraviolet radiation from the sun, thereby protecting us from damage to eyes and skin. For the last three decades, science (primarily physicists) has convinced the world that human generation of carbon dioxide is warming the planet and causing irreversible climate change.

So when science goes wrong nowadays, that can do untold harm to national economies, and to whole populations of people if the matter has to do with health.

Yet science remains as fallible as it ever was, because it continues to be done by human beings. The popular illusion that science is objective and safeguarded from error by the scientific method is simply that, an illusion: the scientific method describes how science perhaps ought to be done, but how it is actually done depends on the human beings doing it, all of whom sometimes make mistakes.

When I wrote that “science persuaded the world” or “convinced the world”, of course it was not science that did that, because science cannot speak for itself. Rather, the apparent “scientific consensus” at any given time is generally taken a priori as “what science says”. But it is rare that any scientific consensus represents what all pertinent experts think; and consensus is appealed to only when there is controversy, as Michael Crichton pointed out so cogently: “the claim of consensus has been the first refuge of scoundrels[,] … invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E=mc². Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way”.

Yet the scientific consensus represents contemporary views incorporated in textbooks and disseminated by science writers and the mass media. Attempting to argue publicly against it on any particular topic encounters the pervasive acceptance of the scientific consensus as reliably trustworthy. What reason could there be to question “what science says”? There seems no incentive for anyone to undertake the formidable task of seeking out and evaluating the actual evidence for oneself.

Here is where real damage follows from what everyone knows that just happens not to be so. It is not so that a scientific consensus is the same as “what science says”, in other words what the available evidence is, let alone what it implies. On any number of issues, there are scientific experts who recognize flaws in the consensus and dissent from it. That dissent is not usually mentioned by the popular media, however; and if it should be mentioned then it is typically described as misguided, mistaken, “denialism”.

Examples are legion. Strong evidence and expert voices dissent from the scientific consensus on many matters that the popular media regard as settled: that the universe began with a Big Bang about 13 billion years ago; that anti-depressant drugs work specifically and selectively against depression; that human beings (the “Clovis” people) first settled the Americas about 13,000 years ago by crossing the Bering Strait; that the dinosaurs were brought to an end by the impact of a giant asteroid; that claims of nuclear fusion at ordinary temperatures (“cold fusion”) have been decisively disproved; that Alzheimer’s disease is caused by the build-up of plaques of amyloid protein; and more. Details are offered in my book, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (McFarland, 2012). That book also documents the widespread informed dissent from the views that human-generated carbon dioxide is the prime cause of global warming and climate change, and that HIV is the cause of AIDS (on the latter, see the compendium of evidence and sources at The Case against HIV).

The popular knowledge that just isn’t so is, directly, the belief that whatever the scientific consensus happens to be can safely be accepted as true for all practical purposes. That mistaken knowledge can be traced, however, to knowledge that isn’t so about the history of science, for that history is a very long story of the scientific consensus being wrong and later modified or replaced, quite often more than once.

Further posts will talk about why the real history of science is so little known.

 


How Science Has Changed — notably since World War II

Posted by Henry Bauer on 2017/01/01

The way science is usually mentioned, including its history, seems to imply a fundamental continuity in the development of modern science from its origins around the 16th-17th centuries (Galileo, Newton) to the present time, via the understanding of heredity (Mendel, much later DNA), of evolution (Darwin, Lynn Margulis, many others), of atomic structure and chemical bonding, of relativity and quantum mechanics, and much else.

One can certainly discern a continuity in these discoveries and accumulations of facts and the development of ever-better, more encompassing explanations. But the nature of scientific activity — who does science and how they do it — is best understood not as a continuum over this period but as three clearly distinguishable stages in which the interaction of science with society as a whole is significantly different: what the social place of scientists is, how their work is supported, how the fruits of science are disseminated and how they are accepted (or not accepted) outside science itself.

To understand the role of science in today’s world it is essential to understand this history.

The birth of “modern” science is credited uncontroversially to “The” Scientific Revolution of the 17th century, but there is not equally general recognition that there have been three distinctly and significantly different stages of scientific activity since then.

In the first stage, a variety of people (clergy, craftsmen, aristocrats, entrepreneurs) were seeking to satisfy their curiosity about how the world works; truth-seeking was effectively in the hands of amateurs, people doing it for the sake of doing it, and for whom truth-seeking was the chief controlling interest. Missteps taken at this stage resulted chiefly from the inherent difficulty of making discoveries and from such human flaws as pride and avarice.

The second stage, roughly much of the later 19th century and first half of the 20th, saw science becoming a career, a plausible way to make a living, not unlike other careers in academe or in professions like engineering: respectable and potentially satisfying but not any obvious path to great influence or wealth. Inevitably there were conflicts of interest between furthering a career and following objectively where evidence pointed, but competition and collegiality served well enough to keep the progress of science little affected by conflicting career interests. The way to get ahead was by doing good science.

In the third and present stage, which began at about the middle of the 20th century, science faces a necessary change in ethos as its centuries-long expansion at an exponential rate has changed to a zero-sum, steady-state situation that has fostered intensely cutthroat competition. At the same time, the record of science’s remarkable previous successes has led industry and government to co-opt and exploit science and scientists. Those interactions offer the possibility for individual practitioners of science to gain considerable public influence and wealth, and that possibility tempts toward corruption. Outright fraud in research has become noticeably more frequent, and public pronouncements about matters of science are made not for the purpose of enlightenment on truths about the natural world but largely for self-interested bureaucratic and commercial motives. As a result, one cannot nowadays rely safely on the soundness of what authoritative institutions and individuals say about science.

For a full discussion with pertinent citations and references, see my article “Three Stages of Modern Science”, Journal of Scientific Exploration, 27 (2013) 505-13.
