
Dishonesty and dysfunction in science

Posted by Henry Bauer on 2012/12/16

The traditional (Mertonian) norms intended to describe the behavior of scientists during the 1st and 2nd ages of modern science included not only disinterestedness and organized skepticism but also “universalism” and “communalism”: scientific understanding as a freely shared public good, universal rather than local.

No more free sharing
The intrusion of politics and big money in the present-day 3rd age of modern science has effectively neutered that ideal of free sharing.

Secrecy for commercial purposes, including patenting, used to be restricted to industry and to “applied” science in general. But the distinction between pure and applied has eroded, and moreover universities — the traditional home of “pure” or “basic” research — have themselves become profit-seeking and patent-greedy. One consequence is that the sharing of information between researchers at universities has become subject to bureaucratic restrictions expressed in “Material Transfer Agreements” (Philip Mirowski, Science-Mart: Privatizing American Science, Harvard University Press, 2011).

Individuals as well as institutions have become secretive and wary of being scooped. In a notorious instance during the race to develop high-temperature superconductors, the author of a manuscript inserted incorrect information so that reviewers could not benefit from early knowledge of crucial details of the work; the correct information was restored only when the article reached the proof stage of publication (Robert M. Hazen, The Breakthrough: The Race for the Superconductor, Summit Books / Simon & Schuster, 1988).

Outright fraud
Deliberate dishonesty was rare during the 1st and 2nd ages of modern science. By 1980, however, instances had become sufficiently common that two science journalists could suggest that it was endemic within science: William Broad & Nicholas Wade, Betrayers of the Truth: Fraud and Deceit in the Halls of Science (Simon & Schuster, 1982). Their attempt to trace instances back over many centuries indicated, however, that fraud had actually been quite rare in times past, becoming disturbingly frequent only in modern times, and in biomedical matters in particular (book review, 4S Review, 1 [#3, Fall 1983] 17-23).

That dishonesty has become much more common in science during the last three decades can be amply demonstrated. For instance, in 1989 the National Academy of Sciences (NAS) felt it necessary to publish a booklet entitled On Being a Scientist. By 1995, the 2nd edition had added a subtitle to emphasize ethical behavior, On Being a Scientist: A Guide to Responsible Conduct in Research, and this edition was downloaded 850 times from the NAS Press website. Since the 3rd edition of 2009 there have been 40,000 downloads.

Also in the 1980s the National Institutes of Health found it necessary to establish an Office of Research Integrity (ORI; its name has changed several times over the years). ORI newsletters all too often have to report penalties imposed on individuals found to have been dishonest in grant applications or in other ways. Nowadays universities receiving NIH grants are required to provide courses in research ethics for their faculty and students, and many universities have set up their own offices of research integrity to ensure that their faculty and students are taught how to be honest in doing research. Such honesty is difficult to ensure, apparently, since there is a mushrooming industry carrying on research into research integrity: Centers for Research Ethics have sprung up at a number of universities, and there are grant opportunities for such scholarship — “Funding Opportunity Title: Research on Research Integrity (R21)”. Journals dedicated to the problem have of course been founded: Accountability in Research (volume 1 in 1989), Ethics in Science and Environmental Politics (volume 1 in 2001), Journal of Academic Ethics (since 2003), Research Ethics (since 2005), Journal of Empirical Research on Human Research Ethics (since 2006), and of course the International Journal of Internet Research Ethics, established in 2008. Dishonesty among PhDs and MDs has evidently become rampant.

Just how prevalent fraud has become in science is also illustrated by a proliferation not only of scholarly journals but also of news items, blogs, and websites concerned with the problem. Much of the media still find this astonishing: “A surprising upsurge in the number of scientific papers that have had to be retracted because they were wrong or even fraudulent has journal editors and ethicists wringing their hands” (emphasis added; New York Times, Editorial — Fraud in the scientific literature, 5 October 2012). Individual scientists come to recognize the problem not because it has become fully recognized within the scientific community but from unhappy personal experience (see e.g. the website Science Fraud: Highlighting Misconduct in Life Sciences Research). A few people, however, are recognizing that this points to systemic dysfunction; see e.g. Horace Freeland Judson, The Great Betrayal: Fraud in Science (2004), or Pete Etchells and Suzi Gage, “Scientific fraud is rife: it’s time to stand up for good science. The way we fund and publish science encourages fraud” (emphasis added; Guardian blog, 2 November 2012).

The crux of the matter is that too many would-be researchers are competing for inadequate resources, under burdensome demands from universities as well as commercial institutions that they get grants and make patentable discoveries. No amount of regulation, or education in ethics, can bring disinterested ethical behavior when all the incentives point the opposite way, urging speedy production of profitable outcomes that in the normal course of scientific work can never be guaranteed at all, let alone quickly.

Dogmatism and barriers to progress
Outright fraud is only the most obviously damaging feature of this 3rd age of modern science. The absolute necessity for researchers to obtain uninterrupted flows of grant money brings enormous pressure to be working along productive lines, not to be wrong. But the essence of research is to enlarge understanding, which means venturing into the unknown. By definition, the unknown is a mystery, and by easy extension the outcome of genuine research is not predictable. Surely every serious scientist has sometimes hit a dead end and made mistakes along the way; the very history of science is a story of trial and error. Therefore seeking to avoid any mistakes, or to take on only projects that are guaranteed to succeed, means restricting research to banalities.

Furthermore, if one nevertheless goes wrong, for instance by clinging too long to a superseded theory, the incentives are strong to resist acknowledging the mistake for as long as possible. Established leaders, who as a group control the available resources — grants, hiring, publishing — are well placed to stave off threats to the mainstream consensus. So contemporary science has also seen a marked increase in dogmatic adherence to outmoded approaches and interpretations; see Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth.

The problem is clear, the solution is not
I wish I could suggest remedies whose early introduction might be feasible. But a necessary first step is to understand what the problem is. No amount of research into research integrity is needed to recognize that the hothouse environment of cutthroat competition confronts would-be researchers with temptations that a significant proportion of them are unable to resist.

The system of scientific and medical research has become seriously dysfunctional. Perhaps my analogy of success rates in grant-getting with actual unemployment was somewhat forced (80% unemployment?! The research system is broken), but it is surely no exaggeration to describe the system as absurdly dysfunctional when senior researchers as well as would-be scientists have to construct 5 or 6 grant proposals for every one that succeeds. Instead of doing research, scientists spend huge amounts of time and effort on grant-writing (John P. A. Ioannidis, “Fund people not projects”, Nature 477 [2011] 529-31); and universities and other research institutions even employ grant-writing specialists to assist their scientists, providing marketing and public-relations skills to make the proposals appear more impressive.

The very system of project grants has become dysfunctional; for a cogently argued and documented discussion, see Donald W. Miller, Jr., “The government grant system: Inhibitor of truth and innovation?”, Journal of Information Ethics, 16 (2007) 59-69.
Half a century ago, it could seem appropriate to fund research in response to requests generated by scientists themselves. But as competition increased, attempts to judge competing requests led to increasingly inappropriate criteria; for instance, grant proposals are commonly expected to forecast the value of what the research will generate, even though everyone knows that the most valuable results come serendipitously and not necessarily in line with researchers’ aims or expectations.

The whole research enterprise has become too large, too bureaucratic, too thoroughly dysfunctional for its own good and for the public good.
