In his essay on the state of science [“Saving Science,” Spring/Summer 2016], Daniel Sarewitz pulls no punches. He takes exception to Vannevar Bush’s 1945 claim that “Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” To Sarewitz, this “beautiful lie” has corrupted the scientific enterprise by separating it from the technological problems that have been responsible since the Industrial Revolution for guiding science “in its most productive directions and providing continual tests of its validity, progress, and value.” “Technology keeps science honest,” Sarewitz claims, and without it science has run the risk of being “infected with bias,” and now finds itself in a state of “chaos” where “the boundary between objective truth and subjective belief appears, gradually and terrifyingly, to be dissolving.”
Those are bruising, emotive words. Sarewitz certainly has some important points to make about the interaction of science with the outside world, but he seems blinded by his anger, and that has dulled his analytical edge.
Sarewitz is quite right to draw attention to the complex interplay between science and technology, and to the energizing effects on science of the demands of governments, industry, and commerce to make better technology — interactions that are probably underappreciated in some scientific quarters. He raises valid questions about the publish-or-perish culture within science that yields much uncited work and a growing harvest of results of questionable reliability. And his challenge to the tendency in biomedical research sometimes to fixate on exploring phenomena in model systems at the expense of progress in clinical research hits some valid targets.
In the end, however, Sarewitz overplays his hand. Technology has certainly been a powerful driving force in scientific productivity. Yes, technology can keep science honest because there is no better test than a product, process, or medical treatment that just works. In Sarewitz’s telling, curiosity-driven research has produced only two fundamental advances of transformational power in the last century or so: quantum mechanics and genomics. But this account overlooks blue-skies breakthroughs such as antibiotics, plate tectonics, nuclear fission and fusion, the X-ray methods that cracked the structures of DNA and proteins, monoclonal antibodies, RNA interference, and (to look slightly further back) the theory of evolution. At the same time, he underplays the stringency of the reality check that experiment and observation place on the free play of free intellects. It seems to me that both roads make for interesting journeys, though it’s hard to decide which is truly the more rewarding.
I am happy to defend Sarewitz’s right to question how far scientists should be permitted to roam free from the demands of the societies that fund them, even if I can’t accept his prognosis. Sarewitz argues that science needs to be managed, but, beyond a couple of examples that both involve management by the military, he doesn’t say how.
The management of science is quite properly a preoccupation of governments, even if it raises perennially contentious issues of freedom and responsibility for the research community. But Sarewitz’s prescription of management by technology to keep science honest is too simplistic, for reasons that emerge — perhaps unconsciously — in his discussion of “trans-science.” To Sarewitz, trans-science is research into questions about systems that are too complex for science to answer — things like the brain, a species, a classroom, the economy, or the climate. Missing from this list are science itself and the social, political, and industrial ecosystem in which it operates. Unarguably, these are issues and phenomena of huge complexity and importance.
So, how do we move forward to figure out how best to make science work? Polemic is a great method for stirring debate, but a poor one for achieving resolution. I suggest on all sides we proceed by respecting the evidence, acknowledging our limitations, and renewing our determination to improve the connections between science and the world beyond laboratory walls.
Stephen Curry
Professor of Structural Biology
Department of Life Sciences
Imperial College London
It is more than a passing irony that I used the World Wide Web to read Daniel Sarewitz’s polemic on how the direction of scientific research should be driven by its “real world” application. The web was invented not to solve a perceived problem in mass communication, but rather as an incidental by-product of the most abstruse particle physics research at CERN, with the aim of addressing only the information-sharing needs of academics. In reading the article, I was also using a high-speed wireless device that makes extensive use of technology developed by Australian radio astronomers who were interested in processing confused faint signals from the depths of space, not in creating Wi-Fi. The list of such spin-offs from basic undirected research is long, yet none featured in Professor Sarewitz’s discussion, and his sweeping assertion that “technology led; science followed” (actually made in the context of the World Wide Web) is simply untrue.
It is not, as Professor Sarewitz asserts, a “beautiful lie … that scientific imagination gives birth to technological progress, when in reality technology sets the agenda for science.” Within my own department, the late Sir Peter Mansfield’s Nobel Prize–winning work on magnetic resonance imaging did not have its agenda set by existing technology; rather, through his scientific imagination he was laying the foundations for an entirely new life-saving technology. This sequence of science driving imaging technology in medicine is by no means new: A century ago, the great physicist J. J. Thomson pointed to the use of X-rays to locate bullet fragments in the First World War, and inquired, “Now, how was this method discovered? It was not the result of a research in applied science starting to find an improved method of locating bullet wounds. This might have led to improved probes, but we cannot imagine it leading to the discovery of X-rays.”
The reality, of course, is that often there is a virtuous symbiosis, in which technology helps to push the boundaries of science while scientific breakthroughs open the way to entirely new technologies. But true technological innovation often relies on the purest of curiosity-driven science, in the least predictable ways — as Thomson’s son later noted, his father also pointed out “that if Government laboratories had been operating in the Stone Age we should have wonderful stone axes but no-one would have discovered metals!”
Michael Merrifield
Head of School
School of Physics and Astronomy
University of Nottingham
Like many others, we read Daniel Sarewitz’s article with interest.
On one point, we agree with him: that the close coupling between science and technology can be enormously beneficial — our own experience in the physical sciences and engineering has taught us this. Examples abound of virtuous cycles in which science and technology have fed each other and accelerated progress in both, including the Nobel Prize–winning and society-transforming scientific discovery of the transistor effect and technological invention of the transistor itself.
On another point, we don’t entirely agree with Sarewitz: that the close coupling between science and technology is always beneficial, and hence should be forced. Our experience is that the benefit is situational. Research policy prescriptions must allow for the flexibility to couple or not, as appropriate to the mission at hand and its stage of development. That fluidity is exemplified by the evolution of quantum mechanics as a knowledge domain: In its early years, it was driven primarily by intellectual curiosity; in its middle years, it was symbiotic with a wide range of technologies (including the transistor mentioned above); and, in its most recent years, it is entering a new stage of symbiosis with quantum information technology.
That said, we understand why it is tempting to argue for forced coupling.
One argument is long-standing: Because a common (though by no means the only) route by which science impacts society is through technology, close coupling would seem to increase the likelihood that new science will be useful to society. But, as said eloquently by Robert Merton, the distinguished social scientist of science:
Ideally that empirical object is selected for study which enables one to investigate a scientific problem to particularly good advantage. Often, these intellectually strategic objects hold little intrinsic interest, either for the investigator or anyone else…. It is not an intrinsic interest in the fruit fly or the bacteriophage that leads the geneticist to devote so much attention to them. It is only that they have been found to provide strategic materials for working out selected problems of genetic transmission.
In other words, technological usefulness cannot always be the criterion for choosing a particular object for scientific study. The forced coupling between science and technology that such a criterion represents can be counterproductive (as of course can be a forced separation between science and technology).
Another argument for the forced coupling of science and technology is newer: It provides a powerful cross-checking that would seem to minimize scientific knowledge that is “contestable, unreliable, unusable, or flat-out wrong,” as Sarewitz puts it. Technology is indeed often the ultimate real-world test of scientific understanding! But it is important to remember that, in its earliest stages, research always proceeds through a stage in which it is fraught with error, mistakes, and wrong turns. This is true even in the physical sciences and engineering, often thought of as the gold standard for science and engineering knowledge.
The geocentric universe, phlogiston, the luminiferous aether: all of these were not so much wrong turns as symptoms of early-stage exploration of difficult physical-science knowledge domains. The physical-, life-, and social-science knowledge domains that Sarewitz mentions — metastatic cancer, climate change, growth economics, dietary standards — are similarly (if not more) complex, and similar wrong turns can be expected. It is human nature to forget past errors made en route to current knowledge: As Thomas Kuhn argued, once a new paradigm has emerged, we become unable to see, much less remember, old and mistaken paradigms that we once believed. And, by forgetting that in now-more-mature knowledge domains we once made errors, we tend to believe that in less-mature knowledge domains we can avoid them. But 20/20 hindsight does not imply a newfound ability for 20/20 foresight.
Now, we do not mean to suggest that research processes, institutions, and policies cannot be improved. Perhaps one can increase the probability that research will be useful to society without undue harm to research itself; and perhaps one can avoid some wrong research turns while enhancing the low-probability but truly transformational research turns. These are grand, timely, and important challenges to the social scientists and engineers of research. In the meantime, we should try to meet those challenges with a nuance appropriate to the mission at hand and to its stage of development: Science and technology will at times benefit enormously from a close coupling, but at other times will benefit just as much from independent development.
Jeff Tsao
Semiconductor & Optical Sciences Group
Sandia National Laboratories
Venkatesh Narayanamurti
Benjamin Peirce Research Professor of Technology and Public Policy
Harvard University
Daniel Sarewitz’s essay advocates for science to become less curiosity-driven and more focused on solving practical (and especially technology-related) problems. Consider, in counterpoint, an episode of scientific discovery that began decades ago and is unfolding still today.
In the mid-1960s, Canadian scientists undertook a medical expedition to Easter Island focused on studying the island’s isolated population. A small part of this project involved collecting soil samples, initially to study why the islanders did not have tetanus, an infection elsewhere common in barefoot people living among horses. Almost no tetanus spores were found, and the samples were set aside and eventually transferred to Ayerst Pharmaceuticals, where a research program on natural antimicrobials needed soil samples.
There, in the early 1970s, researcher Suren Sehgal discovered a bacterium that produced a compound he named rapamycin (after Easter Island’s indigenous name, Rapa Nui). Despite having antifungal properties, the compound initially did not have a clear medical use, because it was found also to suppress the immune system. Thus it was slated for destruction, as the company’s Montreal lab was to be closed in 1983. Sehgal, still curious about his discovery, stored it in his freezer. Four years later he resumed research at the drug company Wyeth.
Since then, rapamycin has been found to prevent rejection of organ transplants in humans and has been studied for relevance to conditions ranging from cancer to Alzheimer’s disease. Much interest is now focused on its potential as a longevity drug. Needless to say, none of this was imagined by the scientists at Easter Island.
I am all for goal-directed research, but the attitude espoused in Sarewitz’s essay would never have led to the discovery of rapamycin. More broadly, it would foreclose any number of avenues of exploration that may have practical benefits that are totally invisible at the outset. We live in a world that does not always reveal its secrets in compliance with some bureaucratic program and timetable.
Kenneth Silber
Writer
Wyckoff, New Jersey
In his much-discussed article, Daniel Sarewitz makes a compelling argument that a totally unmoored scientific research sphere will eventually twist itself into knots. The dogged pursuit of statistically significant results, and ever-louder calls for “transformative” findings, push many scientists to conduct experiments that are of little scientific merit, and to grossly overstate the practical implications of whatever findings are scraped together. Furthermore, the need to publish these results in peer-reviewed journals, and the need to integrate them through citations into the dense thicket of scholarly literature, mean that we’ve managed to weave together quite a bramble of knowledge, little of it reliable or of practical value.
The solution proposed by Sarewitz is that science needs to be taken down from its unimpeachable seat in society, and research needs to be put back in touch with the community of users who depend on robust findings to address verifiable, tangible, important issues affecting daily life. This kind of specific and concrete goal would help to make research more accountable by applying some constructive pressures.
I think that what Sarewitz points to here is indeed an important problem, as a dense and self-referential network can drift far off into the void if it is content to moor itself only to overblown circumstantial findings and more (always more) self-reference, never putting its feet back onto the solid ground of practical application. Ultimately, the form of scientific research is undermining its function. Adam Briggle and Bob Frodeman diagnose an analogous issue within philosophical research, which, through embracing the institutional structure of academic disciplines and specialist journals, has managed to disconnect its conversations from one of the primary social roles that philosophy long occupied in society: that of the gadfly — the Socratic examiner and exposer of intellectual hubris.
Briggle and Frodeman diagnose this problem as one of “disciplinary capture,” a concept that is useful in understanding Sarewitz’s point. Disciplinary capture takes place when the structural and cultural features of a (scholarly) discipline overtake the process they are meant to guide, and ultimately undermine the outcome they are meant to bring about. Statistically significant and transformative research is wonderful for finding solutions to important problems, but the dogged pursuit of such things has completely displaced the goal of solving these problems, and in fact undermined much of the potential to find those solutions. This exemplifies disciplinary capture.
A related notion is “sectoral capture,” where a collaboration involving various sectors is dictated primarily by one of them rather than defined in genuine dialogue amongst the partners involved. For example, an industrial–academic partnership in which a peer-reviewed publication must result, at all costs, exemplifies sectoral capture. Sometimes a scholarly publication simply isn’t the appropriate output from such a collaboration, but insisting that one be produced immediately constrains the scope of the partnership, simply in virtue of its form (which is driven primarily by an academic concern, in this case).
Sarewitz points to the problems arising from the capture of scientific research, which has become enslaved to the p-value and the headline. The solution he proposes is to dissolve that capture by making science accountable to end users. So far, I agree with his point. However, in his rhetorical fervor, he sometimes comes off as suggesting that end users should be the only drivers of the research agenda — and whether that is in fact his point or this is simply an interpretation left open by his exposition, I think it’s worth clarifying the problems this prescription would pose. In brief, we would simply be exchanging one form of capture for another.
The first concern about this capture-swap is that scientific rigor is a darling child that should not be tossed out along with the bath water. Tim Caulfield never tires of his crusade against health advice spewed by Gwyneth Paltrow (and other celebs) because much of what is appealing to and adopted by non-experts is just bunk. Allowing end users to entirely dictate the terms of engagement with scientists would be catastrophic for the simple reasons that what’s convincing isn’t always true and that what companies (and governments) want is usually what sells rather than necessarily what works.
Sarewitz articulates well the problems of letting scientists call all the shots, prompting us to recognize that we need others involved in this process, too. What’s needed is an articulated vision of who else should be involved, and how the shot-calling should be negotiated by various parties rather than dictated by any one of them.
This note is too short for any full-blooded exposition, but I’ll part with a modest contribution by saying that I believe that end users need to be involved in setting research agendas, and even to participate with researchers in study design to find a balance between rigor and relevance. As for how we should be collaborating, true to my Socratic roots I maintain that we all need to be a bit more open to being wrong. Surrounding ourselves with a diverse group of people, and respecting them enough to actually listen, is an important step in recognizing our own preconceptions, identifying alternative views, examining them to develop novel insights, and using them to better our world.
Brooke Struck
Policy Analyst
Science-Metrix
Daniel Sarewitz characterizes Vannevar Bush’s idea that science advances through the “free play of free intellects” as a “bald-faced but beautiful lie.” According to Sarewitz, science arises in the trail of technological change and would be more productive if it were “steered” to solve problems important to society. Better steerage, he argues, will produce better science. These may be worthy propositions, but Sarewitz should not dismiss Vannevar Bush so readily.
During the Second World War, Vannevar Bush led what he called a “coordinated attack on special problems” by assembling academic scientists, industry experts, and gadgeteers to produce things that the military could not imagine or justify funding. Bush steered technology development by appropriating new, competitive science. The question Bush addressed in Science, The Endless Frontier was whether such an innovation engine was possible for civilian purposes. He argued that it was.
Bush’s innovation engine was to be fueled by new science, and this new science would be discovered outside of the study of problems defined by institutions. Bush proposed a National Research Foundation, led by a director accountable to the president, to support this frontier scientific research. The foundation would be hosted by universities, where there could be “free play of free intellects.” Institutional interests quickly pushed back. University administrators insisted that government funding of science should not be concentrated in a single agency; government agencies clamored to extend their own research programs.
Sarewitz insists that scientists should discover the new science they need by working on problems important to society. They just need to have a better sense of those problems and better steerage. Institutional science, then, would be more productive if it weren’t, well, so institutionalized. According to Sarewitz, scientists led by charismatic, organized non-scientists, or by scientists working outside of their capacity as scientists, will solve socially urgent problems sooner, producing the sound, testable science they need to get the job done.
Bush’s critique was not that such pragmatic, institutionally endorsed efforts would not produce science, even good science, but that such efforts were less likely to expand the frontiers of science. New science may arise when scientists work on institutional problems, but that science may not be on a frontier, and even when it is, scientists are not at liberty to chase after mystery — especially if charismatic and organized leaders outside of science prevent such wandering off.
Bush’s innovation engine, by contrast, draws not on science suggested by socially important problems, but rather on science shifted laterally from the context of its initial discovery: “Discoveries pertinent to medical progress have often come from remote and unexpected sources, and it is certain that this will be true in the future.” Is the idea of “remote and unexpected sources” also part of what Sarewitz calls Bush’s “seductive manipulation” about the “purity” of science? Or is it an argument against the dominance of science by institutions?
Sarewitz advocates for scientists to get out more and deal with practical problems, under the direction of most anyone other than the current institutional managers of science. One can see why. Institutional managers — even those with scientific training — are not accountable for outputs, and in turn they do not hold scientists accountable for outputs. Bush recognized the problem in Modern Arms and Free Men: “With the Federal government plunging into the support of research on an enormous scale there is danger of the encouragement of mediocrity and grandiose projects, discouragement of individual genius, and hardening of administrative consciences in the universities.”
Every worry that concerns Sarewitz is here: mediocrity, grandiose projects, and hardening of administrative consciences. But we also find a worry that people will disavow “the free play of free intellects” — that institutions will be considered better judges of what new avenues should be pursued than will the scientists doing the pursuit. Sarewitz’s contribution is that capable non-bureaucrats would be better institutional judges of new science than the present science administrators. That may be so. But the “free play of free intellects” is not about how best to manage institutional science. Rather, it argues for limits on institutional control of research. Free play of free intellects is not the problem with institutional science — it is an alternative, if not a remedy.
The problem for institutional science is that the managers of science are not accountable, not even visible. For institutional science, the problem is the free play of management intellects. For Vannevar Bush, when scientists get out, it is for the “untrammeled study of nature” with “complete freedom for the exercise of initiative” — free of management intellects, charismatic or otherwise. We might argue that science administrators need to be better trammeled rather than that Bush was a seductive manipulator. Untrammeled scientists may create new science that can spark technological innovation. So might better-trammeled institutional managers of science.
In short, Sarewitz wants different management to direct scientific discovery when what we need is less management. And these are testable proposals. We might find also, following Sarewitz’s argument, that when scientists, engineers, and gadgeteers care about getting something done, they abandon worthless but institutionally sanctioned work, and that makes all the difference.
Gerald Barnett
Writer and Editor
Research Enterprise blog
I broadly agree with the sentiments expressed by Daniel Sarewitz in “Saving Science” — particularly the notion that science seems extraordinarily productive by the typical managerial metrics of publication rates, while the reality tells a different story. In my own experience, the actual, mechanical aspects of even getting to a place where one can do science are confining. The Kafkaesque description E. O. Wilson offers of academic science, quoted in the article — “You will need forty hours a week to perform teaching and administrative duties, another twenty hours on top of that to conduct respectable research, and still another twenty hours to accomplish really important research…. Fail to discover, and you are little or nothing” — is frustratingly relatable. To climb out of poverty and come so close to achieving a lifelong goal (I am wrapping up my Ph.D. this year), only to see this as the prospect awaiting me, is an endless source of angst — not to mention the other problems my age cohort faces, which are not unrelated. I implore any reader, and especially any scientist, to take a step back and seriously assess the state of the scientific ecosystem, our political economy, and how scientists’ work shapes and, more likely, is shaped by these forces.
As an example of this, in the earlier days of my research experience, during my undergraduate years, I saw many physics labs, including my own, pivoting to graphene for the promise of grants and glory. Graphene was hyped and marketed heavily, and even at the time, watching the would-be Nobel Prize winners give a talk about graphene at the 2010 American Physical Society conference in Portland, Oregon, I felt the strong tug of skepticism and looming disappointment. It’s an interesting material and has served many purposes in developing research questions, and it may yet become an element of more important technologies, but the hype fits the profile of science without technology.
This is where I diverge from the tone of the article. The desire that young scientists have, the “hunger” invoked by Kumar, to have a greater impact on the world will, in this system, be contained. It will be contained by economic constraints and the very niches that we occupy to form the basis of our careers. Yes, agendas and goals are an excellent basis from which to embark on any complex task, but without scientists having resources and a secure position, the resolution of these goals will always be at risk of compromise. The institutions that better connect science and people through technologies will likely be non-profits dependent on an economic configuration that reproduces similar barriers.
These are systemic problems with a material basis. One problem is the material needs of scientists; another is the material needs of society. Sometimes market forces align with these needs — and in these cases, scientists may find institutions supportive of doing meaningful scientific work. But these needs are not always aligned. In such cases it helps to be independently wealthy or to have independently wealthy patrons. At that point, one may engage in free inquiry or address a specific problem within one’s own (or one’s patron’s) sphere of influence. But market forces tend to guide us elsewhere altogether.
Amazon’s Alexa, Juicero’s press, and Wi-Fi–connected salt shakers are non-solutions in search of profit. The burgeoning data-scientist profession is certainly in demand, but its function is very often to improve profits by way of targeted ads and predictive analytics. Guided by market forces, those who would solve society’s real material problems — climate change, food distribution, crop yields, energy production and distribution, and so on — are instead relegated to deciding which sponsored ads appear in your social media feed or squeezing additional profit out of a commodified health care system.
The material needs of the Department of Defense during a massive world war were also those of an advanced nation opening up new markets for exploitation. The war effort snatched up many young scientists who would go on to be supported by something approximating a command economy. The “products” yielded by this effort propelled newly linked markets and a new consumer electronics sector. That market forces and the material needs of both scientists and society were aligned is ultimately responsible for the productivity we see in that era. Now, however, we are brushing against some internal contradictions of our economy — particularly with climate change, which is effectively treated as an externality by market forces, and is thus incommensurate with market approaches. What commonly passes for a solution to the greenhouse gas issue is itself motivated by market arguments, but does not seriously grapple with the scale, supply chain, and other physical limits of the problem. In short, it is my opinion that market-logic constraints are incapable of decisive and lasting solutions except in moments of alignment, such as the alignment between the need for powerful communications tools and the development of the smartphone.
As for individuals, many of us would-be academic scientists will be forced by the contours of our institutions to spend years in post-docs or — even worse — adjunct positions, prostrating ourselves in economic misery in pursuit of meaningful work. We would do well to closely examine the economic configuration that produces institutional barriers that effectively prevent these “hungry” young scientists from ever achieving the positions and security necessary to embark on meaningful scientific endeavors, as opposed to playing a game that bears some responsibility for a science lost at sea.
Robert Stallard
Ph.D. candidate
Electrical and Computer Engineering
University of Louisville
Daniel Sarewitz responds: I am grateful to the letter-writers for further illuminating several fundamental issues that I raised in “Saving Science.” Their careful arguments merit some detailed response. To start with, it seems important to make clear that Vannevar Bush’s idyll of pure science as the foundation for technological and economic progress — what in “Saving Science” I call the “beautiful lie” — builds on two subsidiary beliefs. The first is that such progress depends above all upon research that is not directed toward practical problem-solving. The second is that scientists must, therefore, be free to work, in Bush’s words, “on subjects of their own choice, in the manner dictated by their curiosity.”
Thus it is unsurprising that many of these letters are concerned about what I have variously termed “managing” and “steering” science. Stephen Curry refers to my “prescription of management by technology.” Jeff Tsao and Venkatesh Narayanamurti write that I advocate a “forced coupling” between science and technology. Kenneth Silber says that I advocate “for science to become less curiosity-driven”; Gerald Barnett that I want “to direct scientific discovery”; and Brooke Struck that I sometimes appear to suggest “that end users should be the only drivers of the research agenda.” These correspondents echo in various degrees the position of chemist and philosopher Michael Polanyi: “You can kill or mutilate the advance of science, you cannot shape it. For it can advance only by essentially unpredictable steps, pursuing problems of its own, and the practical benefits of these advances will be incidental and hence doubly unpredictable.”
Yet the recent history of the complex U.S. science enterprise, which I sought in part to portray in my article, is a flat-out contradiction of Polanyi’s position: We consciously shaped science with the intent and result of capturing practical benefits. I presented several examples meant to provide a much richer, more diverse and nuanced account of “managing” and “steering” science than the one that these letters, or Polanyi’s admonition, seem to allow. The breast cancer research story tells of patient-advocates working closely with scientists to guide research choices toward high-risk, high-reward outcomes. The Defense Department environmental research case illustrates how high-quality fundamental research carried out in the context of problem-solving can outperform research left on its own. A. J. Kumar shows that an individual scientist can choose to be scientifically curious about research problems that both advance knowledge and address human suffering.
Through such examples, chosen to highlight a variety of creative approaches to organizing science, I sought to emphasize the very point that Professor Curry thinks I “perhaps unconsciously” neglect: that the scientific enterprise itself is too complex to be characterized in linear, invariant, prescriptive terms. That is precisely why Vannevar Bush’s formulation is so problematic: It posits a foundational motivation — the unfettered curiosity of the pure scientist — upon which the system should be built. And its implications are widely interpreted in terms of a standard, linear model of innovation in which unfettered science automatically leads to technology along pathways that are serendipitous and “doubly unpredictable.”
Several letters offer specific examples to bolster this view of things, yet on closer examination they don’t necessarily reveal what the authors appear to intend. Professor Curry mentions antibiotics, and of course Fleming’s discovery of penicillin is familiarly presented as a canonical instance of the serendipitous consequences of scientists pursuing their curiosity. But what was Fleming so curious about? His research at the time was focused on the practical problem of resistance to infection, so if his famous discovery illustrates the serendipitous results of curiosity, it is curiosity arising from the search for application, not for “pure” knowledge. “Fortune favors the prepared mind,” as Pasteur aptly noted.

Michael Merrifield objects to my observation that “technology sets the agenda for science,” and he wants to play “gotcha” with his example of the World Wide Web, which, he notes, “was invented not to solve a perceived problem in mass communication, but rather as an incidental by-product” of academic scientists’ need to share information. But this example actually supports the ideas presented in “Saving Science.” Scientists developed the ancestral Web to solve a practical problem — in this case, information-sharing among researchers working in many locations. The subsequent evolution of the Web from this early-stage niche application to a much broader, indeed transformational, role in society is actually a rather typical story, as the histories of steam engines (originally for powering pumps to drain coal mines) and radio (originally for point-to-point communication) well illustrate. Similarly, while Professor Merrifield notes how high-speed wireless draws on “technology developed by Australian radio astronomers who were interested in processing confused faint signals from the depths of space,” he might have traced the story still deeper into history to reveal that radio astronomy itself was a serendipitous outgrowth of research at Bell Labs aimed at reducing background noise in early overseas telephone service. Efforts to build a general case for a deterministic, linear path from pure science to technology often founder on such historical details.
Of course new science may lead to new technology. I wrote: “Scientists have discovered and probed phenomena that turned out to have enormously broad technological applications.” But the secret elixir of technological advance is not to be found in individual instances of scientific discovery, but in the complex institutional arrangements of national innovation systems such as the one that powered the U.S. Cold War effort.
Yet I do want to say something on behalf of the pure pursuit of scientific knowledge, of the “free play of free intellects” and the intrinsic social value of that endeavor. Alvin Weinberg, the physicist whose concept of “trans-science” I discuss in the article, also suggested that the “purest basic science” should — because it offers no direct predictable payback — be treated as “overhead,” funded by the government “as a fraction of the entire remaining technical enterprise.” What fraction? That is “a political decision … influenced in part by the public’s attitude towards science.” (In the United States, such attitudes are, common complaints of scientists notwithstanding, quite positive on the whole.) The unfettered quest for new knowledge about the universe, the planet, and ourselves is empowering and ennobling. It should be supported by governments and insulated from calls for relevance. Occasionally, it will open up wide new vistas for technological development. But it is an ineffective and ultimately self-destructive foundation for a public enterprise aimed at using science and technology to improve the human condition.
None of the letters seem to disagree with my motivating concern that both the quality and public value of science are now in alarming decline, although several seek to engage, amend, or offer alternatives to my explanation of the problem and my suggestions for reversing it. Jeff Tsao and Venkatesh Narayanamurti emphasize that “research always proceeds through a stage in which it is fraught with error, mistakes, and wrong turns.” They mention geocentrism and phlogiston — they might also have mentioned eugenics and phrenology, alchemy and astrology. They seem to be arguing that any and all fields of scientific research are plausibly just immature versions of the physical sciences, so that “wrong turns can be expected.” But the scale and pervasiveness of quality problems in many scientific fields today seem less about disciplinary immaturity than about something closer to its opposite: institutional senescence (Derek de Solla Price called it “senility”). The familiar indicators include overpublication, systemic positive bias, lack of reproducibility or empirical confirmation, brutal competition for funding and publication in “high-impact” journals, and large portions of the academic science community mired in low-caste post-doc and research or adjunct faculty positions. These indicators infect fields both mature and developing, from theoretical physics to genomics to epidemiology to cognitive and behavioral science.
Moreover, even if all emerging sciences must navigate error and uncertainty, the social and political dimensions of some areas of science are more palpable and intractable than others. The intercalation of science and politics today is far more pervasive than it has ever been in the past, due both to the scale of the enterprise and to the trans-scientific questions that science is increasingly expected to address. Self-correction occurs in most areas of human endeavor — politics, law, the marketplace, even the arts, not just science — but when areas of science are suffused with entrenched institutional interests, intractable uncertainties, and contested values and ideologies, self-correction may depend less on new research than on political resolution of underlying conflicts.
I take Gerald Barnett to be asking questions along these lines: Might the problems with scientific quality and public value that pervade the research enterprise today be the product of too much, rather than too little, management of science? Perhaps the real problem is insufficient “free play of free intellects” rather than an excess? On one level I agree. The obsessive focus on publications, grants, citations, impact factor, and other indicators of output and productivity, especially at universities, is bad for scientific quality and creativity regardless of whether one seeks to enable the “free play of free intellects” or accelerate the synergies between knowledge creation and practical problem-solving.
Brooke Struck comes closer to my own position in arguing that the goal of solving societal problems has sometimes been “captured” by the goal of scientific advance. We each view the involvement of end users in science policy processes as potentially important for avoiding such capture. In this regard I couldn’t agree more with his call for a “balance between rigor and relevance,” and want further to emphasize that achieving this balance is a problem of institutional design, some of whose key attributes are, I hope, illustrated by a number of the stories I tell in “Saving Science.”
The most radical and dispiriting take on science’s troubles comes from the trenches of academe, where Ph.D. student Robert Stallard suggests that the current political economy for university science is foreclosing the prospects for newly minted and future scientists alike. Certainly the evidence so far shows that the university system as a whole lacks the tools for collective decision-making that could substantially modify the trajectory of unsustainable growth, declining social productivity, and compromised quality that Mr. Stallard finds himself having to navigate. I understand him to be arguing that not only must science address the sorts of structural challenges I describe in “Saving Science,” but also that it must find new public purpose — and new patrons for pursuing such purpose — if future generations of scientists are to satisfy their hunger to advance knowledge while contributing to social betterment. This is a view that I strongly subscribe to, and that “Saving Science” was meant to help advance. Mr. Stallard’s experience and eloquence bring a passionate legitimacy to the discussion that needs to be heard. Professor Curry’s letter suggests that I am “blinded by [my] anger.” Robert Stallard shows why it might take the anger of young scientists to open our eyes to the stakes and consequences of allowing the gift of science to become corrupted.