Why Science Can't Settle Political Disputes
Science can be a powerful political tool. Consider a hearing on climate change, “Data or Dogma,” held in 2015 by the then-chairman of the U.S. Senate’s Space, Science, and Competitiveness Subcommittee and soon-to-be presidential candidate Ted Cruz. In his introduction, Senator Cruz characterized himself as a defender of the truth writ large. As the son of two scientifically trained parents — both earned bachelor’s degrees in mathematics — he claimed that he was interested only in ensuring that public policy followed from “the actual science and the actual data.” In Cruz’s view, however, the actual data demonstrated that there was little to no threat from global climate change, contrary to the “alarmism” of mainstream climate scientists.
The notable feature of Cruz’s rhetoric during the hearing was not simply his appeal to scientific evidence but the underlying narrative of science under threat from politics. Ironically, Cruz framed his argument in much the same way that his political opponents do: extolling the benefits of open scientific debate and expressing the desire to move past the kinds of partisanship that can cloud a supposedly clear-eyed view of the facts. Yet in Cruz’s view it was mainstream climate scientists, not skeptics, who were guilty of stifling criticism and being overly wedded to a convenient political narrative. He depicted the political problem of anthropogenic climate change as one that could be overcome — fortuitously, in the policy direction he prefers — if people finally knew the real data: the “inconvenient” facts he presented at his hearing.
It would be much too easy to dismiss speeches like Cruz’s as nonsense, as examples of petty political theater that would not persuade a reasonable person. Yet such talk clearly resonates with some members of the public. In some corners of the internet, mainstream climate scientists are frequently portrayed as both politically biased and scientifically incompetent — “scamsters” using “pseudoscience” and science that “is not yet settled” to force unsound policy on the rest of the country. Such language is nearly identical to that used, from the other side, to describe climate “denialists.” It often seems as if each side of any contentious issue is absolutely convinced of being on the “right side” of science.
But what happens to politics when citizens begin to believe that science should settle public disagreements, and when rhetorical appeals to “the facts” start to dominate people’s justifications of their political worldviews and actions? Can science actually play the role people expect it to in settling contentious disputes?
The Politicizing of Science
It’s tempting to see the institution of science as pure and free of values. One viral online cartoon presents the scientific method and the “political method” as stark contrasts: The former asks, “Here are the facts. What conclusions can we draw from them?”; the latter asks, “Here’s the conclusion. What facts can we find to support it?” The word politics itself is most often uttered as a lament, with some distaste, especially when compared with the term science. Should we be surprised that levels of public trust in scientists are nearly the inverse of those in elected officials?
In line with popular opinion, a number of recent books portray the problem of politics and science as relatively straightforward: too much politics. Science historians Naomi Oreskes and Erik Conway, for instance, have documented the efforts of tobacco companies and fossil-fuel corporations to artificially inflate doubt over the existence of climate change, acid rain, ozone depletion, and the harmful effects of smoking. Business lobbies and allied scientists have established institutes dedicated to questioning inconvenient scientific results, seemingly not with the intention of improving knowledge but of manufacturing enough uncertainty to delay regulatory changes.
Thomas McGarity and Wendy Wagner similarly depict the pipeline of scientific knowledge into the policy sphere as being “contaminated” by outsiders. Industry-funded research, they show in their book, is sometimes explicitly designed to produce favorable results.
These scholars’ observations about how powerful groups sometimes intentionally muddy public debate and produce biased data are no doubt important. But such accounts tend to portray science as pure until tainted by corporate profit incentives, an image that is — ironically — at odds with much of the sociological and anthropological research on fact production itself. Science has always been political.
The science policy scholar Daniel Sarewitz subsumes the idea that scientific research is politically neutral — that is, guided purely by curiosity rather than by political need or cultural values — under the “myth of unfettered research.” Contrary to the popular image of scientists as monastic explorers of truth, science has been socially shaped and steered since its beginnings.
The ancient Greek scientist Archimedes’s role as a maker of war machines is paralleled today by the disproportionately high levels of funding awarded to fields such as physics, whose results are more readily applicable to new weapons systems. The rise in biomedical funding, likewise, came not because such research suddenly became more evocative of curiosity but because of both the desire to improve public health and high expectations for patentable, and hence profitable, new treatments. Recognizing these influences does not mean that all science funding is so narrowly motivated; it means that funding generally accrues to fields with powerful constituencies, economic or political, or with appeal to broader values such as national economic competitiveness, military prowess, and the curing of disease.
Furthermore, scientific results have been shaped by the traditional maleness of research. Male scientists generally excluded women from clinical trials, seeing their hormonal cycles as unnecessary “complications,” and understanding of how drugs affect women’s bodies has suffered as a result. Patriarchal ideals about feminine passivity led generations of biologists to ignore the active role that mammalian eggs play in conception, capturing and tethering sperm rather than acting, as one set of writers put it, like a passive “bride awaiting her mate’s magic kiss.” Moreover, because medical research is so often privately funded or profit-driven, diseases afflicting affluent and/or white people get more attention: More is spent on cystic fibrosis and erectile dysfunction than on sickle cell anemia and tropical diseases.
Values shape science at nearly every stage, from deciding which phenomena to study to choosing how to study and talk about them. As the philosopher Kevin C. Elliott explains in his engaging book on the subject, “A Tapestry of Values,” when environmental researchers started to refer to boggy areas as “wetlands” rather than as “swamps,” it helped persuade citizens that such places were not merely wasted space but served a purpose: supporting animal species, purifying groundwater, and improving flood control. Calling a chemical an “endocrine disruptor” rather than a “hormonally active agent” highlights its potential for harm. Scientists make value-laden choices every day regarding their terminology, research questions, assumptions, and experimental methods, and those choices can have political consequences. Wherever social scientists have looked closely, they have found elements of politics in the conduct of science.
Discerning exactly where science ends and politics begins is no simple matter. Although it is clear to most people that certain kinds of politics are problematic (e.g., sowing doubt in bad faith), other political influences go unquestioned (e.g., funding biases toward weapons and new consumer gadgets). The framing of science as pure until it is externally politicized therefore forestalls a broader debate over the question “What kinds of political influences on science are appropriate, and when?” Improving women’s and minorities’ ability to become scientists or directing science toward more peaceful or life-fulfilling purposes is political in a different sense than intentionally obscuring the harmful effects of smoking, but the myth that science is inherently apolitical prevents a full consideration of such distinctions. Despite its problems, the belief that the normal conduct of science is purified of politics dominates people’s political imaginations and drives attempts to scientifically “rationalize” policy. What are the effects of such attempts?
Scientizing Politics
Even though much of the public decries the explicit politicizing of science, few question the simultaneous effort to ensure that policymaking is “scientifically guided.” In a popular YouTube video from a few years ago, for instance, astrophysicist Neil deGrasse Tyson claims that America’s problems stem from the increasing inability of those in power to recognize scientific fact. Only if people begin to see that policy choices must be based on established scientific truths, according to Tyson, can we move forward with necessary political decisions.
The movement to scientize politics explicitly seeks to depoliticize contentious public issues — or at least to appear to do so. As one set of science policy scholars puts it, the hope is that putting scientific experts at the helm of policy decisions will “[clear] away the tangle of politics and opinion to reveal the unbiased truth.” The scientizing of politics thus relies on the same assumption as attacks on the explicit politicizing of science: that science and politics are totally distinct and that the former is nearly everywhere preferable to the latter.
Yet is there any reason to believe that scientized policy would be value-free? Upon reflection, it seems unreasonable to believe that any human endeavor, carried out as it is by imperfectly rational persons, could be. Even worse, the evidence that many published scientific studies are so poorly designed or conducted that their results cannot be reliably reproduced suggests that even evidence-based policy is not guaranteed to be guided by reality. There are plenty of reasons to doubt the visions expressed by Tyson and others who hope that science can provide a firm, indisputable basis for policymaking.
It can nevertheless be countered that even though values and politics play a role in research, science should still be a dominant means of settling public issues. No doubt having some scientific research when deciding a complex issue is preferable to having none at all. Moreover, even if scientific results are sometimes biased or even completely wrong, scientific institutions at least prize the effort to improve the quality of knowledge. It is difficult to quarrel with such a view in the abstract, but one still wonders exactly how dominant a role scientific expertise should play in politics. In what ways might scientized policy fail to deliver the goods?
One question on this point is “Whose expertise?” New York Department of Health scientists thought they were being more objective when they made conservative estimates about the potential risks from the toxic-waste dump lying underneath Love Canal, New York — designing their analyses to avoid falsely labeling the neighborhood as unsafe. But equally talented scientists allied with homeowners made the exact opposite assumption regarding the burden of proof. Who was being less objective?
People with advanced degrees, moreover, have no monopoly on insight. British physicists, for instance, did not bother to include local sheep farmers’ knowledge when investigating the consequences of fallout from the Chernobyl nuclear disaster for the local agricultural ecosystem. As a result, the scientists recommended actions that farmers considered absurd and out of touch with the realities of sheep farming, such as keeping the sheep penned and fed on straw rather than letting them forage. The scientists also failed to draw on farmers’ understanding of how water moves through their fields, and so neglected to measure where rainwater pooled and underestimated the degree of radioactive contamination.
Similarly, officials in Flint, Michigan, initially responded with eye rolls and snide remarks to residents’ complaints that exposure to the municipal water supply was causing health problems and skin issues, dismissing them as rushing to judgment on mere anecdotal evidence. But later investigation corroborated the residents’ worries, finding that changes in the water system had caused lead to leach from local pipes and had introduced elevated levels of trihalomethanes. Corrosion even created geographical inconsistencies in chlorination, raising the risk of fostering Legionella bacteria. Scientizing public controversies often prevents the recognition that people without science degrees can have important contributions to make.
Even experts from different scientific disciplines often see controversial phenomena in wildly different ways. Ecologists tend to be critical of genetically engineered (GE) crops, given that their field emphasizes the complexity, potential fragility, and interconnectedness of ecosystems; genetic engineers, in contrast, whose work centers on transforming organisms for human purposes, incline toward a more optimistic view of humanity’s ability to control transgenic species. It is simple enough to argue that science should guide policy, but things quickly become more complex once one recognizes that no single group of experts can give a complete and fair analysis of a problem.
Another question is “Can science even settle controversial disputes?” For instance, Sylvia Tesh, in her book “Uncertain Hazards,” describes how scientifically proving that a substance harms human bodies is often very difficult, if not impossible. For obvious ethical reasons, experimental tests are done only on nonhuman animals, whose reactions to different doses can differ wildly from humans’: Thalidomide, which even in minute amounts produces birth defects in humans, affects dogs only at large doses. Epidemiological studies of populations exposed to toxic substances are even messier: It is hard to measure accurately how much of a toxin people have been exposed to; information on harm (e.g., birth defects and cancer) is not always reliably or consistently reported; there are too many confounding factors (e.g., smoking, toxic exposure at other times and places); and exposed populations and increases in illness rates are often just small enough that, under standard measures of statistical significance, a relationship cannot be established.
Digging deeply into any contentious case furthermore exposes a wealth of conflicting scientific perspectives. Consider the controversy over “shaken-baby syndrome” — more recently dubbed “abusive head trauma.” Since the 1970s, it has been accepted within the medical community that a typical pattern of symptoms — including brain swelling and bleeding within brain membranes and retinas — can be produced only by abuse from a caregiver, not by other kinds of accidents or unrelated diseases. But the science behind shaken-baby diagnoses is increasingly contested, and critics warn that the diagnosis risks falsely putting grieving parents in prison for the accidental deaths of their children.
Critics contend that the decades of clinical studies undergirding the diagnosis of shaken-baby syndrome were prone to subjective biases and circular reasoning. Many cases were confirmed only by confession, which could have been extracted from caregivers through investigators’ threats. Others were considered “confirmed” simply on the basis of a medical examiner’s judgment of whether parents’ alternative explanations were “reasonable.” Whether the science is settled depends entirely on which set of imperfect methods and theories one decides to trust.
Sarewitz goes so far as to argue that science usually makes public controversies worse. His argument rests on the incompatibility between policy expectations and what science actually does: Decision makers expect certainty, whereas science is best at producing new questions. That is, the more scientists study something, the more they uncover additional uncertainties and complexities.
Moreover, can we be sure that scientizing does not introduce its own biases? The sociologist of science Abby Kinchy found that privileging fact-based assessments tends to push out nonscientific concerns. Controversy over GE crops is often narrowed to the likelihood of clear harm to the environment or human bodies. For many opponents, however, GE crops’ more worrisome consequences are economic, cultural, and ethical, stemming from the difficulty of keeping GE crops contained. Growers of traditional maize varieties have watched GE corn genes infiltrate their own varieties; farmers growing non-GE crops have been barred from saving seeds or have lost their certified-organic status when GE pollen or seeds contaminated their fields. Ignoring questions about the right of traditional societies or organic farmers to uncontaminated crops, simply for the sake of keeping the debate “rooted in science,” advantages biotechnology companies at the expense of other groups. Strictly scientizing debates over genetic engineering also forecloses a broader conversation about whether GE crops could fit into an alternative vision of agriculture oriented more toward small business than toward corporate business. Because socioeconomic concerns are not scientific matters, they go undiscussed.
The scientized debate over driverless cars is similarly biased. Proponents’ discourse focuses exclusively on predicting an automobile-based world without traffic accidents, and as a result they rarely consider that a more desirable world might be built around mass transit, walking, and biking rather than around more highly computerized cars. Scientizing policy privileges the dimensions of life that are easily quantifiable and renders less visible the ones that are not.
Scientized debates also tend to be biased in terms of who bears the burden of proof. Many large business lobbies have demanded that any regulation affecting their products be rooted in “sound science.” On its surface, that demand seems reasonable: Why would anyone not want regulations based on the best available scientific knowledge? The implication of “sound-science” policy, however, is that no restrictive regulation of an industry can be developed until harm is conclusively proved.
Scientists are often unable to provide firm answers, especially for complex physiological and environmental phenomena — and typically not on the time scales appropriate to policymaking. As a result, calls for “sound science” end up being a delaying tactic that advantages the industrial firms producing risky products at the potential expense of humans and nonhuman species, which may be needlessly exposed to potentially toxic substances during the decades spent “proving” harm.
Scientists first learned in the 1930s that bisphenol A, a component in many plastics, mimicked estrogen in mammalian cells, but it was not until 1988 that the U.S. Environmental Protection Agency began to regulate the use of bisphenol A and not until 2016 that manufacturers were finally pressured to remove it from their products. In such cases, scientizing policy allows one group of political partisans to frame their more precautionary opponents as antiscience and to portray the debate as already settled by “the facts” — namely, the lack of “conclusive” proof of harm. Absence of evidence is taken to be evidence of absence.
Motivated by the belief that science and the political world are entirely distinct, many citizens have come to see science as something to be isolated and insulated from explicit political influence, and politics as something to be almost entirely guided by scientific evidence. People act and talk as if a kind of apolitical scientific politics could steer controversial policy decisions, sidestepping differences in values or worldview. Such actions and talk are far from apolitical, however; they amount to a form of fanaticism. Political scientism starkly divides societies into friends and enemies, the enlightened and the ignorant. Just look at how political polarization over COVID science is spiraling out of control.
In a culture dominated by political scientism, citizens and policymakers forget how to listen, debate, and explore possibilities for compromise or concession with one another. Instead, we come to believe that our opponents only need to be informed of the “correct” facts or truths, harshly sanctioned, or simply ignored. No doubt there are cases where fanaticism may be justified, but political scientism risks turning every debate with a factual element into a fanatical one.
Taylor Dotson is Associate Professor of Social Science at New Mexico Institute of Mining and Technology and the author of “Technically Together” and “The Divide: How Fanatical Certitude Is Destroying Democracy,” from which this article is adapted.