Social Control of Technological Risks: The Dilemma of Knowledge and Control in Practice, and Ways to Surmount It

Abstract

Effective management of societal risks from technological innovation requires two types of conditions: sufficient knowledge about the nature and severity of risks to identify preferred responses; and sufficient control capacity (legal, political, and managerial) to adopt and implement preferred responses.  While it has been recognized since the 1970s that technological innovation creates a tension between the societal conditions supporting knowledge and those supporting control, the severity and character of this tension varies across issues in ways that inform understanding of what risks are managed more, or less, effectively and easily.  Risk management has tended to be more effective and less contentious for three types of issues: those in which causal mechanisms of risk are well identified even if associated technologies are novel; those in which precaution can be strongly embedded in capital stock and bureaucratic routines; and those in which risks are imposed by scientific research rather than subsequently deployed technology.  Issues for which risk management has tended to be more difficult and contentious also fall into three types: those involving newly identified causal mechanisms of risk; those in which risks are strongly determined by behavioral, social, or political responses to innovations; and those involving diffuse, expansive, and weakly defined technological capabilities.  Separately considering knowledge conditions and control conditions can help identify current and coming issues likely to pose the greatest risk-management challenges, and suggest potential routes to effective risk management, even under large-scale and disruptive innovation.


 

Introduction

In this Article, I discuss the conditions necessary for effective societal control of the risks posed by technology and technological innovation, focusing mainly on environmental and related health and safety risks.  The discussion is mainly at a conceptual and theoretical level, but I also draw on the experience of managing major environmental and safety risks over the past forty years, and include some discussion—albeit speculative—about current risk management challenges and controversies.

It is widely noted that technological innovation poses challenges to law and regulation.  It can do so directly, by generating new or increased risks, or indirectly, by undermining the regulatory practices or blurring the legal concepts and categories that underpin societal responses to risks.1

By focusing on innovation’s risks, I do not aim to deny its benefits.  The principal, intended, and most direct societal consequences of innovations are usually benefits.  This is why people and enterprises pursue and adopt them voluntarily, even enthusiastically, in pursuit of new products, services, and capabilities, as well as lower cost or better quality in existing products and production processes.  The presumption that innovation mostly brings societal benefits underlies long-standing commitments to public support for research and innovation, and long-held principles of freedom of inquiry and research—which in some nations are even constitutionally formalized, and in the United States are embedded in the implicit bargain for support and autonomy of research that has been in place since the early postwar era.2

Mostly benefits, yes, but not exclusively benefits.  Innovation can also bring societal risks and harms.  The typical structural pattern of societal effects, well illustrated by environmental effects, is that the benefits of innovation tend to be anticipated, concrete, immediate, and privately allocable, while its risks or harms tend to be unanticipated, diffuse, delayed, and collective.3

The challenges that innovation poses to law and regulation mostly arise from this structural disparity between the distribution of its benefits and burdens.  This is, for example, a major reason that the formalized endeavors of technology assessment (TA) and environmental impact assessment (EIA) have rarely succeeded at their aspiration to inform rational advance collective decisionmaking about risks, but have instead shifted to more tractable related questions, or degraded into largely procedural requirements.4  Absent the ability to apply confident advance knowledge to guide prospective control of risks, environmental regulation is often stuck in catch-up mode: It must chase after technologies, products, and production processes to control their harmful effects after they are already deployed at scale—and also after they have accreted economic and political interests committed to their continued use and expansion.

The societal conditions necessary for effective control of technological risks can be broadly understood in terms of two requirements: knowledge and control.  Effective control of risks requires sufficient knowledge of what the risks are, what activities influence them, and through what causal mechanisms.  This knowledge is necessary to determine what changes or interventions would reduce risks, and what costs or other effects those interventions would bring—including other risks they might create or exacerbate.  Effective risk control also requires the capability to make those interventions.  This capability combines elements of technical competency, administrative capacity, legal authority, and ability to build political support, with the precise form and mixture of these elements varying with the properties of the risk to be controlled, and the legal, institutional, and cultural setting.

The basic challenge of risk control is that the societal conditions supporting adequate knowledge, and those supporting adequate control, are often in tension with each other.  The best-known statement of this tension came from the British historian of science David Collingridge.5  His eponymous dilemma stated a structural contradiction between the conditions necessary for knowledge and for control, based on a normal chronological sequence in society’s relationship with new technologies and their effects.  Early in the development of new technologies, he argued, the gravity and character of their potential harmful effects are not well enough known to support effective regulation or control.  This is typically the case because early-stage technologies are labile, and their effects only become evident as they are embodied in artifacts, deployed, scaled up, and embedded in social and economic processes around which people and institutions adjust their behavior.  Later on, due to this experience and related advances in knowledge, the character of effects and risks becomes clear enough to reveal the nature of controls that would be desirable.  But by this time, effective control is obstructed by constituencies that have accreted around the technology with interests in its continuance.  In sum, you cannot effectively manage technological risks early because of limited knowledge, and you cannot manage them later because of limited control.

As a crisp theoretical proposition that aimed to cut through a murky and complex reality, Collingridge’s dilemma attracted a lot of attention.  In the decades since its articulation, it has been a perennial reference point for subsequent studies of technology and its societal control.  Yet the tension between knowledge and control is deeper and more systematic even than Collingridge’s statement of it, and other scholars have proposed similarly compelling representations that emphasize different aspects of the tension.  For example, where Collingridge’s formulation stressed chronological sequence, Ludwig, Hilborn, and Walters presented a similarly rich formalization that stressed explicit representation of uncertainty, an aspect of the problem that was only implicit in Collingridge’s formulation.6  Rather than a generic technology with associated risks, Ludwig and his colleagues focus on a natural resource such as a fishery subject to profitable human exploitation, for which the socially optimal or sustainable level of exploitation is uncertain.  They argue that such resources will be systematically over-exploited despite laws and institutions aiming to restrain this—and not just for the familiar collective-action reasons—because political forces press for optimistically biased resolution of uncertainties to favor higher exploitation.  They thus present a distinct causal mechanism from that highlighted by Collingridge, but one that represents a different view of the same basic tension between knowledge and control.

Taken in its starkest or most extreme form, any statement of the tension between knowledge and control—whether that of Collingridge, Ludwig, or others—can yield a condition of complete futility, a statement that adequate societal control of environmental impacts or other risks is impossible.  While this extreme claim is clearly refuted by historical experience, it is also clear that the tension is real, widespread, and instructive: It permeates attempts at societal management of technological risks, and provides a powerful framework to understand variation across issues in how effectively these are managed.  The next two Parts review broad risk-management experience in recent decades and use this framework to identify characteristics of issues in which risks have been more, and less, effectively managed.  Part III proposes approaches to manage the tension between knowledge and control in the context of current and coming risk-management challenges.

I.  How Have We Done? Attributes of Well-Managed Risks

Collingridge’s statement of the knowledge-control tension was based principally on experience of the 1960s and 1970s, anchored in studies of nuclear power and the major sources of air and water pollution, plus a deeper historical review of the development of the automobile.  But the experience of the several decades since then shows that the tension is not as disabling as its starkest formulations would suggest.  For environmental issues and other technological risks, the United States and other industrial democracies have achieved remarkable success, both at mobilizing technological advances to reduce previously known environmental burdens, and at limiting the environmental, health, and safety risks introduced by new technological advances.7

This success rebuts only the most extreme and categorical statements of the knowledge-control tension, not the broader and more nuanced claims that the tension hinders effective risk management.  But this contrary experience does suggest the need for a closer look at the interplay between the conditions of knowledge and control, and how these play out over time in real cases.  Looking across cases with some granularity, it is evident that the knowledge-control tension is widely present and is a useful analytic framework, but that both knowledge conditions and control conditions vary instructively across cases, and that this variation allows effective control of many technology-related risks.  Reviewing this experience, I propose that the instances of most effective control of environmental and other technology-related risks can be grouped into three broad types, each exhibiting specific characteristics that allow the basic tension to be, if not fully surmounted, then at least softened.

A.  Novel Technologies but Familiar Risk Mechanisms

Take another look at Collingridge’s chronological formulation of the dilemma.  The first stage, in which knowledge is too limited to identify potential risks or desired controls, presumes that whenever a technology is new or rapidly changing—and hence weakly understood—any harmful impacts or risks associated with it will also be weakly understood.  But this does not necessarily follow.  Innovations, and the artifacts and processes that embody them, combine elements of new and old, and whether a technology is disruptive, incremental, or familiar, many of its effects, including risks, are mediated by physical flows of material or energy that are observable, predictable, and well understood.  Obvious examples include energy demands of production or product use; air pollutants emitted from combustion; and materials used in production processes or products, or left behind as wastes.  Projecting and understanding the makeup and flows of these materials, and their subsequent fate and transport through the environment, is a separate matter from understanding the new technologies that generate them.  Even disruptive innovation does not disrupt the periodic table of the elements, or the physical, chemical, or biological processes that determine the fate and impacts of these flows.

Most of the major successes in managing environmental, health, and safety risks over the past forty years are of this type.  Whether the technologies involved were old, incrementally improved, or significantly innovative, the risks they imposed were driven by known flows of energy, material inputs, or pollutants, which in turn were subject to control by known regulatory instruments—including laws, prescriptive or performance regulations, incentive-based policies, and codes of conduct or other nonlegal control measures.

Note that my claim here is relatively narrow: For these issues, effective risk control is not fundamentally obstructed by the inability to understand risks, their causes, and potential responses early enough to exercise effective control.  This is not to say that enacting effective regulatory controls is politically easy.  The targets of regulation still have their champions, who resist controls and have varying degrees of political resources to support their resistance.  Advocates of control may need to surmount claims that the activity at issue is not a proper target for regulation—for instance, that it is too small to matter relative to other human or natural processes, or too socially valuable, or is being unfairly singled out to bear regulatory burdens; or that there are no acceptable alternatives to the activity, or no feasible or acceptable means of controlling the risk.  Even if the reality of risks and the need to control them is widely accepted, there is still substantial room for conflict over precisely how tightly and in what way to do so, which depends in part on quantitative characterization of benefits and costs of various levels of control.

Even in these “easy” cases, advocates of control often lose these fights in the first round or two, but the longer game is in their favor.  As the products or activities at issue grow in scale—for example, the number and size of sources of a given pollutant, the number of vehicle-miles traveled, aggregate industrial output—it becomes easier to win the argument that the scale and character of resultant risks warrant control.  Similarly, as sustained arguments over technological feasibility sharpen the questions and enable progress, as innovators demonstrate advances in particular control methods, and as related research advances, it becomes easier to win the argument that the required controls are feasible and can be achieved at moderate cost.8

Regulation thus often lags behind innovation, even for these cases of well-understood risks.  We put millions of dirty cars on the road first, and only then start cleaning them up.9  But control does catch up, so the great race between environment-burdening expansion (of people and their enterprises) and environment-benefiting innovation (reducing burdens per unit activity) remains near a dead heat, or slightly favors environmental progress.10
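The terms of this race can be expressed in a stylized accounting identity (a gloss added here for clarity, not a calculation from the cited sources).  Writing total environmental burden B as the product of aggregate activity A and burden per unit activity B/A, growth rates approximately add:

\[
B = A \times \frac{B}{A}, \qquad \frac{\Delta B}{B} \approx \frac{\Delta A}{A} + \frac{\Delta (B/A)}{B/A}.
\]

A near dead heat means the rate of decline in burden per unit activity roughly offsets the rate of growth in activity, so aggregate burdens stay roughly flat; environmental progress requires intensity to fall faster than activity grows.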

I am painting with a broad brush, and some caution is needed in calling these cases successes.  At any given time in this process, multiple forms of pollution and risk exceed what proponents of protection would prefer.  Moreover, the large-scale pattern of controls lagging behind the expansion of burdensome activities, then catching up, can only be counted a success if nature is sufficiently forgiving, in that those environmental processes bearing human disruptions do not experience sudden collapses or irreversibilities.

B.  Systems With Embedded Precaution

To the extent the cases discussed up to this point overcame the knowledge-control tension, they did so due to benign knowledge conditions.  Even if technologies were novel, their risk mechanisms were well enough understood to allow identification and political adoption of effective controls in time for adequate protection.

A second class of successful experiences in managing technology-related risks does not share these benign knowledge conditions.  These cases involve complex technological systems subject to a low chance of highly destructive failures.  Civilian nuclear power is the leading example of this class of low-probability, high-consequence technology risks, while civil aviation systems, large-scale electrical grids, and some other industrial systems provide related examples.

For these systems, reliable calculation and assessment of risk pathways is difficult and imperfect, due to the complexity of the systems and associated risk pathways, and also in many cases due to data limitations.  In line with Collingridge’s formulation, knowledge limitations are particularly acute following significant innovations to the systems, which change risk pathways in multiple interacting ways that are resistant to full analysis in advance.  Over time, working with stable or standardized systems, these problems attenuate as operating experience increases understanding of system performance.

Civilian nuclear power played a uniquely prominent role in thinking about risk management in the 1970s.  It was widely viewed as the leading example of catastrophic risks that defy normal methods of assessment and analysis.11  Collingridge saw it as the technology that most strongly exhibited his dilemma of knowledge and control.  Reflection on the extremity and supposed unmanageability of its risks was a fruitful stimulus both to new social theory (such as sociologist—and nuclear opponent—Charles Perrow’s theory of “normal accidents,” which holds catastrophic failures to be inevitable in complex, tightly coupled socio-technical systems),12 and to proposed social innovations in risk-management institutions, from the serious to the whimsical (such as physicist—and nuclear proponent—Alvin Weinberg’s thought experiment of the “nuclear priesthood”).13

Yet these cases—including, emphatically, civilian nuclear power—are also instances of successful risk management over the past forty years.  Civil aviation has been an extraordinarily safe activity for decades, with fatal accidents continuing their long-term decline year by year.14  And although nuclear power remains a source of sharp controversy and widespread fear, the actual realization of harms from its use—to the environment, health, or safety—is starkly at odds with the cataclysmic expectations expressed about it.  How can this be so?

In view of continuing dissent over the safety of nuclear power, I address this question—“how can this be so?”—in two ways: first, to demonstrate that it is empirically correct, and only then to inquire into the reasons why it is the case.  The safety record of civilian nuclear power operations is one of remarkable success.  There have been many operational failures with zero or tiny radiation release and inconsequential public exposures—just as there are many, meticulously tracked, sublethal failure events in civil aviation.  In the United States, there has been just one large-scale commercial reactor failure, the partial meltdown of the Three Mile Island reactor in Pennsylvania in 1979.15  This accident exposed the public to radiation, but at levels so low that the number of projected deaths is zero.16  Elsewhere in the world, the reactor failures following the earthquake and tsunami at Fukushima, Japan, in 2011 also resulted in public radiation exposures so low that projected premature radiation-induced deaths round to zero.17  Radiation releases at Fukushima were substantially larger than those at Three Mile Island, however, so these low exposures and zero projected radiation deaths reflect the aggressive evacuation of neighboring communities—with associated social disruption that was itself responsible for roughly fifty deaths of vulnerable ill and elderly people.18  Both these events reflected multiple failures and errors of equipment design and operation, compounded by adverse external conditions and bad luck.  Yet even with all these compounding factors, they caused no (or at worst, very few) public deaths.

In contrast, the one civilian nuclear failure that was at least locally catastrophic, the fire and meltdown at Chernobyl in 1986, did kill people.  Roughly thirty-five people died from trauma and acute radiation exposure shortly after the accident, and about 4,000 people out of the roughly five million receiving lower levels of radiation exposure are projected to die prematurely from radiation-induced cancers.19  Chernobyl was a perfect storm of interactions between dangerous design, lack of operator training, reckless operation, and make-matters-worse emergency response, which surely comes close to a worst possible case of dangers from nuclear power.  Yet its total death toll still represents a small fraction of that which occurs every year from respiratory illness in the United States due to coal-fired electrical generation—estimated at 13,200 per year in 2010, down from 23,600 in 2004 due to tighter controls.20  This comparison suggests the black-comedic thought experiment of replacing the U.S. coal generating fleet with Chernobyl-style nuclear reactors, operators, and emergency procedures, all imported from the pre-collapse Soviet Union.  After such a macabre change, the United States could suffer up to three Chernobyl-style disasters each year—a disaster rate much higher than the Soviet Union actually experienced with its dozen Chernobyl-type reactors—yet still save thousands of lives.
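A back-of-envelope tally, using only the figures cited above and thus merely illustrative, makes the comparison concrete.  Three Chernobyl-scale accidents per year would entail roughly

\[
3 \times (35 + 4{,}000) \approx 12{,}100
\]

immediate plus eventual projected deaths per year, versus 13,200 (2010 estimate) or 23,600 (2004 estimate) annual deaths attributed to coal-fired generation.  On these figures, the macabre swap comes out ahead by roughly a thousand lives per year against the lower coal estimate, and by roughly ten thousand against the higher one.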

Although the conditions for successful management of risks in these cases differ strongly from those in the first type, it is still instructive to apply the knowledge-control framework to understand the determinants of success.  Knowledge conditions are less favorable in these cases.  The risks in these cases arise not from known flows of material or energy associated with routine operations, but from high-severity, low-probability events with complex causal pathways.  The complexity of the systems hinders attempts to characterize and control risks with formal analytic methods or models.  At the same time, the low probabilities of the major risks of concern—and thus the rarity and the long return times of serious realized risks—hinder attempts to understand risks from operational experience, since most individuals and even institutions working with the systems will never experience a severe event.  These conditions attenuate to some degree over time, of course: As specific systems accumulate operational experience—including experience with lower-severity failures—their knowledge relevant to more severe failures improves.

But while these cases have less favorable knowledge conditions than the first type, their control conditions are more favorable, for several reasons.  First, it is possible to engineer multiple redundant safety levels into the systems—in equipment design, operating procedures, and training—so failures do not propagate to catastrophic outcomes.  Second, determinants of risk are concentrated in a relatively small number of facilities (fewer for nuclear power, more but with highly standardized design in the case of civil aviation), operated by technically competent organizations.  As a result, monitoring, control, and diffusion of learning from relevant experience are all built into the system.  Finally, the need for control of these risks is so widely recognized, and the dreaded nature of the potential outcomes so clear, that there is no significant political opposition to the general program of strictly controlling risks, merely some degree of technical and political disagreement over the details of how to do so.  The upshot is that weak knowledge conditions are offset by strong control conditions—rooted in institutional, political, and rhetorical factors—that allow strong, redundant controls.  Precaution is embedded throughout the system, in its capital stock and its organizations and routines.  Relative to a counterfactual in which the character and mechanisms of risks were known perfectly, the systems may even embed too much precaution, including excessively costly or ineffective controls.  But in the practical world of limited knowledge, risks are adequately controlled by this rough-and-ready, redundant approach, despite imperfect understanding of their causal conditions.

So for this type of risk, as for the first type, particular conditions soften the tension between knowledge and control, thereby allowing effective management of technological risks.  As with the first type, however, my characterization of these issues as successfully managed requires some limits and qualifications.  Successful realized experience—a low body count—does not authoritatively demonstrate that risks are well managed, particularly for systems whose risks lie toward the high-consequence, low-probability end of the distribution.  A few decades might not be long enough to tell—although with these systems, realized experience of a range of failures provides a fair degree of empirical foundation for the claim of effective management.  For nuclear power in particular, some risk pathways—in particular, risks related to disposal of high-level wastes (in those countries, including the United States, that have not yet implemented a sound solution to the problem), and risks related to nuclear materials diversion for weapons—are not yet subject to similarly precautionary controls, and thus remain the most serious unresolved risks.

C.  Existential Risks Posed by Scientific Research

A third group of technology-related risks that have been considered over the past few decades merits separate discussion.  These are existential risks—risks that include as possible outcomes fundamental threats to human civilization or survival.  They represent a more extreme form of the “embedded precaution” type discussed above in Part I.B, shifted further toward low-probability, high-consequence extreme outcomes.  As we shall see, they thus might be distinct from issues of the second type, and they might represent an additional set of examples of effective risk management.  Or they might not: Knowledge conditions might be so weak in these cases as to preclude even these second-order judgments.

Suggestions that human activities might carry such extreme risks have been raised three times in the past four decades.  On each occasion, the potential extreme risks arose from scientific research activities themselves, rather than from deployment of subsequent technologies.

The first two instances concerned risks posed by advances in life-science research, due to the prospect of extreme biological or ecological harm following potential release of new living materials into the environment.  The first case concerned the initial experiments in recombinant DNA (rDNA) research in the early 1970s.21  The second concerned viral “gain-of-function” studies conducted and proposed over the past three years, which sought to make certain highly lethal influenza viruses transmissible among humans or other mammals.22  In these cases, some relevant experts judged there to be real risks of catastrophic propagation of modified organisms, although other relevant experts disputed these suggestions.  The third case concerned the prospect of propagating quantum-scale events that could rapidly destroy the Earth and everything (and everyone) on it, potentially triggered by collisions of heavy nuclei occurring in new, high-energy particle accelerators then under construction at the Brookhaven National Laboratory on Long Island, and at the European Organization for Nuclear Research (CERN), near Geneva.23  As with the two proposed existential biological risks, a few relevant experts judged these risks sufficient to require serious assessment and a risk-informed decision about whether the research should be allowed to proceed, while others argued that the proposed risks were implausibly remote, or outright impossible.  In all three cases, the risks might have been extremely low in probability, or might have been unknowable: How you characterize these is strongly diagnostic of your philosophical stance regarding the meaning of uncertainty and its characterization in terms of quantitative probability—in particular, how much of a Bayesian you are.

For these risks, knowledge conditions are even weaker than for the second type.  It is instructive that all three cases concerned scientific research rather than technology or applications, since this is where the first-ever human manipulation of previously unknown or unperturbed natural processes typically occurs.  All three cases posit a scenario in which such an unknown natural process, once perturbed by human meddling, cascades out of control to a catastrophic outcome.  In each case, it was a priori impossible to authoritatively characterize the risks; relevant experts disagreed whether they merited serious consideration; and empirical methods based on analogies to better-known processes were unsatisfactory, providing at best only very weak upper bounds on the probability of catastrophic outcomes.
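One way to see why such analogy-based bounds are weak is the standard statistical “rule of three” (a generic illustration, not a method drawn from the assessments themselves).  If an analogous natural process has occurred N times without catastrophe, the observations alone support only an upper bound of roughly

\[
\Pr(\text{catastrophe per event}) \lesssim \frac{3}{N}
\]

at 95 percent confidence, so the expected number of catastrophes across M planned events is bounded only by roughly 3M/N.  Such a bound is reassuring only if N vastly exceeds M and the analogy truly covers the feared mechanism; it says nothing about pathways the natural analog does not probe, which is precisely where the expert disagreements centered.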

But while knowledge conditions on these issues were particularly weak, control conditions were particularly strong—mainly because the risks arose directly from scientific research activities.  Scientific research is subject to strong control mechanisms, through public funding and associated decisions of program design and proposal review, including complex conditions such as human-subjects controls.  In addition, it is possible to impose moratoria on research judged particularly risky, such as that now in force in the United States for viral gain-of-function studies.  Even when there is substantial privately funded research, the risk-management decisions of public agencies exercise substantial normative force in persuading others to follow.  Risk control on these issues is further advantaged by the relatively small, tightly connected, and sophisticated social networks of researchers, which allow internal debates over the character and severity of risks and the likely efficacy of proposed controls to proceed with sophistication, focus, and a relative absence of inflammatory rhetoric.

In each of these three cases, there were intensive expert efforts to assess the risks and, as necessary, identify control measures.  In the two life-sciences cases, research was subjected to a moratorium while these assessment efforts proceeded.24  In the early rDNA and collider cases, these efforts generated sufficient confidence among the relevant group of expert insiders that the research proceeded—with newly tightened lab-safety protocols in place in the rDNA case—and the anticipated catastrophic risks were not realized.  The viral gain-of-function case remains under a U.S. funding moratorium and is undergoing risk assessment at present.

Even more than for the first two types, my treatment of these cases as successes is contestable.  So also is the distinction between this and the second type.  Like the second type, these cases are marked by weak knowledge conditions and strong control conditions, though here both conditions are more extreme.  Research on the risks and risk assessments were conducted, building support to let the contested research proceed.  It did—in the first two cases, while the third is still contested—and in those two cases, nothing bad happened.  But the extreme weakness of knowledge conditions here allows for conflicting interpretations of these events.  These cases may represent instances of responsible and effective risk assessment and management.  Or perhaps—if the knowledge conditions are bad enough—the attempts at risk assessment may be better understood as serving a ceremonial or ritualistic purpose than as providing a well-founded basis to judge the risks small enough to proceed, so the decision to proceed remains an existential gamble.  It remains possible—notwithstanding subsequent experience of humanity not coming to an end in each case—to interpret each case as taking a perilous step and simply being lucky.

II.  From Past to Prospect: Attributes of Hard-to-Manage Technological Risks

I do not want to paint too rosy a picture.  Not all technology-related risks have been handled equally well and easily over the past fifty years, and even to the extent they have, past success does not necessarily imply future success.  To characterize how serious the knowledge-control tension is likely to be in future risk-management issues, and to identify which are likely to pose the most severe challenges, it is informative to look back with the opposite perspective, asking what characteristics have been prominent in the issues that have posed the greatest difficulty in managing technology-related risks thus far.

As with the successes, I propose that the most challenging historical cases also fall into three broad types with consistent characteristics.  To the extent these same characteristics are prominent in future issues, they may signal particular challenges in future risk management.  As in the prior cases, I discuss these through the lens of knowledge conditions, control conditions, and the tensions between them.

A.  Newly Identified Causal Mechanisms of Harm

In the discussion of successful experiences above, the first type (in Part I.A) depended on risks being mediated by identified and measurable flows of materials or energy associated with the technology, and the mechanisms by which these flows caused harm being relatively well known—even if the technologies at issue were novel or rapidly changing.

But this actually states not one condition but two, which must be considered separately.  In some of the most prominent historical cases of environmental risk management, risks were mediated by flows of materials and energy that were identified and measurable, but the causal mechanisms by which these flows caused harm were not previously known.

A newly identified causal mechanism of harm can arise in a few ways.  Most obviously, it can be associated with the introduction of new materials, such as new synthetic chemicals, into use or commerce, but this is not the only way it can happen.  New harm mechanisms can also be associated with materials that were previously thought benign, or with materials that were already known to carry some other risk, but at levels of flow or concentration so low that they were previously thought benign.

This stylized sequence of events—identifying a new causal mechanism of harm, by which some activity or material formerly thought benign is newly recognized as harmful—characterizes nearly all the most difficult and contentious environmental issues of the past half century.  It is the basic story of photochemical smog from automobile emissions, discovered in Los Angeles by Haagen-Smit in the 1950s.25  It describes bioaccumulation and resultant ecological harms from fat-soluble organic chemicals, promulgated by Carson in the early 1960s.26  It is the story of catalyzed depletion of stratospheric ozone by chlorine and bromine atoms released from chlorofluorocarbons (CFCs), discovered by Molina and Rowland in the 1970s.27  It is the story of long-range transport and deposition of acidifying chemicals, pieced together by multiple researchers through the 1960s and 1970s.28  And it describes endocrine disruption as discovered by Colborn and her colleagues in the 1990s—this issue representing a second indictment of some of the same organic chemicals identified with another harm mechanism by Carson and colleagues thirty years before, but now causing harm at much lower concentrations through a newly identified causal mechanism.29  Greenhouse gas–driven climate change could be included on this list, but in my view does not quite fit the same historical pattern.  These other issues all leapt to policy prominence quickly upon scientific articulation of the new harm mechanisms.  For climate change, however, the basic causal mechanism has been known for more than a century, and scientific validation of its potential quantitative importance preceded its appearance on the policy agenda by two to three decades.30  Supposed scientific controversy since then—continuing today—over the reality and importance of its harm mechanism is predominantly a fabrication driven by material and ideological interests opposing controls, not by scientific disagreement.31

In these cases, belated identification of causal risk mechanisms implicating products, processes, and technologies already in large-scale use hindered both knowledge and control conditions.  A newly identified mechanism is bound to be initially regarded as uncertain and subject to sincere scientific disagreement, so delay and controversy are likely as scientific knowledge and judgments about the new claim stabilize.  In addition, because the newly claimed mechanism poses a regulatory threat to established activities, this scientific disagreement will unavoidably be mixed with materially motivated advocacy, some of it presented as scientific disagreement.  A newly identified mechanism may also imply responses to manage the risk that require new regulatory approaches, institutional capacity, statutory authority, or political mobilization.  Responding to the new risk is thus likely to involve more conflict and delay than modifying controls that fit within existing understandings.  These historical issues that raised new risk mechanisms have gradually come under control, with varying degrees of effectiveness.  But the process of getting there has consistently been slower and more contentious than on issues for which mechanisms were better or longer known.  In management of future technological risks, we should similarly expect newly identified causal mechanisms of harm to be a stronger predictor of difficulty and delay than new technologies.

B.  Technological Risks Dominated by Socio-Political Pathways

Technologies are socially embedded.  This is one of the major lessons taught by science and technology studies over the past forty years,32 and is one of the key points upon which Collingridge’s statement of the knowledge-control dilemma rests.  A large part of his reasoning for why knowledge about desired controls is not available early is that new technological capabilities remain diffuse and uncertain until they are deployed in artifacts, distributed, integrated into economic, social, and political systems—and human behavior adjusts around them.  The actual impacts of technologies, including risks, depend on all these steps.

The importance of these behavioral adjustments, individual or collective, in mediating effects of technologies is widely known.  Indeed, such reactions are the source of one of the major challenges to control of even the most prosaic environmental risks—behavioral feedbacks that undo some of the gains from risk-reducing innovations, usually called “rebound effects.”33  The canonical example of a rebound effect is the response of drivers to increased vehicle fuel economy: With the cost of driving per mile reduced, people drive more, undoing up to one-third of the fuel savings from increased efficiency.34
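A stylized calculation shows how this arithmetic works (the elasticity value here is illustrative, not taken from the cited studies).  Fuel use equals miles driven times fuel used per mile, and miles driven respond to the per-mile cost of driving with elasticity ε:

\[
F = m \cdot e, \qquad \frac{\Delta F}{F} \approx (1+\varepsilon)\,\frac{\Delta e}{e}.
\]

With a 10 percent reduction in fuel per mile and ε of about −0.3, miles driven rise roughly 3 percent and fuel use falls only about 7 percent, so roughly one-third of the engineering savings is taken back in additional driving.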

The magnitude and importance of socio-political embeddedness in determining risks or other impacts of technology is not constant across issues, however.  Even among simple rebound effects, some are large and some are small.  For many technologies, environmental and other risks are strongly determined by intrinsic properties of technologies, products, and processes—for instance, materials in commerce with direct environmental impacts, pollutants or by-products from combustion or other chemical reactions involved in production, products or processes with identified failure modes and associated risks—subject only to small variation as a function of how they are used.  Even newly identified risk pathways, such as CFC-catalyzed ozone depletion, are sometimes determined mainly by biophysical processes, not socio-economic ones.

In other cases, risks and other effects will depend more strongly on how innovations are incorporated into products and used, and on behavioral responses at various levels of aggregation, from individuals to enterprises to states.  Illustrative examples of such stronger socio-political effects, present and prospective, include: the impact of smartphones and social media on driving safety and on adolescent social and sexual behavior; the prospective concern that perceived availability of low-cost climate engineering technologies may undermine political incentives for needed cuts in greenhouse-gas emissions;35 and the prospect that the ability to make targeted genetic enhancements of children’s intelligence, strength, or other socially valued attributes may generate a competitive arms race, exacerbate social inequality, or undermine socially foundational precepts of autonomy or shared humanity.

These cases are diverse, yet still suggest certain commonalities in types of innovations whose effects are likely to depend most strongly on socio-political reactions.  They suggest that strong socio-political dependence of risks is more likely for innovations that create general capabilities open to a wide range of uses or adaptations, whether versatile capabilities and potential uses embedded in new products (such as smartphones), or versatile capabilities in new production technology (such as techniques for genetic manipulation).  In addition, even among innovations that open such wide scope for subsequent use, concern about socio-political risk pathways is likely to be greatest when the range of subsequent choices they enable includes some that implicate familiar moral or social dilemmas, such as inter-temporal choice problems that offer an immediate benefit but carry potential future harms even to the chooser, or collective-action problems in which individuals are tempted to make choices that benefit themselves at others’ expense.  The smartphone, climate-engineering, and designer-baby cases mentioned above can all readily be expressed in terms of familiar moral or strategic tropes of this kind: temptation, immediate gains versus long-term risks, arms races, and others.

Like the prior types, these cases can also usefully be analyzed in terms of the effects of these issue characteristics on knowledge conditions, control conditions, and the tension between them.  When technological risks are strongly modulated by socio-political responses, knowledge is weaker because the causal pathways implicating risks are more complex and uncertain.  They include not just direct impacts of products and production processes on biophysical processes, but also the variability of potential human behavior and preferences, at levels from individual behavior to national or global policy, potentially including motivations destructive to oneself or others.  Because these socio-political responses in part represent subsequent reaction to choice opportunities that were opened by the innovation (although only in part, since motivations aligned with such subsequent use may also have influenced the development of the innovation), these issues most strongly exhibit Collingridge’s original concern that uses and associated risks can only be known after a substantial delay.  In terms of control conditions, when something identified as a single technology can have either beneficial or harmful effects depending on how it is used, control efforts that target the technology itself are likely to be difficult to enact, and at risk of being misplaced if enacted.  Rather, issues with these characteristics tend to displace the location of potential control away from the technology itself and toward the subsequent usage or behavior, but attempts to control use and behavior pose their own difficulties.  Monitoring and enforcement are more decentralized and difficult, and socially beneficial controls are more likely to implicate liberty interests or other deeply held political values.  In this vein, it is instructive to consider the history of attempts to limit risks from firearms, whether targeting technology or behavior.

C.  Weakly Defined Technologies

Some areas of technological progress are diffuse, labile, and hard to define or bound.  These areas may be related to new methods for manipulating the nonhuman world, or new ambitions that shape inquiries in multiple areas.  Although the actual collection of capabilities, methods, artifacts, and aims at issue may be diverse or protean, these sometimes come to be bundled—in public and political imagination—into one category, perceived as one thing, and given one name.  The name can take on a life of its own in subsequent public and policy debate, becoming the locus for both breathless claims about anticipated benefits and horrific visions of potential harms—with associated calls for policy responses to support, control, or sometimes prohibit the broad, ill-defined thing.  In recent and current debates, I suggest this description applies, in varying ways, to genetically modified organisms, synthetic biology, cloning, artificial intelligence, nanotechnology, and climate engineering, and also to “fracking,” to the extent this term has come to stand for a diverse collection of extraction technologies and associated resources.

In these cases, the name becomes the focal point for debate and conflict over risks, but it often fits poorly with the heterogeneity of underlying activities and associated risks.  This mismatch can obstruct effective control of risks in several ways.  The aggregate term may obstruct informed debate if it suppresses crucial variation in actual mechanisms and severity of risk.  Debates conducted at such a high level of aggregation may tend toward polarization, or fall into unhelpful definitional arguments.  Attempts to enact concrete controls may risk serious errors of targeting, including both over- and under-inclusiveness.  Alternatively, regulatory boundaries may be drawn clumsily or statically, so that incremental advances can easily evade them—as research advances have readily evaded clumsy legislative attempts to ban cloning and stem-cell research.

Like the third type of success discussed above in Part I.C, this type might not be fully distinct, but merely a more extreme subset of the preceding type.  The diffuse, undetermined nature of these technologies can be understood as an even stronger dependence of their risks and other effects on human decisions and reactions.

The challenges of these cases can thus also be analyzed in terms of knowledge conditions and control conditions, but these conditions resemble those of the prior type, albeit in more extreme form.  The diffuse, weakly defined nature of the technology obstructs knowledge of potential risks, because their mechanisms are so heterogeneous and so dependent on specific ways the diffuse capabilities may be implemented and used.  At the same time, effective control is hindered because attempted controls risk being misplaced, being over- or under-inclusive relative to actual risks, or being easy to evade through incremental shifts in how capabilities are deployed or used.

III.  Potential Responses, Their Limits, and Their Costs

These issue types merely aim to illustrate conditions associated with stronger or weaker risk management in the past, with suggestive implications for future issues.  They do not purport to be a taxonomy: They are not sharply defined, entirely distinct from each other, or exhaustive of all risk-management cases.  They do, however, identify dimensions of variation likely to remain diverse in current and future risk-management issues, and diagnostic of the severity of associated risk-management challenges.  For example, they suggest that the risk-management agenda will continue to include prosaic issues that call for incremental and straightforward adjustment of current policies and practices, driven by technological changes and other factors.

Yet there are also two reasons to expect an increase in the harder types of risk-management issue.  First, as aggregate human activities press more strongly against global-scale processes and constraints of the finite Earth, it is reasonable to expect more risks that involve causal mechanisms—geophysical or ecological—that have not previously been thoroughly explored or well understood, even if they were known in theory.  Second, several directions of broad technical advance now underway—including synthetic biology and other interventions enabled by ever more powerful tools of genomic manipulation, climate engineering, artificial intelligence, and neuroscience, all of these enabled by continuing advances in IT and nanotechnology—suggest transformative expansions of potential applications and uses, making it likely that more and higher-stakes risk issues will be driven by interactions between technical advances and socio-political responses.

What does this mean for responses?  Every issue is unique, of course, and the details matter.  Yet a few general points about how to maintain an adequate overlap of knowledge conditions and control conditions also suggest themselves.

First, regarding knowledge conditions: If important risks are increasingly driven by mechanisms for which knowledge is weaker—either because they are less well established scientifically, or because they are strongly driven by socio-political reactions rather than intrinsic material properties of technologies—and the option of waiting for better knowledge is unacceptable due to the risk of incurring serious harms while waiting, what then is to be done?  One response would be to open up risk assessment processes to inquiries that are more exploratory and speculative, less modeled on scientific practices and authorities—stepping back from standardized hypothesis-testing norms as the threshold for taking claims seriously, yet somehow without giving up disciplined, critical inquiry.  The aim of such processes would not be to authoritatively establish new factual claims about the world, but to characterize potential risk pathways and bound possibilities, even under deep uncertainty about the specific form and uses of innovations and associated risk mechanisms.  Such processes would take socio-political reaction to innovations seriously (although hopefully would not be dominated by worst-case thinking), and would be willing to pose questions of the form, “How might people want to use this capability?”  One aim of such processes would be to make certain kinds of speculation about weakly established mechanisms or human responses respectable elements of risk-control debates—yet at the same time, to provide some degree of discipline to the speculation, somehow anchored around explicit (dare I hope, even quantitative) discussions of uncertainty.

Second, regarding control conditions: If, by assumption, high-stakes risk control decisions must be made under increasingly deep uncertainty, this implies the need for greater willingness to enact controls without knowing the precise terms of desired control—to over-control in the interests of precaution—including a wider range of scope and form of controls.  It might be necessary, for example, to intervene in high-stakes technological innovations at early stages of their development; to exercise greater control over scientific research itself, including explicit restrictions on research in areas judged likely to generate high-risk and hard-to-control capabilities; or to impose advance restrictions on particular applications or uses of advancing technological capabilities.

If this seems radically dirigiste, a current trend illustrates why it might nevertheless be needed.  Advanced genetic editing kits based on CRISPR/Cas9 technology are starting to be available commercially at low cost (about $200).  Although capabilities of these are limited at present, enthusiasts are anticipating a do-it-yourself synthetic biology movement, with thousands of hobbyists creating new designer microbes in their homes.  Surely nothing could go wrong with that.  In the face of such distributed capabilities, options for societal control over risks appear limited to two broad approaches: Either intervene earlier in the process of research and innovation to monitor or limit subsequent creation and distribution of dangerous capabilities; or give up, ride the wave, and assume it will all be fine.

If the choice is not just to hope for the best but to broaden regulatory authority under weaker knowledge conditions, two consequences appear to follow.  First, since controls would be adopted under deeper uncertainty, they must also be able to adapt in response to greater knowledge and experience, including the ability to tighten, loosen, or change the scope and character of previously enacted controls.  Second, more regulatory authority should be delegated to technically competent, knowledge-driven processes—including some authority to define the scope and boundaries of the technologies, uses, or impacts subject to regulation, a function understood to involve legislative aspects in present regulatory systems.  This would represent a real, albeit incremental, shift in the balance of regulatory authority, from democratically constituted decision processes toward technocratic ones.36

All these changes would raise serious questions and concerns.  Broadened regulatory authority raises problematic issues of procedural recourse for regulated entities, of accountability for regulatory decisions, and of broader democratic legitimacy.  It would also represent a stark break with current regulatory culture, in which it is presumed that research, technologies, and uses may proceed absent strongly established and compelling societal interests to the contrary.  This culture may already be starting to shift for transformative new technologies, however.  Calls for strict control or even broad prohibitions on research or technology are increasingly common in artificial intelligence, various areas in life sciences, and other fields, including from prominent conservative and industry figures.37  These calls have elicited forceful reactions based on expansive liberty claims, but also more measured objections based on the prudence and feasibility of attempting such controls.38  The debate does suggest, however, that my proposals for expanded regulatory authority are not particularly radical.  Indeed, these proposals can be viewed as an attempt to find viable middle ground between present regulatory processes that look increasingly ineffectual, and broad calls for prohibitions.

[1].        See [small-caps]Jacques Ellul, The Technological Society [end-small-caps](1964); [small-caps]Langdon Winner, Autonomous Technology[end-small-caps] (1977).

[2].        See [small-caps]David H. Guston, Between Politics and Science[end-small-caps] (2000).

[3].        See, e.g., [small-caps]Francis Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution [end-small-caps](2002).

[4].        See [small-caps]Douglas A. Kysar, Regulating From Nowhere: Environmental Law and the Search for Objectivity[end-small-caps] (2010); Lynton K. Caldwell, Beyond NEPA: Future Significance of the National Environmental Policy Act, 22 [small-caps]Harv. Envtl. L. Rev. [end-small-caps]203 (1998).

[5].        [small-caps]David Collingridge, The Social Control of Technology[end-small-caps] (1980).

[6].        Donald Ludwig et al., Uncertainty, Resource Exploitation, and Conservation: Lessons From History, 260 [small-caps]Sci.[end-small-caps] 17, 36 (1993).  These authors were colleagues of C.S. (“Buzz”) Holling, and pioneers with him of the concept of “adaptive management”—a profound aspiration for human interactions with natural systems that has suffered the misfortune of descending into cliché without being adequately elaborated.  See [small-caps]Adaptive Environmental Assessment and Management[end-small-caps] (C.S. Holling ed., 1978); see also [small-caps]Panarchy: Understanding Transformations in Human and Natural Systems[end-small-caps] (Lance H. Gunderson & C.S. Holling eds., 2002).

[7].        Richard J. Lazarus, Judging Environmental Law, 18 [small-caps]Tulane Envtl. L.J.[end-small-caps] 201 (2004).

[8].        See, for example, the detailed discussion of technical progress in Chlorofluorocarbon (CFC) alternatives in [small-caps]Edward A. Parson, Protecting the Ozone Layer: Science and Strategy [end-small-caps](2003).

[9].        Jennie C. Stephens & Edward A. Parson, Industry and Government Strategies Related to Technical Uncertainty in Environmental Regulation: Pollution From Automobiles (Oct. 16–18, 2003) (prepared for presentation at the Open Meeting of the Global Environmental Change Research Community, Montreal, Canada), http://sedac.ciesin.columbia.edu/openmtg/docs/Stephens.pdf.

[10].     [small-caps]Arnulf Grübler, Technology and Global Change [end-small-caps]367–92 (1998).

[11].     John H. Perkins, Development of Risk Assessment for Nuclear Power: Lessons From History, 4 [small-caps]J. Envtl Stud. & Sci. [end-small-caps]273 (2014); see also M. Granger Morgan et al., Why Conventional Tools of Policy Analysis Are Often Inadequate for Global Change, 41 [small-caps]Climatic Change[end-small-caps] 271 (1999).

[12].     [small-caps]Charles Perrow, Normal Accidents: Living With High-Risk Technologies [end-small-caps](Princeton Univ. Press 1999) (1984).

[13].     Alvin M. Weinberg, Social Institutions and Nuclear Energy, 177 [small-caps]Sci.[end-small-caps] 27, 34 (1972).

[14].    [small-caps] Int’l Civil Aviation Org., Safety Report[end-small-caps] (2016).

[15].     See Backgrounder on the Three Mile Island Accident, [small-caps]U.S. Nuclear Reg. Commission[end-small-caps], http://www.nrc.gov/reading-rm/doc-collections/fact-sheets/3mile-isle.html [https://perma.cc/UHV2-GGA5] (last updated Dec. 12, 2014).

[16].     Maureen C. Hatch et al., Cancer Near the Three Mile Island Nuclear Plant: Radiation Emissions, 132 [small-caps]Am. J. Epidemiology [end-small-caps]397 (1990).  These numerical estimates count all premature deaths that have been or will be caused by radiation exposure from the accident.  For exposure levels as low as those at Three Mile Island, there are no observable near-term health effects, so premature deaths are all due to eventual induction of cancers and their number is estimated statistically based on known dose-response functions.  No individual cancer death can be attributed to the exposure, only the aggregate number.
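For illustration only (the dose and risk coefficient below are hypothetical assumptions for exposition, not figures from the cited study), such aggregate estimates are typically formed under a linear, no-threshold model by multiplying a collective population dose by a nominal risk coefficient:

\[
\text{expected premature deaths} \;\approx\; D_{\text{collective}} \times r,
\]

where \(D_{\text{collective}}\) is collective dose in person-sieverts and \(r\) is a nominal fatal-cancer risk coefficient on the order of 0.05 per person-sievert.  Under these illustrative assumptions, a hypothetical collective dose of 2,000 person-Sv would imply roughly \(2{,}000 \times 0.05 = 100\) statistically expected deaths, none attributable to any identifiable individual.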

[17].     The approximately 150 workers in the plant during and after the accident received substantially higher doses than any members of the public, at levels likely to be associated with approximately one or two premature cancer deaths among this group.  See Geoff Brumfiel, Fukushima’s Doses Tallied, 485 [small-caps]Nature [end-small-caps]423 (2012).

[18].     U.N. Sci. Comm. on the Effects of Atomic Radiation, Rep. to the General Assembly With Scientific Annexes, U.N. Doc. A/68/46 (2013).  The tsunami and earthquake that triggered the reactor failure, as distinct from the reactor failure, killed nearly 16,000 people and displaced more than 200,000.

[19].     [small-caps]Chernobyl Forum, Chernobyl’s Legacy: Health, Environmental, and Socio-economic Impacts[end-small-caps] 9 (2d rev. version 2003–2005).  These statistical estimates of total future deaths vary widely: This United Nations estimate represents a substantial downward revision from an earlier estimate of 10,000 by UNSCEAR (the United Nations Scientific Committee on the Effects of Atomic Radiation).

[20].     [small-caps]Clean Air Task Force, Dirty Air, Dirty Power: Mortality and Health Damage Due to Air Pollution From Power Plants [end-small-caps]12 (2004); [small-caps]Clean Air Task Force, The Toll From Coal: An Updated Assessment of Death and Disease From America’s Dirtiest Energy Source[end-small-caps] 5 (2010).

[21].     Paul Berg & Maxine F. Singer, The Recombinant DNA Controversy: Twenty Years Later, 92 [small-caps]Proc. Nat’l Acad. Sci. U.S.[end-small-caps] 9011 (1995).

[22].     Simon Wain-Hobson, H5N1 Viral-Engineering Dangers Will Not Go Away, 495 [small-caps]Nature [end-small-caps]411 (2013).

[23].     [small-caps]Richard A. Posner, Catastrophe: Risk and Response [end-small-caps](2004); Edward A. Parson, The Big One: A Review of Richard Posner’s Catastrophe: Risk and Response, 45 [small-caps]J. Econ. Literature[end-small-caps] 147 (2007).

[24].     Megan Herzog & Edward A. Parson, Moratoria for Global Governance and Contested Technology: The Case of Climate Engineering (UCLA Law Sch. Pub. Law Research Paper No. 16–17, 2016), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2763378.  Explicit moratoria were not judged necessary in the collider cases, since the experiments of concern could only be conducted in the two enormous facilities at issue, which were in their long construction periods when the risk debates occurred.

[25].     See generally A.J. Haagen-Smit, The Air Pollution Problem in Los Angeles, 14 [small-caps]Engineering & Sci. [end-small-caps]7 (1950).

[26].     [small-caps]Rachel Carson, Silent Spring[end-small-caps] (1962).

[27].     Mario J. Molina & F. S. Rowland, Stratospheric Sink for Chlorofluoromethanes: Chlorine Atom-Catalysed Destruction of Ozone, 249 [small-caps]Nature [end-small-caps]810 (1974).

[28].     See, e.g., Gene E. Likens & F. Herbert Bormann, Acid Rain: A Serious Regional Environmental Problem, 184 [small-caps]Sci. [end-small-caps]1176 (1974).

[29].     [small-caps]Theo Colborn et al., Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival?—A Scientific Detective Story [end-small-caps](1996).

[30].     See generally [small-caps]Spencer R. Weart, The Discovery of Global Warming[end-small-caps] (2003).

[31].     [small-caps]Andrew E. Dessler & Edward A. Parson, The Science and Politics of Global Climate Change: A Guide to the Debate[end-small-caps] (7th prtg. 2009); [small-caps]Naomi Oreskes & Erik M. Conway, Merchants of Doubt[end-small-caps] (2010).

[32].     See, e.g., [small-caps]The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology[end-small-caps] (Wiebe E. Bijker et al. eds., 1987).

[33].     These effects in their most extreme form are called “Jevons’ paradox,” after William Stanley Jevons’s observation that increased efficiency of coal use can induce demand responses to the improved performance and reduced cost of coal-using equipment strong enough that total coal consumption increases: the induced increases in demand exceed the direct efficiency-driven reductions.  See [small-caps]W. Stanley Jevons, The Coal Question [end-small-caps](A.W. Flux ed., Augustus M. Kelley 3d rev. ed. 1965) (1865).
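As a hedged illustration (the notation is mine, not Jevons’s or the cited sources’), the logic can be stated in elasticity terms.  If an energy service \(S\) is produced from energy \(E\) at efficiency \(\varepsilon\), so that \(S = \varepsilon E\), then

\[
\eta_{E,\varepsilon} \;=\; \eta_{S,\varepsilon} - 1,
\]

where \(\eta_{S,\varepsilon}\) is the elasticity of service demand with respect to efficiency.  Energy use falls with improved efficiency only if \(\eta_{S,\varepsilon} < 1\); the Jevons case of “backfire” corresponds to \(\eta_{S,\varepsilon} > 1\), where induced demand growth outweighs the direct efficiency saving.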

[34].     See Steve Sorrell et al., Empirical Estimates of the Direct Rebound Effect: A Review, 37 [small-caps]Energy Pol’y [end-small-caps]1356, 1360 (2009).

[35].     Edward A. Parson & Lia N. Ernst, International Governance of Climate Engineering, 14 [small-caps]Theoretical Inquiries in Law [end-small-caps]307 (2013).

[36].     See Edward A. Parson, Expertise and Evidence in Public Policy: In Defense of (a Little) Technocracy, in [small-caps]A Subtle Balance[end-small-caps] 42 (Parson ed., 2015).

[37].     See, e.g., [small-caps]Fukuyama[end-small-caps], supra note 3; Autonomous Weapons: An Open Letter From AI & Robotics Researchers, [small-caps]Future of Life Inst.[end-small-caps] (July 28, 2015), http://futureoflife.org/open-letter-autonomous-weapons [https://perma.cc/4G68-2LT8]; Bill Joy, Why the Future Doesn’t Need Us, [small-caps]Wired [end-small-caps](Apr. 1, 2000, 12:00 PM), https://www.wired.com/2000/04/joy-2 [https://perma.cc/YHV5-YEF2].

[38].     See, e.g., Gary E. Marchant & Lynda L. Pope, The Problems With Forbidding Science, 15 [small-caps]Sci. Engineering Ethics[end-small-caps] 375 (2009).

About the Author

Edward A. Parson is Dan and Rae Emmett Professor of Environmental Law and Faculty co-director of the Emmett Institute on Climate Change and the Environment at the University of California, Los Angeles.
