Imagine a world that is aggressively engineered for us to achieve highly desirable objectives. In this hypothetical future, technology will serve as the means for governing—or one might say, micromanaging—our world to prioritize three distinctive yet interrelated normative ends: optimized transactional efficiency, resource productivity, and human happiness.
Now, even though we do not currently live in such a world, the technologies required for it to exist are already being rapidly developed and deployed. Take, for example, promoters of the Internet of Things, big data, sensors, algorithms, artificial intelligence, and various other related technologies. They make seductive promises, including that increased “intelligence”—“smart” phones, grids, cars, homes, clothing, and so on—will minimize transaction costs, optimize productivity, and thus inevitably increase our happiness.
It is important to note that despite what some law and economics professionals might say about our current world, society is not really structured to optimize social institutions so as to maximize efficiency, productivity, or happiness. In fact, society usually takes the opposite approach. The social option value of leaving a wide range of opportunities open for the future generally exceeds the value that society could realize by trying to optimize systems in the present.2 In other words, at least in the United States, the default operating principle of social governance of people and shared resources is to leave things underdetermined; this allows individuals and groups to engage in self-determination with different outcomes, depending on the context and changing conditions.3
Our world already seems to be changing rapidly. Technologies govern so much of our day-to-day activities, and do so with such powerful consequences, that it can be difficult for social institutions to keep pace. Cynically stated, tech enthusiasts have been known to celebrate disruptive innovation without critically examining how social practices, customs, or institutions are undermined.4
Now let us turn back to the future world we have been asked to imagine for the Program on Understanding Law, Science, and Evidence (PULSE) conference. There, we will even more thoroughly rely on technologies to intelligently govern our behavior. To be clear from the start, we do not believe this reliance will come about because technologies will have become sentient, autonomous artificial intelligences (AIs) that enslave humanity. Instead, our hypothesis is much simpler and, we think, more plausible than the Frankensteinian alternatives. We imagine that within the next few decades, (1) we will have gradually built and connected smart techno-social environments that actually deliver on their promises; (2) the scope of deployment will expand to the point where there is seamless interconnection and likely integration across all environments within which humans live; and (3) the normative agenda executed throughout all this construction and deployment will be optimal efficiency, productivity, and happiness.5
The path of engineered determinism we are heading down surely allows for many different futures; a change in direction, even 180 degrees, is always possible. Nothing, other than entropy (as Isaac Asimov suggested in The Last Question), is inevitable. But there are many reasons to believe that the future envisioned here is plausible, and given the path we are on, it may even be a reasonable approximation of what lies ahead.
If the world we are envisioning seems awfully stark, know that its intellectual seeds have already been sown. To explain why this is the case, we will focus on three prominent thinkers from the twentieth century who provided three critical pieces of the utopian puzzle: First, we consider Ronald Coase, the Nobel Prize-winning economist, whose work has had a profound influence in economic, legal, and policy circles.6 Second, we focus on Frederick Taylor, whose theories about scientific management of labor revolutionized the management of humans in the workplace and beyond.7 Third, we consider Robert Nozick, a philosopher whose thought experiment about an experience machine that one could plug into presciently anticipated the role of technology as a means and framed the fundamental normative struggle over whether to succumb to hedonism.8
Coase envisioned our possible future, although he did not know it at the time.9 He was concerned with the costs associated with market transactions—specifically how different institutions affected those costs. He famously imagined a world without transaction costs and postulated that in such a world we would not need to worry about how ownership of resources was allocated or even about how ownership rights were designed. It would not matter who owned what because everything would sort itself out efficiently through market exchange. The beautiful magic of friction-free, costless exchange would inevitably lead to efficiency, and thus maximum social welfare, and thus happiness, and thus . . . UTOPIA!
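The bargaining logic behind this claim can be sketched with a toy numerical example in the spirit of Coase’s famous farmer-and-rancher illustration from The Problem of Social Cost. The figures, function name, and outcome labels below are hypothetical, chosen only to show that under costless bargaining the allocation of the resource (build the fence or not) comes out the same under either assignment of rights:

```python
# Toy sketch of the Coase Theorem (hypothetical numbers).
# A rancher's straying cattle damage a farmer's crops; a fence would
# prevent the damage. With zero transaction costs, the parties bargain
# to the efficient outcome regardless of who holds the entitlement.

def outcome(damage, fence_cost, entitlement):
    """Return the bargained outcome under costless negotiation.

    entitlement: 'farmer'  (right to be free of crop damage) or
                 'rancher' (right to let cattle roam).
    """
    efficient = fence_cost < damage  # total welfare is higher with a fence
    if entitlement == "farmer":
        # The rancher must prevent damage or compensate the farmer; she
        # builds the fence only when that is cheaper than paying damages.
        return "fence" if efficient else "compensation paid, no fence"
    else:
        # The farmer must pay the rancher to abate or bear the damage; he
        # pays for a fence only when that is cheaper than the damage.
        return "fence" if efficient else "damage borne, no fence"

for damage, fence_cost in [(100, 60), (100, 150)]:
    a = outcome(damage, fence_cost, "farmer")
    b = outcome(damage, fence_cost, "rancher")
    # The allocation (fence or no fence) is identical under either
    # assignment of rights; only who pays whom differs.
    assert (a == "fence") == (b == "fence")
```

Only the distribution of wealth shifts across the two entitlement assignments; the resource allocation itself is identical, which is the core of the theorem’s claim about a world without transaction costs.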
George Stigler, another Nobel Prize-winning economist, labeled this the Coase Theorem, and generations of law and economics professionals debated its implications for law and policy.10 Ironically, the Coase Theorem was not actually Coase’s vision of the future at all. Coase primarily intended the imagined world to serve as a simple baseline with which to assess, evaluate, and compare various institutions and real-world situations.11 Coase understood very well that the real world was rife with transaction costs, externalities, and various other complications that made the imagined zero transaction cost world an obvious fairy tale. And so it is not surprising that Coase did not elaborate much on possible means for achieving the imagined zero transaction cost world; again, he did not hold it out as a utopian end to aim for in the first place.12
But, unfortunately, many others did. For decades, the frictionless world was mistakenly taken, via the Coase Theorem, to be Coase’s vision of the ideal. For a few, it described, within reasonable approximation, many real-world markets. For many others, it described an achievable state of affairs and thus a goal. This is how George Stigler and many others fashioned it. Transaction costs stood in the way of efficiency and ultimately utopia. So we ought to eliminate them.
Putting aside an analysis of what Coase actually meant, we note how the Coase Theorem and its idealization of the frictionless world have influenced the path we are on. It is difficult to trace intellectual history and measure influence, and we may be conflating many different influences into the Coase Theorem. But if that is the case, consider it emblematic of a larger trend. It seems fair to say that the Coase Theorem, at least implicitly, underwrote the idea that minimizing transaction costs is in and of itself an objective worthy of pursuit by society.
The means to pursue this objective naturally include both social-institutional and technological innovation.13 Coase (and generations of law and economics professionals) focused on social-institutional means—institutions, such as property law, and organizations, such as the firm. But it is important to recognize that technological substitutes for these means are and will continue to be available and increasingly relevant. For example, smart contracts and (semi) autonomous virtual organizations are technological substitutes that might dramatically reduce the transaction costs associated with using conventional institutional and organizational forms.14 Modern technologies and many technologies on the horizon will serve as efficient means for minimizing transaction costs across various contexts.
Transaction costs can be defined broadly or narrowly. Coase focused somewhat broadly on the various costs associated with consummating a market transaction, including the costs of identifying parties, negotiating terms, enforcing agreements, and so on.15 Harold Demsetz (and others) criticized the scope of Coase’s analysis and advocated for a narrower definition.16 Of course, one can just as easily head in the opposite direction and criticize Coase for being too narrow: Why focus only on formal market transactions? There are so many other transactions—or even more broadly, human interactions—inefficiently stifled by transaction costs. Our point, therefore, is to note that the ambiguous breadth of the definition reveals how easy it is to extend the underlying concept and accompanying logic. While property law experts may nitpick the fine contours for purposes of evaluating the optimal design of property rights, the power of the Coase Theorem to shape economic, political, and ideological agendas was not and is not limited by such nitpicking or nuance.
Take, for a counterintuitive example, the rise of behavioral law and economics and in particular the “nudging agenda.”17 Cass Sunstein, Richard Thaler, and others probably would not associate nudging with Coase, much less the Coase Theorem or the goal of minimizing transaction costs to enable efficient private ordering. But we think it is reasonable to reframe nudging in Coasean terms as an agenda that aims to minimize if not eliminate a subclass of transaction costs (or at least a close cousin to them), specifically costs associated with errors in human decisionmaking. Sometimes the relevant costs concern transactions between different people and sometimes they concern “transactions” between the same person (such as between me and my future self—think about saving for retirement or eating healthier food). In a frictionless world, nudging might not be needed at all; cognitively induced errors presumably would be easily (costlessly) corrected. In our friction-filled world, nudging entails interventions that reshape the “choice architecture” people encounter for the purpose of minimizing costs (and costly errors) attributable to cognitive biases and other impediments to efficiency, and ultimately, dare we say, . . . utopia.18
Our extension of the Coase Theorem to the nudging agenda admittedly depends on an expansion of the concept of transaction costs well beyond the costs directly associated with consummating market transactions. We are comfortable with this move for a few reasons. First, over the past half century, economics has loosened the boundaries between markets and nonmarkets, such that economic analysis of nonmarket systems often employs the logic and tools of market analysis. Second, it is often hard to differentiate transactions from many person-to-person interactions that are not technically market transactions but nonetheless entail similar frictions or costs. Third, and most important, the logic of the Coase Theorem generalizes and, at least in the world we are imagining, does not depend on the magic of markets; instead, it depends on smart technologies.
B. Taylorist Utopia: A Vision of Scientifically Managed Human Labor and Perfectly Productive Workplaces19
Another visionary from the twentieth century, Frederick Taylor, dramatically influenced modern society both as it currently exists and as we imagine it could. Taylor famously developed his theories about scientific management of humans in the workplace.20 Like Coase, his work was motivated by concerns about efficiency. Taylor saw substantial inefficiencies in factories and other workplaces, and he attributed many of the inefficiencies to mismanagement of labor. He carefully studied workers and their work, examining minute details of tasks performed in the workplace. Based on the data collected, Taylor developed a system for optimizing their performance with the objective of increased efficiency and productivity.
At one level, Taylor’s management system is a type of technology,21 one that depends heavily on data. Taylorism is one of the best early examples of data-driven innovation, a concept currently in vogue.22 Taylor and his disciples relied on personal observations written in notebooks and careful analysis of various inputs, outputs, processes, and procedures across the many workplaces they studied. Taylor’s critics alleged (accurately in many cases) that Taylor’s prescriptions for management often had an ad hoc flavor to them; when the data was incomplete, Taylor relied on his own judgment, which could not be considered scientific. Yet those gaps in data would close. Twentieth century technological innovations, ranging from the computer to the camera, dramatically upgraded the capability of managers to gather, process, evaluate, and act upon data.23 Taylorism spread across industries and beyond the factory floor, to hospitals, schools, and various other contexts.
At a more fundamental level, Taylor’s management system was a revolutionary system for the techno-social engineering of humans.24 Taylorism and Fordism are famous both for their underlying objectives (to increase efficiency, quality, and productivity for the ultimate benefit of managers, owners, and capitalists) and means (by managing factory workers in various ways that get them to behave like machines).25 We emphasize here a critically important aspect of this type of techno-social engineering. It is the environmental nature of the means, the way in which the managers employing the management practices advocated by Taylor reconstructed the physical and social environments within which their workers worked. The factory not only produced whatever widget the company eventually sold, but it also produced machine-like humans, sometimes referred to as automatons. Critics recognized and railed against this effect on workers. Workplaces and schools are interesting examples because both define and in part are defined by the physical spaces, social institutions, and increasingly by the technologies that together constitute particular environments designed to engineer humans.26
Today Taylorism remains pervasive. Taylorism had its ups and downs across business schools, management consultancies, and factory floors throughout the twentieth century. Some companies moved away from it to alternative systems for managing labor. Nonetheless, the basic principles of Taylorism have become deeply embedded in how society conceptualizes all sorts of management, ranging from businesses to government to schools to amateur athletics to child rearing.
Tomorrow, or at least in the not-so-distant future we are imagining, Taylorism is one of the building-block philosophies that shape the engineered determinism we have posited. We suspect that the trend toward workplace surveillance and management driven by data about human labor, task performance, and so on will only grow and expand in scope.
Uber-ization of human resources (time, attention, effort, etc.) across various industries is simply a form of Taylorism extended beyond formal employer-employee contexts. Like vehicles, physical space, and computing resources, human physical labor can be uber-ized, meaning optimized for on-demand allocation determined by data and algorithms.
If the Coasean vision is one of frictionless interaction, transaction, and exchange, the Taylorist vision is complementary but focused on minimizing a different set of costs, namely those associated with misallocated or wasted human capital. Ironically, in the near future, eliminating productive inefficiencies that arise from mismanagement of labor might entail getting rid of human managers altogether and turning instead to smart technologies.
But here is the rub: There is no reason to limit technologically optimized and implemented Taylorism to traditional work. We imagine the logic extending to a much wider range of actions that depend upon human labor (time, attention, effort, etc.)—whether driving a car, caring for one’s children, exercising one’s body and mind, or virtually any other human activity. In the future we are imagining, intelligent technological systems—and not necessarily sentient ones—will be deployed to maximize human productivity throughout our lives. This is the Taylorist utopia.
C. Nozick’s Experience Machine (Utopia or Dystopia?): A Vision of Technologically Managed Experience and Perfectly Happy Lives
You might be wondering: So what will we be doing while smart tech systems are managing everything—driving our cars, caring for our children, even managing our bodies as we (or they?) perform physical tasks? The answer is simple: We will be entertained!
In the 1970s, philosopher Robert Nozick wondered whether he or anyone else would choose to be plugged into a hypothetical “experience machine” that created any experience he desired. For example, he could experience taking on many different roles, such as being a great novelist, a father, or a saint. Nozick constructed the thought experiment to challenge hedonism and the belief that all that matters in life—in being human—is our subjective experience. “Would you plug in?” he asked. “What else can matter to us, other than how our lives feel from inside?”
Nozick’s thought experiment raises many important puzzles and some of the most interesting questions are buried in his guiding assumptions. For example, does it matter that Nozick proposed a single machine as the tool that would supply an optimal life experience?
Nozick seemed to imagine a huge mainframe computer that one would plug into, and he left the details concerning the techno-social engineers who built the machine in the background, as if they were mere cogs in the machine. Nozick may have been wrong about the specifics of the machine; after all, mainframes seem so old-fashioned. But he was not so far off in other respects. Presciently, he imagined superefficient neuropsychologists (rather than philosophers, theologians, economists, politicians, and various others) as the relevant experts on human experience who could supply us with the sensations we desire and ostensibly crave.
We have not built Nozick’s machine, but we are making progress on a different model. Let us call it the Experience Machine n.0. In the years since Nozick formulated his ideas, our techno-social engineers have dramatically improved their tools and capabilities to shape both our desires and experiences. The Experience Machine n.0 will not be a 1970s-era mainframe computer that one plugs into with a cord. Nor will it be the dystopian world of The Matrix, where machines enslave humanity, using us as fuel cells while satiating us with virtual experiences; our techno-social engineers are humans, not sentient machines. Instead, it will be a distributed, interconnected network of sensors, computers, and related technologies. The Experience Machine n.0 will be environmental. It will be our environment. Because it will be deployed and integrated incrementally over decades, everyone will have been gradually prepared for and conditioned to accept it. This Experience Machine n.0 will reshape our entire world and ultimately us.
Nozick and those who have since engaged his thought experiment assumed we would have a choice. It is important that he asked if we would voluntarily plug in. The presumption of choice, however, should not simply be taken for granted. In reality, whether you have the practical freedom to choose to plug in or out remains an open question.27 It just might be the most important question of the twenty-first century.
Its answer depends upon our path, or how we get to the distributed Experience Machine n.0. So we should ask: How could we get to a Coasean-Taylorist-Hedonic Utopia?
One answer is: a slippery slope.
Another answer is: engineered complacency.
Another answer is: the aggregation of trillions of perfectly rational choices.28
Yet another answer is: ubiquitous deployment of smart tech resource management systems for the purposes of maximizing human happiness at minimal social cost.
The last answer suggests the means are technological systems that society has gradually deployed. It also identifies the optimization criterion, or simply the relevant end: maximum human happiness at minimal social cost. This end flows quite naturally from the merger of the utopian visions and logics as well as the social and technological trends we are seeing today.
We would be remiss if we did not point out an important implication of the optimization criterion. It seems to us that the cheapest way to make large numbers of human beings perfectly happy—particularly when using the sorts of technological means we are imagining and assuming such technologies are deployed gradually over decades—is to set expectations very low, in which case the tech system need only meet or barely surpass them. As hedonists know and are prone to emphasize, people adapt to their conditions, and their happiness levels typically adjust accordingly. So the goal might very well be to shape beliefs, preferences, and expectations in a manner that makes supplying happiness cheap and easy. At the end of the day, cheap satiation might constitute optimal happiness.
Connecting the three visions of utopia, we are left with a techno-social-scientific system for managing humans, both as objects and subjects. Taylor once said, “In the past the man was first; in the future the system must be first.” In the next Part, we consider a thought experiment to explore what it might mean for humans to be cogs within the techno-social-scientific system built to deliver a Coasean-Taylorist-Hedonic Utopia.
How much control over your life would you outsource to technology? And at what point, if any, would delegating control to machines become dehumanizing? These are tough questions. To help you think them through, we would like to direct your attention to a vision of the future. Here is a scene adapted from Shephard’s Drone, an unpublished novel that one of us wrote.
Hundreds of people are walking. The sidewalk is congested, and one out of every three walkers is a stumbling, bumbling idiot—either meandering like a snake or stopping suddenly like a meerkat, chatting away on a cell phone or worse, swiping and thumbing a screen on their mobile devices, oblivious to everyone else around them, just not giving a hoot about anyone besides themselves and whoever they’re interacting with—if, indeed, it really is an actual person and not the latest cat video on YouTube. Meandering snakes. Sudden-stop meerkats. Totally annoying!
I-80 during rush hour. Murderous rage and frustration. For some, desperation—those poor souls who had to pee! The highway was congested. Bumper to bumper traffic swelled and surged and then suddenly stopped in a flash mob of red taillights. He thought of a huge swarm of meerkats running full speed and then freezing at the first sign of danger.
I-80 during rush hour. Elation. Relief. Traffic was moving, managed, in sync. The cars were equipped with auto-drive systems and received data from the highway sensor networks. Ants. Awesome sensing, communicating, cooperative management systems. Content ants.
Google announces a new version of its long-defunct wearable device, Google Glass. This is a game changer. Revolutionary, in fact. So long cell phones, smart phones, hand-held mobile whatevers. Until now, there had been healthy competition in the mobile communications and computation sector. But this changed everything. No one expected the synergistic combination. The glue technology, the one that made it possible, was the motor function management software and the interface through Google Glass with the human brain and body.
Initially, the tech was developed as a small independent project to help accident victims who were paralyzed or lost control of certain parts of their bodies. Who’d have thought to combine the three technologies—Google Glass, automated, self-driving cars, and the motor function management system? Utterly brilliant. He watched hundreds of people walking and marveled: Snakes and meerkats to ants.
What this scene describes is a wearable technology that allows people to delegate the mundane task of physical movement through the world to a complex navigation, sensory, and motor function management technology. But further into the action, the novel ups the ante and describes implanted chips that modify humans in part by connecting them to ubiquitous sensor networks.
Of course, this is science fiction. But so too are many thought experiments that help narrow our attention to essential considerations. In this case, the scenario is less farfetched than you might think. Max Pfeiffer, a researcher in the Human-Computer Interaction Group at the University of Hanover in Germany, ran an experiment in which he manipulated how students navigated through a park.29 By stimulating their sartorius muscles with electrical current, he directly guided their turns, nudging movements to the left and right. While this scenario sounds like Invasion of the Body Snatchers science fiction, apparently the combination of existing smartphone technology and electrodes is all that is needed to inaugurate an innovative “pedestrian navigation paradigm.”
Pfeiffer successfully assumed the role of an aggressive GPS device, but his prototype is not ready to compete with Waze. Still, he has a successful proof of concept, and this makes it hard to avoid speculating on what future, fully automated, consumer versions of the technology might be used for. Like all optimistic researchers, Pfeiffer and his collaborators imagine a range of socially beneficial applications. Their vision revolves around three types of experience—multitasking, dispensing precise geo-location information, and coordinating group movement—which are embodied in several appealing scenarios: enhanced fitness (think of runners easily trying out new routes and optimizing existing routines); novel sports (imagine coaches going beyond today’s limits of merely proposing suggested game plays, and literally choreographing how their teams move); improved job performance (picture firefighters effortlessly zeroing in on danger zones); upgraded crowd control (envision concert-goers immediately knowing how to find their seats in a large stadium and how to clear out in an orderly and calm manner if an emergency arises); and, of course, low transaction cost navigation.
To truly get a handle on the significance of actuated navigation, we need to do more than imagine rosy possibilities. We also need to confront the basic moral and political question of outsourcing and ask when delegating a task to a third party has hidden costs. To narrow our focus, consider the case of guided strolling. On the plus side, Pfeiffer suggests that senior citizens will appreciate help returning home when they are feeling discombobulated; tourists will enjoy seeing more sights while freed up by the pedestrian version of cruise control; and friends, family, and co-workers will get more out of life by safely throwing themselves into engrossing, peripatetic conversation. But what about the potential downside?
Critics have identified several concerns with using current forms of GPS technology. They have reservations about devices that merely cue us with written instructions, verbal cues, and maps that update in real time. Nicholas Carr warns of our susceptibility to automation bias and complacency, psychological outcomes that can lead people to do foolish things, like ignoring common sense and driving a car into a lake.30 Hubert Dreyfus and Sean Kelly lament that it is “dehumanizing” to succumb to GPS orientation because it “trivializes the art of navigation” and leaves us without a rich sense of where we are and where we are going. Both of these issues are germane.31 In principle, technical fixes can correct the mistakes that would guide zombified walkers into open sewer holes and oncoming traffic. The issue of orientation, and the value of both knowing and even not knowing where one stands in relation to the physical environment and to others, is a more vexing existential and social problem. Being lost and struggling with the uncertainty may provide us with opportunities to develop ourselves and our relationships.
Pfeiffer himself recognizes this dilemma. He told Wired that he hopes his technology can help liberate people from the tyranny of walking around with their downcast eyes buried in smartphone maps. But he also admitted that “when freed from the responsibility of navigating . . . most of his volunteers wanted to check email as they walked.”32 At stake, here, is the risk of unintentionally turning the current dream of autonomous vehicles into a model for locomotion writ large. While the hype surrounding driverless cars focuses on many intended benefits—fewer accidents, greener environmental impact, less congestion, and furthering the shift to communal transportation—we should not lose sight of the fact that consumers are being wooed with utopian images of time management. Freed from the burden of needing to concentrate on the road, we are sold on the hope of having more productive commutes. Instead of engaging in precarious (and often illegal) acts of distracted driving, we will supposedly tend to our correspondence obligations in a calm and civilized way. Hallelujah, we will text, email, and hop on social media in our “private office on wheels” just as if we were on a bus, train, or plane—but, thankfully, without having to deal with pesky strangers.
What Pfeiffer’s subversive subjects show is that a designer’s intentions alone do not determine how consumers use technology.33 In a world where social and professional expectations pressure people to be online, folks will be tempted to exploit newly found openings in their schedules to satisfy the always-on, no-freedom-to-be-off requirement. And when industries get a sense that people have more time on their hands to attend to work-related activities, they will ratchet up their expectations for how productive their employees need to be. Such pressure disincentivizes us from pursuing balanced lives and makes a mockery of the cliché that if you do not like a technology, simply do not use it. Indeed, just as historical decisions about building infrastructure to support an automotive culture have made it untenable for many people in the United States to walk or ride their bicycles to work, shifts in the digital ecosystem can make it harder for us to take an enjoyable stroll.
The prospect of being further chained to our devices is disturbing. But the thought of outsourcing our physical abilities just to free up attention raises a more disconcerting problem—one with deep psychological and metaphysical consequences. Actuated navigation is not just a process that turns voluntary into involuntary behavior. If done habitually, it is an invitation to dissociate from our physicality and objectify our incarnate bodies as mere commodities. To see why this is the case, we need to consider some basic ways bodies and technologies interact.
Bodies, of course, come in various shapes and sizes and have different abilities. Many people rely on prosthetics to move, such as canes, walkers, wheelchairs, and artificial limbs. These can be deeply embodied tools that expand a person’s sense of agency—especially when society commits to the resources needed for them to be widely accessible and used effectively, and embraces a sense of justice that makes it abundantly clear that it is wrong to hassle anyone for relying on them.
Let us think about this in what philosophers call phenomenological terms. As the French thinker Maurice Merleau-Ponty famously argues, a blind person cannot use a cane to see colors.34 The act of tapping just cannot reveal how gray a street looks—at least, not yet. But the technology can expand perceptual abilities by “extending the scope and active radius of touch and providing a parallel to sight.”35 Indeed, the person who becomes an expert at using a cane experiences the stick as a direct extension of her being—more like a sense organ attuned to the world than an external object that requires attention-grabbing, mechanical movements to master.
In a similar way, a seasoned driver feels that a car is an extension of herself. She gets in, cranks up the tunes, navigates on the highway while singing along, and arrives at her destination delighted that she became one with the vehicle and intuitively exhibited skill. By contrast, a beginner’s journey involves deliberating about all kinds of things and feeling a pronounced sense of separateness. Beyond needing to pause to consider who gets the right of way at a four-way stop, she needs to engage in all kinds of abstract reflections—like explicitly focusing on placing her hands at the 9 and 3 o’clock positions before starting to drive (10 and 2 was the standard advice before airbags).
The prosthetic and driving examples show that we are quite adept at using technology to expand our embodied relation to the world, and with it, our senses of identity. In a suburban area, for example, it is easier for the person who owns or leases a car to see herself as independent than it is for someone who depends on public transportation and is subservient to a schedule that other people set. While this particular case may be objectionable from a moral point of view (not everyone can afford to opt out of public transportation, and it may be environmentally wrong not to use it), the basic phenomenology of enlarged capacities shows that it is a mistake to see so-called “cyborg” fusions as inherently alienating.36
The question, then, is what distinguishes outsourcing walking from getting around with the help of a cane or car. That is easy to answer in the case of malicious hacking. If a third party used a version of Pfeiffer’s experiment to take over another person’s body and move it in directions its owner did not want to go, individual autonomy would be violated. But if we freely choose a destination and actuated navigation helps us get there straightaway without any imposed stops, our autonomy apparently would be respected.
But we must think more carefully about the logic underlying the outsourcing. Turning to outsourced walking for the purpose of freeing up our attention is an act that so strongly privileges mental activity—say, the conversation we are having with the person walking next to us or via text—that it effectively treats the body as nothing more than an impediment that needs to be overcome. Our bodily engagement with the physical world could then be viewed as a logistical and navigational transaction cost to be minimized, even eliminated if possible.
By this logic, we should not just give up control of a single physical ability. We should willingly delegate all kinds of other movements that prevent us from being totally engrossed in intellectual activities: chewing, showering, shaving, etc., are nothing but corporeal subversions that get in the way of more elevated affairs. Perhaps even the effort to raise our cheeks to smile is a waste. To avoid opportunity costs, why not eliminate that too, and purchase an app that triggers the requisite movements when it detects patterns that make smiling appropriate? And if that is where we draw the line, why is that the case? Is it because smiling is an essential component of our unique identities and we want to avoid becoming tragic figures like Batman’s enemy the Joker—a villain who appears existentially menacing because his face is forever frozen and incapable of fully conveying expression?
Now, some of you might be fine outsourcing as much bodily movement as possible and think the very idea of imposing limits is ludicrously old-fashioned. Perhaps you yearn for the day you can upload your consciousness into a machine and be rid entirely of your pesky body. For some “transhumanists” this is indeed the moment when we finally can evolve beyond recognizable human limits and start living the good life.37 Futurist Ray Kurzweil, Director of Engineering at Google, predicts that by 2030 “our brains will be able to connect directly to the cloud” and not too long after “we’ll also be able to fully back up our brains.”38
But others will feel diminished by an autopilot dualism that makes our bodies mere cogs in the machine of our mental life. If you fall into this camp, it is empowering to move beyond gut feelings and vague impressions of discomfort and figure out exactly what the basis of your opposition is. We hope this Essay helps. We would like it to add to your go-to arsenal of supporting concepts, arguments, and examples.
Even if you think that uploading your mind is a good idea, you might want to pause to be sure you have carefully thought about what is in store if you do. Although it may not be immediately obvious, the optimizing logic that makes it attractive to delegate away bodily functions applies to mental operations as well. And this means once the outsourcing spiral commences, you might regret where it ends.
There are many ways a purely mental life could be lived. In ancient Greece, for example, Aristotle depicted God as an immaterial Prime Mover who only thinks about his own eternal thinking.39 But for our purposes, a decent place to start—as an intuition pump, if nothing else—is the scenario that is depicted in The Matrix: Imagine human bodies are tethered to vats while human minds live virtual lives in a programmed simulation of our current world. Within the simulation, human beings still grapple with all of the same physical interactions that we currently do. They climb stairs. They open doors. They cook food.
Why does this familiar narrative persist? A compelling explanation is that the programmers recognized that physical structure is necessary for our mental life to be satisfying and meaningful. Note, however, that the optimizing logic we have discussed in this Essay could persist. If so, there would be a desire for further reductions in transaction costs and more easily obtained bliss. Where would that lead us? A vicious circle where outsourcing occurs in the virtual world too? An ever-narrowing spiral? But to what end?
The rub, here, is that if autonomy is retained we still presumably have work to do in making decisions about our purely mental lives. If we retain free will, we still presumably have to experiment with different kinds of experiences to form our preferences and figure out what we like. And we still presumably have to learn to develop interesting beliefs and contested knowledge. But making decisions, experimenting, and learning (among other mental processes) are costly endeavors, and again, the relentless optimizing logic would press toward minimizing and, if possible, eliminating these costs.
Again, we must ask, to what end? The answer, it seems, is cheap bliss.
We now question the premise developed throughout this Essay. Would a Coasean-Taylorist-Hedonic Utopia really be utopian? You have sensed our doubts, to be sure. It is not an easy question, as Nozick’s thought experiment and the ensuing debates demonstrate. Empirically speaking, some people are hedonists; others are not. Some people would plug into the experience machine; others would not. There are many reasons one might question those decisions, ranging from concerns that people are “fighting the hypothetical”40 to arguments that we are asking the wrong question in the first place because it does not matter what people would choose or what they believe (for example, because their preferences and beliefs are themselves contingent and learned or because there are deeper, antecedent metaphysical questions that need answers first).
Putting those objections aside for the sake of argument, it seems reasonable to conclude that one person’s utopia might be another’s dystopia. For those who would plug into the experience machine, the Coasean-Taylorist-Hedonic world we have described might well be ideal.
But what about those who would not plug into the experience machine? Perhaps the Coasean-Taylorist-Hedonic world would be good for them too, and they would never know the difference, so there is nothing to worry about. Moreover, it is not clear who these objectors might be. In the present, we might find some of them: people who would reject the opportunity to plug in. But perhaps they would do so for mistaken reasons, for example because they did not trust the machine to perform as perfectly as the thought experiment stipulates. Such a reason would be “fighting the hypothetical,” as law professors like to say, and that is not allowed because it fails to get to the heart of the actual issue raised. Yet even if some people in the present would reject the opportunity to plug into the experience machine for genuinely acceptable reasons, it is hard to imagine finding any such people in the future. In moving from the present to the future, there will be many gradual shifts in beliefs, preferences, and values as people become accustomed to both the means and the ends.
And this is why we must confront the issue today. To preserve the opportunity for future objectors to exist—if this is something we care about, which itself is an interesting philosophical question—we might need to take actions to preserve underdetermined environments and the freedom to “be off,” which we use loosely to refer to the freedom to be free from the types of techno-social engineering that lead to the Coasean-Taylorist-Hedonic world.
Consider a different way to frame our concern. Humans are naturally inefficient. We are very costly beings to sustain. One way to understand the power of the Coasean-Taylorist visions is that both entail minimization of various costs associated with humans being human. For humanists, this is obviously troubling.
Optimal happiness is, of course, the selling point, the attraction that makes the world we have imagined plausibly utopian. In this world, human cognition and attention are, at least for the vast majority of people, purely consumptive and governed by the happiness principle. Satiation and entertainment are virtual, programmed, and optimal. This world is similar to The Matrix but different in critical ways. It is not about enslavement by sentient machines. Rather, it is about enslavement by our own governing principles and thus by ourselves.
. This Essay is written for the Program on Understanding Law, Science, and Evidence (PULSE) Conference (UCLA Law, Apr. 2016). The Essay builds on prior work and draws on excerpts from our forthcoming book. See generally [small-caps]Brett Frischmann & Evan Selinger, Being Human in the Twenty-First Century[end-small-caps] (forthcoming 2017); Brett Frischmann, Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings (Cardozo Legal Studies Research Paper No. 441, 2014), http://ssrn.com/abstract=2499760 [https://perma.cc/M985-5AD2]; Brett Frischmann, Thoughts on Techno-Social Engineering of Humans and the Freedom to Be Off (or Free From Such Engineering), 17 [small-caps]Theoretical Inquiries in L.[end-small-caps] 535 (2016), http://www7.tau.ac.il/ojs/index.php/til/article/view/1430 [hereinafter Frischmann, Thoughts]; Evan Selinger & Brett Frischmann, Will the Internet of Things Result in Predictable People?, [small-caps]Guardian[end-small-caps] (Aug. 10, 2015, 11:56 EDT), http://www.theguardian.com/technology/2015/aug/10/internet-of-things-predictable-people [https://perma.cc/DT4Q-7DNK].
. Of course, this is not always true. The concepts of underdetermining and overdetermining obviously depend upon a shared baseline—under and over what? The baseline depends upon society’s normative values.
. We can say quite a bit more about why this is important. On the social option value of open, underdetermined infrastructures, institutions, and environments, see [small-caps]Brett Frischmann, Infrastructure: The Social Value of Shared Resources[end-small-caps] 227–53 (2012); see also Brett Frischmann, Speech, Spillovers, and the First Amendment, 2008 [small-caps]U. Chi. Legal F.[end-small-caps] 301 (on how the First Amendment sustains a spillover-rich environment); [small-caps]Julie E. Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice[end-small-caps] ch. 9 (2012), http://www.juliecohen.com/page5.php (discussing semantic discontinuity and the importance of room for play).
. “Innovation rivals capitalism among modern American gods, and it is blasphemous to question progress or attempt to slow down innovation and consider which path society might choose.” Frischmann, Thoughts, supra note 1; see also Brett Frischmann & Mark McKenna, Comparative Analysis of (Innovation) Failures and Institutions in Context (2015) (unpublished manuscript) (on file with author) (explaining the incredible variety of normative objectives that are, or can be, conflated in the buzzword of innovation, and how appeals to innovation in the abstract, without more specific normative commitments, are ultimately useless and often merely (dis)guises for other objectives, such as a commitment to capitalism or laissez-faire).
. In our forthcoming book, we call it the Experience Machine n.0 or the distributed experience machine.
. We focus on the imagined world without transaction costs. See R. H. Coase, The Problem of Social Cost, 3 [small-caps]J.L. & Econ.[end-small-caps] 1, 1–44 (1960).
. See generally [small-caps]Frederick Winslow Taylor, Shop Management[end-small-caps] (1912).
. See [small-caps]Robert Nozick, Anarchy, State, and Utopia[end-small-caps] 42–45 (1974).
. Coase, supra note 6, at 1–44.
. [small-caps]George J. Stigler, The Theory of Price[end-small-caps] 113 (3d ed. 1966); George J. Stigler, The Law and Economics of Public Policy: A Plea to the Scholars, 1 [small-caps]J. Legal Stud.[end-small-caps] 1, 1–12 (1972); George J. Stigler, Two Notes on the Coase Theorem, 99 [small-caps]Yale L.J.[end-small-caps] 631, 631–33 (1989).
. See Brett M. Frischmann & Alain Marciano, Understanding The Problem of Social Cost, 11 [small-caps]J. Institutional Econ.[end-small-caps] 329, 329–52 (2014).
. Of course, Coase recognized the benefit of reducing transaction costs in various contexts through technological, organizational, or other innovations. He did not, however, hold minimization of transaction costs as a primary normative end.
. Fetishizing innovation itself may be one of the influences we have conflated. See supra note 4.
. See [small-caps]Aaron Wright & Primavera De Filippi, Chain[end-small-caps] (forthcoming 2017).
. Coase, supra note 6, at 15–19.
. Harold Demsetz, The Cost of Transacting, 82 [small-caps]Q.J. Econ.[end-small-caps] 33, 33–53 (1968); Harold Demsetz, Toward a Theory of Property Rights, 57 [small-caps]Am. Econ. Rev.[end-small-caps] 347, 347–59 (1967).
. See, e.g., [small-caps]Cass Sunstein, Choosing Not to Choose[end-small-caps] (2015); [small-caps]Cass Sunstein, Why Nudge?[end-small-caps] (2012); [small-caps]Richard H. Thaler & Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness[end-small-caps] (2008); On Amir & Orly Lobel, Stumble, Predict, Nudge: How Behavioral Economics Informs Law and Policy, 108 [small-caps]Colum. L. Rev.[end-small-caps] 2098 (2008).
. Notably, many examples in the nudging literature—such as the GPS—involve technological means for minimizing transaction (and other) costs.
. This Subpart is excerpted with some modifications from another paper. See Frischmann, Thoughts, supra note 1; see also Brett Frischmann & Evan Selinger, Engineering Humans With Contracts (unpublished manuscript) (on file with author) (developing a Taylorist theory of electronic contracting and drawing a connection between Taylor’s “time and motion studies” and eye tracking and related web design studies).
. See [small-caps]Taylor[end-small-caps], supra note 7.
. Some would call it technique and reserve technology for systems of applied knowledge that employ a material component. We do not have such a limited definition in mind, however, and this is not the place to debate the issue.
. See [small-caps]Organisation for Econ. Co-operation & Dev., Data Driven Innovation: Big Data for Growth and Well-Being[end-small-caps] (2015), http://dx.doi.org/10.1787/9789264229358-en [https://perma.cc/YB6R-EP3R].
. See, e.g., Ifeoma Ajunwa et al., Limitless Worker Surveillance, 105 [small-caps]Calif. L. Rev.[end-small-caps] (forthcoming 2017) (manuscript at 4–13), http://ssrn.com/abstract=2746211 [https://perma.cc/RU9K-YCCU] (describing this evolution).
. This is a topic we explore in depth in our forthcoming book. See [small-caps]Frischmann & Selinger[end-small-caps], supra note 1. Of course, techno-social engineering is nothing new. Humans have been shaped by technology ever since tools were invented. But this fact too easily preempts evaluation. See id.
. There is a rich history and debate surrounding the example of workplaces in which automation and management practices dehumanize workers and treat them like machines. See generally [small-caps]Simon Head, The New Ruthless Economy: Work & Power in the Digital Age [end-small-caps](2003); [small-caps]David F. Noble, Forces of Production: A Social History of Industrial Automation[end-small-caps] (1984).
. Again, we might return to nudging. Although its advocates might not associate their work with Taylorism, the connection is difficult to avoid. As we noted, nudging entails engineering the choice architecture—often, an environmental intervention—to impact human behavior, with efficiency, and in many cases productivity, as the ultimate objective. Workplace nudging was practiced well before the nudging agenda gained traction, and it is only growing more extensive.
. Consider the more familiar example of electronic contracting. Online contracts may provide you with a formal opportunity to exercise your freedom to choose whether to click “I agree,” but that formal opportunity does not necessarily equate with practical freedom. If the online electronic contracting environment is designed to nudge consumers to behave automatically, like stimulus-response machines, then the freedom to choose may be illusory. See Frischmann & Selinger, supra note 19.
. Another answer is: a global tragedy of the commons, which we refer to as humanity’s techno-social dilemma.
. The next dozen paragraphs are excerpted with some modifications from another article. Evan Selinger, Opinion, Automating Walking Is the First Step to a Dystopian Nightmare, [small-caps]Wired [end-small-caps](May 20, 2015), https://web.archive.org/web/20150521074617/http://www.wired.co.uk/news/archive/2015-05/20/the-future-of-walking.
. [small-caps]Nicholas Carr, The Glass Cage[end-small-caps] 67–85 (2014).
. [small-caps]Hubert Dreyfus & Sean Dorrance Kelly, All Things Shining: Reading the Western Classics to Find Meaning in a Secular Age [end-small-caps](2011).
. Nick Stockton, Scientists Are Using Electrodes to Remote-Control People, [small-caps]Wired: Science [end-small-caps](Apr. 20, 2015, 7:00 AM), http://www.wired.com/2015/04/scientists-using-electrodes-remote-control-people [https://perma.cc/CV3Y-3HWD].
. For further discussion of this issue, see generally Don Ihde, The Designer Fallacy and the Technological Imagination, in [small-caps]Philosophy and Design: From Engineering to Architecture [end-small-caps]51 (Peter E. Vermaas et al. eds., 2008).
. [small-caps]Maurice Merleau-Ponty, Phenomenology of Perception[end-small-caps] (Colin Smith trans., Routledge & Kegan Paul Ltd. 8th ed. 1978) (1962).
. See id.
. Donna Haraway, A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century, in [small-caps]The Cybercultures Reader [end-small-caps]291, 292–93, 295, 302–03, 310–16 (David Bell & Barbara M. Kennedy eds., 2d ed. 2001).
. A fine primer on what the contested term “transhumanist” (literally meaning “beyond the human”) entails is Nick Bostrom’s Transhumanist Values, http://www.nickbostrom.com/ethics/values.html [https://perma.cc/4N4K-FLX4]. For a more polemical definition, see Zoltan Istvan, A New Generation of Transhumanists Is Emerging, [small-caps]Huffington Post[end-small-caps] (May 10, 2014), http://www.huffingtonpost.com/zoltan-istvan/a-new-generation-of-trans_b_4921319.html [https://perma.cc/H4Y7-H4JN]. The locus classicus for defining the “Singularity” and elaborating upon its implications is [small-caps]Ray Kurzweil, The Singularity Is Near[end-small-caps] (2006). For a discussion about augmenting our weak biology with more powerful technology, see Luke Mason, Would You Swap a Healthy Eye for a Bionic One With Additional Functionality?, [small-caps]Wired[end-small-caps] (Sept. 2, 2012), http://www.wired.co.uk/article/seeing-beyond-human-transhumanism [https://perma.cc/PZ7V-HH2Y].
. Jillian Eugenios, Ray Kurzweil: Humans Will Be Hybrids by 2030, [small-caps]CNN: Money[end-small-caps] (June 4, 2015, 12:26 PM), http://money.cnn.com/2015/06/03/technology/ray-kurzweil-predictions [https://perma.cc/4C8W-LCLU].
. Aristotle discusses God in Metaphysics Book 12, sections seven and nine. See [small-caps]Aristotle, Metaphysics[end-small-caps] (W.D. Ross trans.) (350 B.C.E.), http://classics.mit.edu/Aristotle/metaphysics.html [https://perma.cc/D4ZY-RK7C].
. Fighting the hypothetical occurs when someone questions the premises that are baked into the thought experiment. For example, one might worry that the experience machine would not work as promised. Of course, this defeats the point of the thought experiment, which is to sideline such concerns and focus attention on the core philosophical question raised by Nozick. See [small-caps]John Bronsteen et al., Happiness and the Law[end-small-caps] 172–75 (2014) (suggesting that the experience machine thought experiment pumps “inadmissible intuitions”).