Just as they will change healthcare, manufacturing, and the military, robots have the potential to produce big changes in policing. We can expect that at least some robots used by the police in the future will be artificially intelligent machines capable of using legitimate coercive force against human beings. Police robots may decrease dangers to police officers by removing them from potentially volatile situations. Those suspected of crimes may also risk less injury if robots can assist the police in conducting safer detentions, arrests, and searches. At the same time, however, the use of robots introduces new questions about how the law and democratic norms should guide policing decisions—questions which have yet to be addressed in any systematic way. Deciding how to design, regulate, or even prohibit some uses of police robots requires a regulatory agenda now, one that addresses the foreseeable problems of the future.
In July 2016, Dallas police chief David Brown decided to end a violent standoff with Micah Johnson, who had fatally shot five officers and wounded several more, in an unusual way. As a makeshift solution, the police attached a pound of the plastic explosive C4 to a Remotec F-5, a robot designed for bomb disposal. The intentional detonation of the explosive killed Johnson, and was the first deliberate use by American police of a robot armed with deadly force.
This improvised solution, however, involved a remotely controlled robot. The robot was not designed to harm people, and it lacked any ability to make independent decisions. Nevertheless, the use of the robot in Dallas comes at a time when many people are predicting that sophisticated police robots will one day become “useful, cheap, and ubiquitous.” Hundreds of robots—most of them made for bomb disposal—are already in the possession of local police departments. Many such robots will soon employ artificial intelligence and will be expected to operate with a degree of independence. The near certain use of these robots by the police raises questions about what sorts of limits and regulations should be imposed on their use.
Consider a future in which robots could supplement or replace some basic police functions. An autonomous police vehicle patrols a neighborhood and briefly detains a person deemed suspicious so that an officer miles away can subject him to questioning. During the detention, the vehicle dispatches a micro drone to obtain a DNA identification sample. Or consider the possibility of thousands of autonomous police drones the size of insects flying through a city without detection to conduct surveillance and carrying nano-chemicals to disable dangerous suspects. Or imagine social patrol robots that dispense advice to the lost, record surveillance data, and call in other robots to assist in unexpectedly hostile situations.
Rapid changes in technology have significantly shifted how police perform their jobs. The squad car and the two-way radio provided the police with a geographic range and communication ability far superior to traditional foot patrol. Robots represent the next leap. Robot staff have already attended to guests at the Henn-na Hotel in Japan, delivering towels and coffee, while elsewhere robot bartenders serve drinks and still other robots deliver pizza. Robot journalists create online content for Thomson Reuters and Yahoo. A novel co-written with an artificial intelligence (AI) program advanced to the first stage of a Japanese literary contest. Semiautonomous Reaper unmanned drones carry Hellfire missiles. In the near future, robots will probably serve as our delivery drivers and our garbage collectors. Robots like the Japanese Robear will probably provide eldercare to seniors. Pepper the robot will be an anthropomorphic companion that provides us with emotional care.
As for policing, Dubai plans to introduce patrol robots with artificial intelligence to its streets by 2020. The Chinese AnBot can independently patrol, and upon a remote operator’s command, use its mechanical arm to grab people as well as deploy its “electrically charged riot control tool.” Equipped with infrared cameras, microphones, and license plate readers, the American Knightscope security robot can patrol independently, for $7.00 an hour. Machines endowed with artificial intelligence and the capacity for independent action will have profound impacts on policing. To be sure, advances in technology have always given police new powers. Robots, however, may be different in kind. Like the internet, robots raise new issues and challenges to the regulation of policing.
Police robots raise special questions because of the powers we entrust to the police. If the development of military robots provides any guidance, then we can expect some police robots to be artificially intelligent machines capable of using legitimate coercive force against human beings. Military robots will possess similar powers, but with an important difference: unlike their military counterparts, police robots will not be expected to exercise deadly force against a hostile enemy. More importantly, constitutional law and democratic norms constrain the police. Deciding how to design, regulate, or even prohibit some uses of police robots requires a regulatory agenda now, one that addresses the foreseeable problems of the future.
Sophisticated and inexpensive robotics will be attractive to the police just as they have been to the military. The federal government is already spending significant amounts of money and attention on robotics research. Those robotic applications will make their way to policing, and federal monies for robotics will become available to local law enforcement agencies just as they have in the case of recent technologies like body cameras, biometrics, and big data analysis. What is more, police departments will likely argue that they must be prepared for a robotics arms race against criminals and terrorists who could 3D print an army of weaponized micro-drones.
This Article considers the law and policy implications of a future where police robots are sophisticated, cheap, and widespread. In particular, I focus on questions raised by the use of robots able to use coercive force, as opposed to robots with surveillance or other support functions. Drawing upon the rapidly developing body of robotics law scholarship, as well as upon technological advances in military robotics—from which policing will surely borrow—we can anticipate the kinds of regulatory challenges we will face with the future of police robots.
The definition of a police robot depends on the definition of the term robot itself. Popular depictions of robots going back to the 1920s suggest robots are machines in humanoid form; think of the Maschinenmensch in Fritz Lang’s 1927 film Metropolis, or Rosie the maid robot in The Jetsons. Yet robots neither have to look like people nor behave in any specific way. Robots can look like humans, animals, or insects; they can provide information, fire upon an enemy, or engage in financial trades. Indeed, there is no single definition of a “robot.”
An emerging consensus has suggested, however, that a robot be defined as any machine that can collect information, process it, and use it to act upon the world. These qualities vary widely from one robot to another. This sense-think-act model might describe anything from a “bot” that makes independent online purchases to an eldercare robot that provides emotional comfort or assists with lifting objects. In appearance, robots could take any form. Some military robots, for instance, may assume the shape of four-legged centaurs to enhance stability. Thus, if a robot is a machine that senses, processes, and acts upon information, a police robot is one that does so in order to perform a task traditionally assumed by human police officers.
That robots might look alive and act in unpredictable ways also distinguishes them from other technologies. Those special attributes of robots might counsel robot-specific policies. Robotics law scholar Ryan Calo has identified these qualities as “embodiment, emergence, and social valence.”
First, the physicality of robots enables them to translate their data analysis into action. Robots act upon the world: They can lift objects, transport people, create art, and engage in commerce. And unlike other robots that may cause real-world harm through accident, police robots—at least some of them—will be designed with the capacity to exercise deliberate coercive force. That physicality creates new operational possibilities for the police, but it also raises new types of concerns when autonomous machines may be able to harm people by design.
Second, robots with artificial intelligence will behave in ways that are not necessarily predictable to their human creators. Some robots may just replace human labor in jobs that are repetitive and dull, but others will be capable of adapting to their environment, learning from mistakes, and becoming increasingly skilled in their assigned tasks. At one end of the spectrum, a robot may be a glorified vacuum cleaner, designed to address the drudgery of housecleaning. At the other end, a robot’s artificial intelligence may be designed not only to act upon processed information, but also to improve its performance over time by learning from past mistakes. Not all of this learning will be welcome. Microsoft quickly disabled its social chatbot Tay after it incorporated online responses and began spouting racist speech and calls for genocide. Because artificial intelligence will drive the behavior of robots, robots may behave in ways that we cannot predict, or even explain afterwards.
Artificial intelligence by itself is not unique to robotics. We can already feel the impact of big data—applying complex computer algorithms to massive sets of digitized data—in fields like finance, healthcare, and even policing. A number of police departments already use artificial intelligence in software that tries to identify future geographic locations where crime will occur, to predict which individuals may be at highest risk for violent crime commission or victimization, and to identify which among the billions of daily Internet posts amount to suspicious behavior. The police employ these big data tools, however, as guidance for human decisionmaking. Robots with artificial intelligence are distinct because they would be able to translate their analysis of data into physical action.
Third, robots are different from other technologies because they are in appearance somewhere between inanimate objects and humans. No robot today will fool a person into believing that it is alive, but many robots do not seem completely inert, either. Research suggests that we tend to approach robots as if they had human characteristics. Exploiting the human-like nature of some robots could be useful. We could deliberately design caretaking robots to be physically cute—with rounded shapes and humanoid faces, for example—to maximize their benefits, whether for children or the elderly.
The ambivalence we feel toward robots might also counsel new legal characterizations particular to them. We may think that a person smashing his lawn mower merely has an ill temper, but that a person abusing a social robot is cruel. If robots are designed to serve as pets, caregivers or friends, could robots be the victims of criminal assault, abuse, or even rape and murder? In this way, the law may need to extend some legal protections to robots, for some of the same reasons we criminalize animal cruelty. We prohibit the infliction of needless violence against some animals because such behavior reflects something depraved about the perpetrator. Though we may not mistake robots for humans yet, we may soon reach a point where machines endowed with artificial intelligence may need protection from human abuse.
The future of robotic policing can now be found in developments within the military. The military has used remote controlled robots for more than a decade, and the Department of Defense is preparing for a future in which nearly autonomous robots will play a central role in warfare. Indeed, no institution is devoting more money and attention to robotics than the military. Peter W. Singer has chronicled these changes in great detail, and argues that military robots will change not just the tools we use to fight wars, but the very nature of war itself.
Robots are in use in active conflicts around the world. Predators, Global Hawks, and Ravens have flown over Afghanistan, and MARCBOTs have aided soldiers on the ground in Iraq. A clear advantage of robots is their ability to act as “force multipliers.” Future armies may assemble detachments that include as few as 150 human soldiers and as many as 2000 robots. More importantly, however, autonomous machines capable of deadly force are also likely to change military tactics and strategy.
Unlike people, robots can go places without compromising the safety of soldiers. A “sensor”—shorthand for a drone’s human operator—can direct the launch of a Predator’s Hellfire missile in Afghanistan without leaving his seat in Las Vegas. Robots can look for land mines and protect the lives of soldiers who would have had to assume minesweeping responsibilities themselves. Unmanned submarines can launch smaller autonomous robots to look for hostile ships while drawing less attention to themselves than human-operated subs. Medbots—robotic ambulances—of the future may find wounded soldiers, retrieve them, and autonomously diagnose and treat them while retreating from the battlefield.
Robots can also behave in ways that humans cannot easily mimic. Scientists are always looking to enhance soldier stamina. In the 1940s, a solution was amphetamines; today, it is Adderall. But robots never lose their accuracy because of fatigue, boredom, or stress. Robots do not harbor revenge or rage. Robotics researchers are working on autonomous vehicles for the air, ocean, and land that can operate for days and weeks on end.
That relentless attention to task may have other strategic benefits as well, although how exactly remains unclear. How will human combatants facing tireless robotic soldiers feel? Enemy forces may buckle in the face of robotic soldiers that cannot die and do not retreat. Or, the opposite may come true; the presence of robotic soldiers may galvanize enemy forces in a “war against the machines.” Alternatively, enemy forces may simply fight back with their own robots.
The unique characteristics of robots will also shape fundamental military tactics. One application of robotics imagines “swarms” of small robots that move in the same way birds and insects do: in unison, though without a defined leader. In nature, individual bees or birds do not rely on high degrees of intelligence to avoid crashing into each other; rather, each member follows simple rules. Robotic swarms can work this way, without sophisticated programming. In a swarm, robots could assemble together rapidly as a unit, and then just as quickly disperse to continue with surveillance missions.
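The simple local rules described above are often illustrated with the classic “boids” model of flocking: separation, alignment, and cohesion. The following is a rough sketch of that idea—the agents, weights, and sensing radius are invented purely for illustration and are not drawn from any actual military or police system—showing that coordinated group movement can emerge with no leader and no central controller.

```python
# Toy sketch of decentralized swarm coordination ("boids" rules).
# Each agent updates itself using only nearby agents' positions and
# velocities; no agent directs the others. All constants are arbitrary.
import math

class Agent:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def step(agents, radius=5.0):
    """Advance every agent one tick using purely local rules."""
    updates = []
    for a in agents:
        neighbors = [b for b in agents
                     if b is not a and math.hypot(b.x - a.x, b.y - a.y) < radius]
        sx = sy = ax = ay = cx = cy = 0.0
        for n in neighbors:
            sx += a.x - n.x          # separation: steer away from neighbors
            sy += a.y - n.y
            ax += n.vx               # alignment: match neighbors' heading
            ay += n.vy
            cx += n.x                # cohesion: drift toward the local center
            cy += n.y
        if neighbors:
            k = len(neighbors)
            vx = a.vx + 0.05 * sx + 0.05 * (ax / k - a.vx) + 0.01 * (cx / k - a.x)
            vy = a.vy + 0.05 * sy + 0.05 * (ay / k - a.vy) + 0.01 * (cy / k - a.y)
        else:
            vx, vy = a.vx, a.vy
        updates.append((a.x + vx, a.y + vy, vx, vy))
    for a, (x, y, vx, vy) in zip(agents, updates):
        a.x, a.y, a.vx, a.vy = x, y, vx, vy

# Four agents starting in a line, all heading the same direction.
swarm = [Agent(i * 1.0, 0.0, 1.0, 0.0) for i in range(4)]
for _ in range(10):
    step(swarm)
```

Even in this toy, the swarm spreads out (separation) while continuing to move as a unit (alignment and cohesion)—the same leaderless logic that lets a swarm assemble rapidly and disperse just as quickly.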
What develops first in the military often finds its way to domestic policing. There has long been a close relationship between the military and law enforcement, in both culture and institutions. The bureaucratic hierarchy in policing—adopting titles like sergeant, lieutenant, and captain—reflects the military’s influence. Not only are many rank-and-file officers former members of the military, but many police departments actively recruit from military ranks as well. We even use war metaphors to describe our domestic policing strategies.
This military influence extends to specific tactics and technologies used by the police. While the federal Posse Comitatus law forbids the use of the military for civilian policing, military equipment and training have trickled down to police departments through other means. For instance, the so-called “1033 Program,” part of the National Defense Authorization Act for Fiscal Year 1997, is the federal initiative that has transferred surplus military equipment such as MRAPs (Mine-Resistant, Ambush-Protected vehicles), grenade launchers, and amphibious tanks to local police departments. While the public may have been shocked at images of police officers wearing combat fatigues and carrying M16s during protests in Ferguson, Missouri in 2014, that department was little different from the hundreds of other police departments that had received military equipment transfers under the 1033 Program. Similarly, police SWAT teams, now common in police departments around the country, were created as specialized paramilitary groups. Former LAPD chief Daryl Gates, credited with establishing the first SWAT teams, brought in ex-Marines to help train these small groups of officers to act and dress like soldiers in volatile situations.
Imagine police robots that could surround a suspicious person or even halt a speeding car. This might take the form of a swarm of small robots, each less than a pound, designed to incapacitate a person by surrounding him and by using nonlethal force. Consider further that such a swarm would be capable of using some form of coercive force to prevent an unwillingly detained person from flight. A “Multi-Robot Pursuit System” that guides packs of robots to “search for and detect a non-cooperative human”—part of a Pentagon request for contractors—would surely be useful to the police.
Even if this use of robots is still just a concept, we can anticipate the kinds of legal and policy challenges that might arise. First, how much should humans remain “in the loop”—maintain some degree of involvement, in other words—in the use of police robots? Second, how much coercive force should we permit police robots to exercise? Third, how might the use of police robots affect legal determinations like reasonable force? Fourth, will police robot use further reinforce the social inequities in policing? Finally, how can we develop a uniform approach to policing police robots?
How much should police delegate decisions about force and coercion to their own robots? Considering the robotics currently on the market, the idea that we might lose control over them seems almost laughable. No consumer today fears their housekeeping Roomba, and even the most advanced private security robot available now could be disabled by a swift kick. But technology changes fast. The Pentagon’s Autonomous Research Pilot Initiative funds research for algorithms that will “increase a system’s level of autonomy.” Artificial intelligence experts hint that we might see humanlike artificial intelligence within a few decades, not a century. Within our lifetime, robots might not only seem “human” in their basic intelligence, but emotional, perhaps even “superhuman.”
Not every robot will display such capabilities. Today, some machines the military deems “robots”—like the widely used Talon—are controlled completely by remote human operators. Other robots use artificial intelligence to operate independently for limited tasks; the remote operator of a Global Hawk, for instance, “just clicks” a computer mouse to tell the robot to taxi and take off. A fully autonomous robot would need no human input at all once someone has defined the robot’s mission.
Greater degrees of autonomy in military robotics seem inevitable. One of the efficiencies gained by military robots is that large numbers of them will be capable of independent action with one human actor’s oversight. Imagine a phalanx of military robots controlled by one human operator, perhaps thousands of miles away. As a result, fewer human lives are placed at risk. Such robots would not increase efficiency if each required an independent human operator.
On the battlefield, some decisions must be made within fractions of a second. Waiting for human approval or veto may be critical time wasted, particularly if a robot must calculate how and whether to launch a counterattack. Not only might there be insufficient time to oversee a single robotic decision, but there may be little opportunity for a human operator to closely supervise the split second decisions of the dozens of robots over which he has control. Human involvement in such a case might take the form of a veto power, if at all.
Current military research already supports the development of robots with greater degrees of autonomy. One research goal of the Pentagon is to establish linked autonomous systems so that robots can communicate to one another in a rapidly changing environment. In the military, autonomous drones could scan a combat area and communicate with ground robots to find suspicious places or people.
The possibility that some robots capable of hurting or killing people will be capable of complex, independent action raises concerns, however. In the near future, robots could make decisions in ways that we cannot easily control or understand. The question of human involvement is complicated because artificial intelligence itself is growing more complex. Assume we require that a human must assess a robot’s determination to use coercive force. Deciding whether a machine with artificial intelligence has made a good decision may not be easy, since the algorithm’s processes may not be totally intelligible to a human operator. Even if we have access to an algorithm’s source code, we still might not know how or why it reached its decision. Engineers at Google, for instance, recently conceded that they do not fully understand how Google’s successful RankBrain AI works, only that it works well. Requiring a human “in the loop” may mean little if how the robot came to its conclusion remains opaque to the human being in charge.
Armed robots with some degree of autonomy are also likely to be vulnerable to criminal interference (hacking) as well as malfunction. Our current experience with the security of electronic devices provides little assurance otherwise. Security researchers have discovered vulnerabilities that make possible the hacking of “smart” cars, insulin monitors, thermometers, refrigerators, and locks.
For now, armed and fully autonomous robots are not a military reality, but they are a concern. Current military policy requires human involvement in any potentially lethal action. As Deputy Defense Secretary Robert O. Work commented in March 2016, the military “will not delegate lethal authority to a machine to make a decision.” Retaining some human involvement in armed military robots remains a “line in the sand.” But this is a policy restraint, not a technological one. That restraint may give way easily if another hostile nation or terrorist group decides to use lethal autonomous robots against American soldiers.
Translating these developments in military robotics to domestic policing requires little effort. The companies developing and producing robots can (and do) adapt them for military or law enforcement uses. Robots used for surveillance, investigation, and coercive force in Iraq and Afghanistan could easily be adapted to New York, Chicago, and Los Angeles.
Should a Taser or firearm-enabled police robot require human input before using force against a suspect? A vice president at iRobot, which partnered with Taser, once stated that there is “no way [we are] giving the robot the capability to use force on its own”—but that decision, like the Pentagon’s, is dictated by policy, not technology. The real question is whether a robot capable of assessing a dangerous situation and enabled to use a Taser or other weapon should be able to decide to do so without human input. As with the military, a ban on such police robots makes sense until policies are developed to address matters of control, security, and accountability.
Not only do we need to decide how much human beings should be involved in police robot decisionmaking, we also need to decide how heavily robots should be armed. From their beginnings in the nineteenth century, American police have acquired ever more sophisticated tools: first truncheons, then firearms, and now stun guns, pepper balls, tear gas, and long range acoustic device (LRAD) sound cannons. Police injure and sometimes kill people during their stops, arrests, and pursuits. When the circumstances warrant it, these are legally justifiable uses of force. If police authority rests in part on the authority to use coercive force, how much coercive, even lethal, force should a police robot possess?
Should a robot be able to exercise the same deadly force as a police officer does now? In 2007, iRobot announced a “strategic alliance” with Taser International, to develop a Taser-equipped version of its popular Packbot. Electric stun guns, capable of transmitting a shock of up to 50,000 volts, can temporarily disable suspects deemed dangerous and noncompliant. Tasers are considered less-than-lethal substitutes for guns. Indeed, police departments often view Taser adoption as a measure of reform.
If police robots can carry Tasers, can they also carry firearms or other lethal weapons? Any proposal to adopt the regular use of lethally armed robots—whether semiautonomous or remotely controlled—is likely to meet initial public resistance. There is already a public campaign against “killer robots” in the military, and the discomfort many people feel about lethal robotic soldiers will probably be even more pronounced when partially or fully autonomous lethal robots are proposed for police use within the United States.
There are at least two reasons to be skeptical that a prohibition against lethal police robots would persist. First, the line between lethal and nonlethal arms is not always clear. Even a Taser-enabled robot is one capable of lethal force. Though such deaths are relatively uncommon, people have died in incidents in which police used Tasers: at least forty-eight people in 2015. Some cases involve improper Taser use; others involve unknown physical vulnerabilities of the victims. Once we authorize police robots to use some degree of coercive force, we implicitly acknowledge that some uses of even less-than-lethal force will be lethal in practice.
A second reason to be skeptical about any prohibition on the regular use of lethally armed police robots is the future role of robots more generally. In the future, we will be surrounded by robots of all kinds at work (as coworkers), at home (as caregivers), and in leisure (as social or sexual companions). That world will also include robots involved in crime. Just as robots in the military reduce the need for soldiers to put themselves at risk, robots can provide the same safety and anonymity for someone interested in committing crime. An armed robot drug dealer could act as an autonomous vending machine able to defend itself against attack and destroy evidence in the event of discovery.
Once the first crimes are committed by robots armed with lethal force, police in the United States will almost certainly balk at any prohibitions on lethally armed police robots. Such prohibitions may find police support in countries like New Zealand and Britain, where most police are unarmed, as are most civilians. In the United States, however, lethally armed robots may become just another point in the development of weapons that the police will want to use.
The kinds of weapons police robots might adopt are matters of technology and policy, but the circumstances in which robots could use force against human suspects are legal ones. Imagine that a suspect temporarily detained by a police robot decides to start shooting at the robot. If the robot shoots back—and injures or kills the suspect—would that be legally defensible?
The answer will depend in part on how we classify robots under the law. Human police may legally resort to force, even deadly force, in the appropriate circumstances. Claims of excessive force against the police, whether in the context of an arrest, stop, or other detention, are judged by a standard of “objective reasonableness” under the Fourth Amendment. Deadly force may be used in situations where a suspect “poses a threat of serious physical harm, either to the officer or to others.”
Distinguishing between legally permissible and impermissible uses of force by the police is not always easy. The U.S. Supreme Court has avoided requiring any exclusive list of factors in assessing reasonableness. Rather, the Court has emphasized that the use of force analysis requires courts to “slosh” through the “factbound morass of ‘reasonableness.’” Moreover, considerable deference is accorded to the police, as the “calculus of reasonableness must embody allowance for the fact that police officers are often forced to make split-second judgments—in circumstances that are tense, uncertain, and rapidly evolving.” That reasonableness “must be judged from the perspective of a reasonable officer on the scene, rather than with the 20/20 vision of hindsight.” Finally, that assessment asks “whether the officers’ actions are ‘objectively reasonable’ in light of the facts and circumstances confronting them, without regard to their underlying intent or motivation.” The result is a “notoriously opaque and fact-dependent” doctrine that has become difficult for courts to articulate and police to incorporate into their training.
Even if the Fourth Amendment’s use of force doctrine were clearer, it still would not translate easily to the world of robotics. First, the high degree of deference given to police in the use of force context takes into account the fallible nature of human judgment in volatile situations with high degrees of stress and emotion. As a result, police decisions to use force, even deadly force, do not have to be correct, only objectively reasonable. Artificially intelligent machines capable of coercive force do not feel fear, disgust, or anger, and do not take offense. In this respect, robots might be more reliable than human beings in making split second decisions about whether to draw a weapon or use a stun gun. Does that mean we should expect a narrower set of circumstances for robotic reasonableness than we do for humans?
Second, and perhaps more importantly, the usual legal standards governing the use of force by the police assume the perspective of officers who fear for their lives or safety. Too little deference to the police may inhibit their decisions and result in more injuries to the police; too much deference, and the police may injure or kill people when such deaths could be avoided. In practice, courts have given considerable deference to officers’ stated beliefs that they felt their lives were in danger, even if those fears turn out to be mistaken after deadly force has been used.
What do these legal standards mean when a police robot confronts a person who appears to intend it harm? The law values life over property, yet robots might occupy a legal category that is neither purely property nor human. How we think of robots in these situations may depend on whatever analogies courts will adopt to characterize them: Are robots like wild animals, slaves, children, or something else?
If robots are treated merely as property, then police robots should not be permitted to defend themselves against human attack, even when acting in a policing role. But that conclusion is not obvious. An animated debate already exists as to whether cruelty to robots might be criminalized in the same way we criminalize cruelty to animals. The dismemberment of the hitchhiking “Hitchbot” in 2015 prompted new concerns about a future of unchecked robot abuse. While we might have many reasons to criminalize animal abuse, one reason may be to promote social values and to deter antisocial violence. We disavow deliberate unjustified violence to animals in part because to do otherwise would condone the human infliction of pain and suffering against other beings.
But permitting a robot to launch a counterpunch against an armed human poses its own tricky questions. Perhaps a robot could be permitted to exert proportional nonlethal force to defend itself from destruction. Assume further that a human must authorize such a use of force. In an optimal case, the suspect posing a threat might be temporarily disabled by nonlethal force until placed in handcuffs (or their future equivalent).
Consider too, that one day, police robots might be sent to confront criminal robots. What amount of force is permissible then? Can a police robot “kill” another robot? Because the robot’s actions would be attributable to the government, the permanent disabling of a threatening civilian robot would be a Fourth Amendment “seizure.” But what would a “reasonable” robotic seizure of another robot look like?
To be sure, police robots might remove some of the problems raised in confrontations between the police and the public. The use of robots in these situations may reduce the risk of harm to the police, if robots can safely subdue a dangerous suspect at a distance. And if police lives are not at immediate risk, detentions and arrests might be conducted with less risk to individuals as well.
Robots have the potential to make policing safer. That possibility, however, must be balanced against the mistakes, hacks, and malfunctions that will inevitably occur. Who will bear the responsibility for these mistakes, either because the threat was misjudged, or the force disproportionate? Should we blame the robot, the manufacturer, the software engineer, or the human operator (assuming there is one)?
Whether or not police robots use coercive force and do so with some degree of autonomy, the manner of their deployment raises a separate policy question. Robots may decrease dangers for the police and those detained by the police, but will they increase perceptions of policing unfairness? If arming robots is already becoming an active topic of debate, an equally important but less visible one is whether police robots will worsen ties to communities where relationships with the police are already strained.
We can see these questions of technology and policing already being put to the test with predictive policing software. Defined broadly, predictive policing applies artificial intelligence to data in order to detect patterns of crime. Using the vast amount of digitized information today, predictive policing programs try to predict criminal risks in individuals or in geographic locations. In the case of locational prediction, predictive policing programs—using historical crime data and other factors—identify geographic locations where crime is more likely to occur in the future. Police departments can use that information to redistribute patrol resources. Cities including Los Angeles, New York, and Seattle have purchased predictive software programs of this type. In the future, predictive policing programs may further guide the allocation of police resources and hiring.
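At its core, the locational approach described above reduces to a ranking over geography. The following toy example (with invented data and cell names, not any vendor’s actual product) aggregates historical incident records by grid cell and flags the highest-scoring cells for additional patrol:

```python
# A deliberately simplified illustration of locational prediction:
# count historical incidents per grid cell, then flag the top cells.
# Real systems weight offense types, decay older data, and use many
# more inputs; the records below are entirely hypothetical.
from collections import Counter

# hypothetical historical records: (grid_cell, offense_type)
incidents = [
    ("cell_3", "robbery"), ("cell_3", "burglary"), ("cell_3", "robbery"),
    ("cell_7", "burglary"), ("cell_7", "robbery"),
    ("cell_1", "theft"),
]

scores = Counter(cell for cell, _ in incidents)      # incidents per cell
hotspots = [cell for cell, _ in scores.most_common(2)]
print(hotspots)  # ['cell_3', 'cell_7']
```

Patrol resources would then be redistributed toward the flagged cells, which is why the choice of input data matters so much.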
These predictive policing programs may appear neutral but in fact hide biases. Consider the data used by predictive policing algorithms. If arrests are a significant factor, then we cannot discount the fact that many arrests are the product of highly discretionary decisions by police officers. If high arrest rates influence the algorithm to direct the geographic focus of police, then we should not be surprised if more arrests occur in those places as a result. Similarly, if reported crimes constitute another factor taken into account by a predictive policing algorithm, then crimes not usually reported or not always reported will be omitted from the decision about where to direct police attention. Robbery will make the cut, but domestic violence and financial crime may not. These hidden sources of bias have prompted concerns that predictive policing programs will justify a heavy police presence in poor minority communities that have historically been targets of over-policing.
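The feedback loop just described can be made concrete with a toy simulation. In this hypothetical sketch (invented numbers, not a model of any deployed system), two neighborhoods generate identical levels of offending, but one begins with more recorded arrests, and each round of patrol is allocated in proportion to the arrest history:

```python
# Hypothetical feedback-loop sketch: patrols follow arrest counts,
# and new arrests scale with patrol presence, not underlying crime.

def simulate(rounds=10, detection_rate=0.5):
    true_offenses = {"A": 100, "B": 100}   # identical real offending
    arrests = {"A": 60.0, "B": 40.0}       # skewed historical record
    for _ in range(rounds):
        total = sum(arrests.values())
        shares = {hood: n / total for hood, n in arrests.items()}
        for hood in arrests:
            # arrests grow with patrol share, not with true offenses alone
            arrests[hood] += true_offenses[hood] * detection_rate * shares[hood]
    return arrests

result = simulate()
# The initial 20-arrest gap widens every round even though the two
# neighborhoods' true crime never differed.
print(result["A"] - result["B"] > 20)  # True
```

The point of the sketch is that the algorithm never needs to be malicious: feeding it a skewed arrest record is enough to entrench the skew.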
Predictive policing programs only provide guidance, not action. What would happen, though, if patrol robots acted on this predictive analysis? If predictive policing programs fail to address issues of human bias, some communities would be flooded with police robots, while others would not be. Imagine further that police robots would be afforded less enforcement discretion than human officers. We worry about human police discretion because it is difficult if not impossible to know whether impermissible factors influence enforcement. But would we live in a better world if, directed by their own artificial intelligence, police patrol robots enforced minor offenses far more frequently than human officers would in neighborhoods already accustomed to aggressive policing?
The future may be one in which robots of all kinds are so ubiquitous that even the heavy presence of police robots in some places would be unnoticeable. That seems unlikely. Instead, it seems plausible that the introduction of an all-seeing, interconnected, and tireless army of police robots into a neighborhood could feel like an occupation or virtual imprisonment.
The regulation of police robots will prove to be especially difficult because robots will likely be adopted and regulated much like other recent police technologies: in an ad hoc and decentralized manner. Most of this can be attributed to the structure of policing in the United States, which is largely a matter for state and local agencies.
Consider the immense interest in body camera adoption after a string of highly publicized and controversial deaths of African Americans in police custody, beginning with the 2014 fatal shooting of Michael Brown in Ferguson. Before 2014, body cameras had been adopted by a handful of police departments. Today, it seems all but inevitable that body cameras will become a standard patrol tool. The federal government has encouraged that adoption. In its discussion of new police technologies, the President’s Task Force on 21st Century Policing acknowledged in its 2015 report that body cameras could “build community trust and legitimacy” with appropriate regulations. That same year, the Department of Justice made more than $20 million available to local police departments to purchase body cameras.
Eager to present them as tools of accountability, police departments around the country have embraced the adoption of body cameras. Yet many police departments have adopted body cameras without detailed policies on their use, public access, and data storage. Those body camera policies that do exist can vary considerably: Seattle posted all of its pilot project body camera footage on YouTube, while other departments have declined to release footage unless required by court order.
The story of police body camera adoption thus far has been: use first, regulate later. Without planning, the use of police robots will develop in the same way, with more serious consequences. Should the arming of police robots, for example, be left to local departments? Should the type of artificial intelligence used by police robots depend on the access of private vendors to police departments? Machines that may be equipped with artificial intelligence and capable of coercive force are poor candidates for highly variable local control.
Instead, uniform national policies should dictate the regulation of robotic policing. Even if robots of the sort described here have yet to arrive, we can anticipate the sorts of questions that robotics will bring to policing. The degree of human involvement in robotic decisionmaking, whether and how to arm police robots, and how to evaluate the legal responsibility of a police robot: These are all normative judgments about law and policy. In the absence of uniform policies, we are likely to address these questions in a piecemeal fashion: a mix of unenforceable internal policies, hesitant state legislatures, possibly conflicting federal agency decisions, and court cases in which judges cannot agree as to the appropriate characterization of robots.
We could begin with a national body to develop robotics expertise that could advise federal, state, and local lawmakers. A “federal robotics commission,” for instance, could identify important legal and policy questions raised by robotics in a variety of areas—including policing—with specific substantive recommendations.
More concretely, the federal government can wield its considerable resources to influence how local police departments use robots. While the federal government cannot force state and local police to adopt particular policies, the Department of Justice can influence, and has influenced, the adoption of new strategies and technologies through the use of federal funding. For example, the widespread interest in and adoption of body-worn cameras by local police departments in 2015 was prompted in part by the availability of federal funding for body camera purchases. Likewise, the Department of Justice offers funding to local police departments to purchase predictive policing systems.
The federal government could condition the receipt of federal funds upon the adoption of regulations by grantees. Police departments could receive funding for robots so long as they, for instance, did not enable the robots to use deadly force without specific guidelines already in place. No police department would be forced to accept a robot under these conditions, but every department that sought federal funding would be obliged to follow these conditions. A top-down form of strong encouragement by the federal government could be effective in setting uniform policies for police robots.
The future introduction of artificially intelligent robots capable of conducting human-like tasks is likely to change policing in the same way it will change other fields. The assumption by robots of even some of the traditional tasks of policing, however, will pose special problems. We will have to address how much coercive force police robots should possess, and to what degree they should be permitted to operate independently. In some cases, the use of police robots may increase the safety of policing, for both officers and the public. In other cases, however, the use of police robots to detain or subdue a suspect may raise challenges to the conventional ways in which we have regulated the police. While we cannot anticipate every issue that this technology raises, we can address many of them now, well before these hypotheticals find their way to our streets.
. Andrea Peterson, In an Apparent First, Dallas Police Used a Robot to Deliver Bomb That Killed Shooting Suspect, Wash. Post (July 8, 2016), https://www.washingtonpost.com/news/the-switch/wp/2016/07/08/dallas-police-used-a-robot-to-deliver-bomb-that-killed-shooting-suspect [https://perma.cc/EE7K-ADUH] (quoting Brown as stating police “saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the subject was”).
. Joel Achenbach et al., Five Dallas Police Officers Were Killed by a Lone Attacker, Authorities Say, Wash. Post (July 8, 2016), https://www.washingtonpost.com/news/morning-mix/wp/2016/07/08/like-a-little-war-snipers-shoot-11-police-officers-during-dallas-protest-march-killing-five [https://perma.cc/3UVS-FAJF].
. W.J. Hennigan & Brian Bennett, Dallas Police Used a Robot to Kill a Gunman, a New Tactic That Raises Ethical Questions, L.A. Times (July 8, 2016, 2:22 PM), http://www.latimes.com/nation/la-na-dallas-robot-20160708-snap-story.html [https://perma.cc/93GC-2YS5] (quoting Ryan Calo as stating robot use is “a creative solution to a very challenging problem”).
. Dpdpio, Investigative Update Regarding the Deadly Attack on Police Officers, DPD Beat, https://dpdbeat.com/2016/07/08/investigative-update-regarding-the-deadly-attack-on-police-officers [https://perma.cc/MG7W-M77K] (last updated July 10, 2016).
. Jon Swaine, Dallas Police Reveal Details of Bomb-Carrying Robots It Used as a ‘Last Resort’, Guardian (July 10, 2016, 3:12 PM), https://www.theguardian.com/us-news/2016/jul/10/dallas-police-reveal-details-of-bomb-carrying-robot-it-used-as-last-resort [https://perma.cc/PAQ8-SQPB].
. Jack Nicas, Dallas Police Believed to Be First to Use Robot Lethally, Wall Street J. (July 8, 2016, 6:34 PM), http://www.wsj.com/articles/dallas-police-believed-to-be-first-to-use-robot-lethally-1468001810 [https://perma.cc/3MJ9-BPHB]; Sam Thielman, Use of Police Robots to Kill Dallas Shooting Suspect Believed to Be First in U.S. History, Guardian (July 8, 2016, 12:31 PM), https://www.theguardian.com/technology/2016/jul/08/police-bomb-robot-explosive-killed-suspect-dallas [https://perma.cc/CU68-JSNC].
. See Peterson, supra note 1 (“But bomb disposal robots typically work like advanced remote-controlled vehicles, featuring camera feeds that are transmitted back to operators so that they can direct the units in potentially dangerous situations from afar.”).
. Patrick Tucker, Military Robotics Makers See a Future for Armed Police Robots, Def. One (July 11, 2016), http://www.defenseone.com/technology/2016/07/military-robotics-makers-see-future-armed-police-robots/129769 [https://perma.cc/ZCF2-GH9M].
. Dan Gettinger & Arthur Holland Michel, Ctr. for the Study of the Drone, Law Enforcement Robots Datasheet (2016), http://dronecenter.bard.edu/files/2016/07/LEO-Robots-CSD-7-16-1.pdf [https://perma.cc/3BPW-NNUF] (using public records to document the hundreds of robots acquired by law enforcement agencies).
. Like “robots,” the definition of “artificial intelligence” has been subject to some debate, but generally the term refers to “a set of technologies that try to imitate or augment human intelligence.” See Om Malik, The Hype—and Hope—of Artificial Intelligence, New Yorker (Aug. 26, 2016), http://www.newyorker.com/business/currency/the-hype-and-hope-of-artificial-intelligence [https://perma.cc/Y69E-Y4QE].
. For instance, the Rand Corporation reported that in a workshop attended by law enforcement officials and academics, participants “envisioned the emergence of automated or robotic policing.” Richard Silberglitt et al., Visions of Law Enforcement Technology in the Period 2024–2034, at 24 (2015).
. In 2003, the Spartan Scout, a robotic boat mounted with a loudspeaker and microphone, inspected civilian boats in the Persian Gulf without anyone onboard. James Dunnigan, Robotic Ship Talks to Startled Sailors, StrategyPage (June 14, 2005), https://www.strategypage.com/dls/articles2005/200561415554.asp [https://perma.cc/27GE-U8GR].
. On the possibility of DNA “Terry” stops, see Elizabeth E. Joh, Maryland v. King: Policing and Genetic Privacy, 11 Ohio St. J. Crim. L. 281, 291–93 (2013).
. Such bug drones are being developed for use in the military. See, e.g., John Horgan, Unmanned Flight, Nat’l Geographic (Mar. 2013), http://ngm.nationalgeographic.com/2013/03/unmanned-flight/horgan-text [https://perma.cc/2LGL-4WEB].
. Albert J. Reiss, Jr., Police Organization in the Twentieth Century, in 15 Crime & Justice 51, 51–52 (1992).
. Monisha Rajesh, Inside Japan’s First Robot-Staffed Hotel, Guardian (Aug. 14, 2015, 2:00 PM), http://www.theguardian.com/travel/2015/aug/14/japan-henn-na-hotel-staffed-by-robots [https://perma.cc/6RU6-KUCP].
. Hugo Martin, Robots Deliver Fun With Hotel Room Service Orders, and They Don’t Expect a Tip, L.A. Times (Feb. 7, 2016, 3:00 PM), http://www.latimes.com/business/la-fi-hotel-robots-20160207-story.html [https://perma.cc/4REP-QPKD].
. Jonathan Holmes, AI Is Already Making Inroads Into Journalism but Could It Win a Pulitzer?, Guardian (Apr. 3, 2016, 1:13 PM), http://www.theguardian.com/media/2016/apr/03/artificla-intelligence-robot-reporter-pulitzer-prize?CMP=edit_2221 [https://perma.cc/W6EJ-VTW8].
. See P.W. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century 77 (2009) (quoting Sebastian Thrun, director of the Artificial Intelligence Laboratory at Stanford University, in defining artificial intelligence as “the ability of a machine to ‘perceive something complex and make appropriate decisions’”).
. Michael Schaub, Is the Future Award-Winning Novelist a Writing Robot?, L.A. Times (Mar. 22, 2016, 10:30 AM), http://www.latimes.com/books/jacketcopy/la-et-jc-novel-computer-writing-japan-20160322-story.html [https://perma.cc/VFS5-ZRAX].
. See, e.g., Peter Finn, A Future For Drones: Automated Killing, Wash. Post (Sept. 19, 2011), https://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html [https://perma.cc/BMB8-PAVY] (noting that when military drones “are directly linked to human operators, these machines are producing so much data that processors are sifting the material to suggest targets, or at least objects of interest”); The Changing Shapes of Air Power, N.Y. Times (June 19, 2011), http://www.nytimes.com/interactive/2011/06/19/world/drone-graphic.html [https://perma.cc/3Y84-D22F] (describing Reaper as “hunter-killer” aircraft armed with Hellfire air-to-surface missiles).
. Aarian Marshall, The Robot Garbage Collectors Are Coming, Atlantic: Citylab (Mar. 1, 2016), http://www.citylab.com/tech/2016/03/the-robot-garbage-collectors-are-coming/471429 [https://perma.cc/F222-KAV5] (describing self-driving, garbage-collecting prototype developed by Volvo); Cat Zakrzeweski, Autonomous Delivery Vehicle Company Dispatch Drives $2M in Seed Funding, Wall Street J. (Apr. 6, 2016, 9:23 AM), http://blogs.wsj.com/venturecapital/2016/04/06/autonomous-delivery-vehicle-company-dispatch-drives-2m-in-seed-funding [https://perma.cc/P2LU-FKEP] (announcing funding for on-demand autonomous delivery robots).
. Sam Byford, This Cuddly Japanese Robot Bear Could Be the Future of Elderly Care, Verge (Apr. 28, 2015, 10:24 AM), http://www.theverge.com/2015/4/28/8507049/robear-robot-bear-japan-elderly [https://perma.cc/UM89-MM46]; Will Knight, Your Retirement May Include a Robot Helper, MIT Tech. Rev. (Oct. 27, 2014), https://www.technologyreview.com/s/531941/your-retirement-may-include-a-robot-helper [https://perma.cc/4R94-497X].
. Rebecca Linke, Meet Pepper, the Dancing Robot, Computerworld: Emerging Tech. (Mar. 18, 2016, 7:37 AM), http://www.computerworld.com/article/3045642/emerging-technology/meet-pepper-the-dancing-robot.html [https://perma.cc/A74N-297W].
. See Joseph George, ‘Robot Cop’ May Be on Patrol by 2020, Emirates24/7 (June 1, 2016), http://www.emirates247.com/news/robot-cop-may-be-on-patrol-by-2020-2016-06-01-1.631688 [https://perma.cc/8WQG-2499]; Real Robocops of Dubai: UAE to Introduce Police Robots ‘Within Two Years,’ RT (Apr. 27, 2015, 8:36 PM), https://www.rt.com/news/253529-police-robot-dubai-robocop [https://perma.cc/YV3A-CHDW].
. Liang Jun, China’s First Intelligent Security Robot Debuts in Chongqing, People’s Daily Online (Apr. 26, 2016, 7:27 AM), http://en.people.cn/n3/2016/0426/c90000-9049431.html [https://perma.cc/WNY3-HMS6]; see also Stephen Chen, Meet China’s RoboCop: The Robot Police Officer Who Doesn’t Tire—or Second-Guess Commands, S. China Morning Post (May 5, 2016, 11:20 AM), http://www.scmp.com/news/china/policies-politics/article/1941394/meet-chinas-robocop-robot-police-officer-who-doesnt [https://perma.cc/HY26-SM6X].
. Nicky Woolf, RoboCop Is Real—and Could Be Patrolling a Mall Near You, Guardian (May 20, 2016, 6:30 PM), https://www.theguardian.com/us-news/2016/may/20/robocop-robot-mall-security-guard-palo-alto-california [https://perma.cc/R8VL-HLG3] (“They are completely autonomous, navigating like self-driving cars.”).
. The emerging scholarship on the regulation of robots has already suggested that robotics law could learn from the mistakes and successes of early cyberlaw scholarship. See, e.g., Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 514–516 (2015) (observing similarities between two bodies of scholarship).
. I assume that capability would be no different than human officers who are entitled to use coercive force, even lethal force, when the circumstances warrant such use.
. See Tucker, supra note 8 (quoting Endeavor Robotics CEO Sean Bielat as saying: “We aren’t the ones who are going to think of these end use cases. It’s going to be the end users as they get closer to the technology, as it gets more capable and less expensive. It’s going to be the end user who says, ‘wow, this additional capability would really make a difference and would really make my job safer if it had some level of armament on it.’”).
. See, e.g., Singer, supra note 20, at 423 (observing that “much of the funding for robotics research comes from the military”).
. See, e.g., Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 269 (2012) (noting local police department adoption of predictive policing programs purchased with federal funding from the Department of Justice); Justice Department Awards Over $23 Million in Funding for Body Worn Camera Pilot Program to Support Law Enforcement Agencies in 32 States, U.S. Dep’t Just. (Sept. 21, 2015), https://www.justice.gov/opa/pr/justice-department-awards-over-23-million-funding-body-worn-camera-pilot-program-support-law [https://perma.cc/83WS-Q69N].
. See Zoltan Istvan, The Second Amendment Isn’t Prepared for a 3D-Printed Drone Army, Motherboard (Mar. 25, 2016, 8:00 AM), http://motherboard.vice.com/read/the-second-amendment-isnt-prepared-for-a-3d-printed-drone-army?utm_source=mbtwitter [https://perma.cc/9RAJ-SUL2].
. See, e.g., Robert Shrimsley, Are We Ready to Live With Robots?, Fin. Times (May 6, 2016), http://www.ft.com/cms/s/2/51d964c6-11dd-11e6-91da-096d89bd2173.html#axzz4FAV3fTBU [https://perma.cc/7LRP-GMVW] (referring to Lang’s “machine-human”).
. Louise Chan, Not Quite ‘The Jetsons’ Rosie but Researchers Are Working on Building Robot Maids of the Future, Tech Times (Jan. 16, 2016, 10:11 AM), http://www.techtimes.com/articles/124943/20160116/not-quite-the-jetsons-rosie-but-researchers-are-working-oxxpn-building-robot-maids-of-the-future.htm [https://perma.cc/HB2A-SUTV].
. See, e.g., Michael Froomkin, Introduction to Robot Law x, xi (Ryan Calo et al. eds., 2016) (“There is not yet a consensus regarding what should count as a robot.”).
. See Singer, supra note 20, at 67 (offering this definition); Calo, supra note 29, at 529–32 (same); see also Neil M. Richards & William D. Smart, How Should the Law Think About Robots?, in Robot Law, supra note 37, at 3, 6 (“A robot is a constructed system that displays both physical and mental agency, but is not alive in the biological sense.”). But see Adrienne Lafrance, What Is a Robot?, Atlantic (Mar. 22, 2016), http://www.theatlantic.com/technology/archive/2016/03/what-is-a-human/473166 [https://perma.cc/237A-6L3L] (observing that there is no clear consensus on defining a robot).
. See, e.g., Cyrus Farivar, Dark Web Drug-Buying Bot Returned to Swiss Artists After Police Seizure, ArsTechnica (Apr. 15, 2015, 12:50 PM), http://arstechnica.com/tech-policy/2015/04/dark-web-drug-buying-bot-returned-to-swiss-artists-after-police-seizure [https://perma.cc/TKH6-SYMF].
. See, e.g., Knight, supra note 24.
. Singer, supra note 20, at 92.
. Calo, supra note 29, at 532.
. See, e.g., Chris Bryant & Richard Waters, Worker at Volkswagen Plant Killed in Robot Accident, Fin. Times (July 1, 2015), http://www.ft.com/cms/s/0/0c8034a6-200f-11e5-aa5a-398b2169cf79.html#axzz4LUrufb9p (describing technician killed by industrial robot in Germany).
. Ryan Calo describes this kind of action as “emergence,” meaning “unpredictably useful behavior.” Calo, supra note 29, at 532.
. See Singer, supra note 20, at 74.
. Abby Ohlheiser, Trolls Turned Tay, Microsoft’s Fun Millennial AI Bot, Into a Genocidal Maniac, Wash. Post (Mar. 25, 2016), https://www.washingtonpost.com/news/the-intersect/wp/2016/03/24/the-internet-turned-tay-microsofts-fun-millennial-ai-bot-into-a-genocidal-maniac/ [https://perma.cc/5J3G-2JEK].
. A related but distinct issue in big data analytics is that artificial intelligence is becoming so sophisticated that while we may understand the results achieved by algorithms, we may not understand the “whyness” of the process leading to the result. See, e.g., Ahmed Ghappour, Machine Generated Culpability (unpublished manuscript) (summarized at HTNM Lecture—Ahmed Ghappour—“Machine Generated Culpability,” Berkeley Ctr. for New Media (Nov. 9, 2016), http://bcnm.berkeley.edu/events/event/htnm-lecture-ahmed-ghappour-machine-generated-culpability) (explaining how “the use of machine-generated culpability adds the issue of ‘whyness’”).
. See, e.g., Elizabeth E. Joh, Policing by Numbers: Big Data and the Fourth Amendment, 89 Wash. L. Rev. 35, 42–48 (2014).
. Ryan Calo describes this as the “social valence” of robots. Calo, supra note 29, at 545–46.
. See, e.g., Tim Radford, Touching Robots Can Arouse Humans, Study Finds, Guardian (Apr. 5, 2016, 5:00 EDT), https://www.theguardian.com/technology/2016/apr/05/touching-robots-can-arouse-humans-study-finds [https://perma.cc/Y6YV-2VYL] (experiment demonstrating that “a touch where the robot’s buttocks or genitals would be produced a measurable response of arousal in the volunteer human”).
. Kate Darling suggests that we may want to prevent the abuse of “social” robots because doing so would protect social values. Kate Darling, Extending Legal Protection to Social Robots, in Robot Law, supra note 37, at 213, 223–24.
. See, e.g., James Temperton, Campaign Calls for Ban on Sex Robots, Wired (Sept. 15, 2015), http://www.wired.co.uk/news/archive/2015-09/15/campaign-against-sex-robots [https://perma.cc/3Q7K-DGVM].
. See Calo, supra note 29, at 549.
. See Singer, supra note 20, at 78 (observing that the “military sets the agenda in AI”).
. In 1999, the Army introduced an ambitious multiyear and multibillion dollar program called Future Combat Systems (FCS), which was intended to revolutionize warfare by developing new manned and unmanned systems linked by modernized communications networks. See Andrew Feickert, Cong. Research Serv., The Army's Future Combat System (FCS): Background and Issues for Congress 1–2 (2009), https://www.fas.org/sgp/crs/weapons/RL32888.pdf. The FCS was effectively shut down in 2009 by Secretary of Defense Gates, yet robotics still remain a part of the Army’s planning, albeit not in its original form. Id. at 3.
. Cf. Singer, supra note 20, at 78 (observing that “the U.S. military funds as much as 80 percent of all AI research in the United States”).
. Id. at 32–38.
. See, e.g., Ian Kerr & Katie Szilagyi, Asleep at the Switch? How Killer Robots Become a Force Multiplier of Military Necessity, in Robot Law, supra note 37, at 333, 362 n.139 (noting that the term refers “to a factor that significantly enhances the effectiveness or strategic advantage of a particular force”).
. Singer, supra note 20, at 133.
. Matthew Power, Confessions of a Drone Warrior, GQ (Oct. 22, 2013, 8:00 PM), http://www.gq.com/story/drone-uav-pilot-assassination [https://perma.cc/N4KW-LG4P].
. See, e.g., Kelsey D. Atherton, Robot Submarine Launches Drone at Command of Autonomous Navy Ship, Popular Sci. (Sept. 28, 2016), http://www.popsci.com/underwater-robot-launches-drone-at-command-other-robot-ship [https://perma.cc/5KJE-C9NE] (describing robot-to-robot command as part of Navy technology exercise).
. Singer, supra note 20, at 112.
. See, e.g., Richard A. Friedman, Why Are We Drugging Our Soldiers?, N.Y. Times (Apr. 21, 2012), http://nyti.ms/1DyTRLA [https://perma.cc/DA7R-8JD7] (discussing “significant increase in the use of stimulant medication” for soldiers).
. Singer, supra note 20, at 298 (discussing psychological effects of robot soldiers on opposing forces).
. Id. at 308–09 (discussing possible counterintuitive psychological effects of robot soldiers on opposing forces).
. Id. at 230–34 (describing how robot swarms work).
. Albert J. Reiss, Jr., Police Organization in the Twentieth Century, 15 Crime & Justice 51, 80 (1992) (noting late nineteenth century adoption of “the basic hierarchical rank organization of the military to insure internal discipline and control”).
. See, e.g., Robert Salonga, San Jose Police Accepting Military Service in Place of College in Attempt to Boost Anemic Recruiting, Mercury News (Mar. 21, 2016, 11:17 AM), http://www.mercurynews.com/crime-courts/ci_29666424/san-jose-police-accepting-military-service-place-college [https://perma.cc/7TYH-ZTBR].
. See, e.g., Ed Vulliamy, Nixon’s “War on Drugs” Began 40 Years Ago, and the Battle Is Still Raging, Guardian (July 23, 2011), https://www.theguardian.com/society/2011/jul/24/war-on-drugs-40-years [https://perma.cc/X6RR-7FF3] (noting that President Nixon in a speech before Congress on July 17, 1971 ushered in “war on drugs” era).
. 18 U.S.C. § 1385 (2012) (prohibiting the unauthorized use of the Army or Air Force as a posse comitatus).
. National Defense Authorization Act for Fiscal Year 1997 § 1033, 10 U.S.C. § 2576(a) (2012) (“The Secretary of Defense, under regulations prescribed by him, may sell to State and local law enforcement, firefighting, homeland security, and emergency management agencies, at fair market value, pistols, revolvers, shotguns, rifles of a caliber not exceeding .30, ammunition for such firearms, gas masks, personal protective equipment, and other appropriate equipment which . . . have been determined to be surplus property . . . .”).
. See, e.g., Benjamin Preston, Police Are Getting the Military’s Leftover Armored Trucks, N.Y. Times: Wheels (Oct. 11, 2013, 6:00 AM), http://wheels.blogs.nytimes.com/2013/10/11/police-are-getting-the-militarys-leftover-armored-trucks/?_r=0 [https://perma.cc/VP3B-JXWZ]. The Law Enforcement Support Office, responsible for coordinating the 1033 program, states on its website that more than 8000 law enforcement agencies have enrolled. See Law Enforcement Support Office, Def. Logistics Agency, http://www.dla.mil/DispositionServices/Offers/Reutilization/LawEnforcement.aspx [https://perma.cc/UXX9-TJY6].
. A Congressional Research Service report noted that the St. Louis county police received twelve M16 rifles under the 1033 program as of July 2013. See Valerie Bailey Grasso, Cong. Research Serv., Defense Surplus Equipment Disposal, Including the Law Enforcement 1033 Program 6 (2014).
. See, e.g., Taylor Wofford, How America’s Police Became an Army: The 1033 Program, Newsweek (Aug. 13, 2014, 10:47 PM), http://www.newsweek.com/how-americas-police-became-army-1033-program-264537 [https://perma.cc/GN4J-QMVP]. In 2015, the Obama administration announced that it would ban transfers of certain types of equipment under the 1033 program because of concerns about the increasing militarization of local police forces. See David Nakamura & Wesley Lowery, Obama Administration Bans Some Military-Style Assault Gear From Local Police Departments, Wash. Post (May 18, 2015), https://www.washingtonpost.com/news/post-politics/wp/2015/05/18/obama-to-visit-camden-n-j-to-tout-community-policing-reforms [https://perma.cc/3EQ7-BQKM]; see also Law Enforcement Equipment Working Group, Recommendations Pursuant to Executive Order 13688: Federal Support for Local Law Enforcement Equipment Acquisition (2015), https://www.whitehouse.gov/sites/default/files/docs/le_equipment_wg_final_report_final.pdf [https://perma.cc/T9YQ-3KPQ]. In July 2016, however, the White House agreed to review each banned item from the program. See Julia Edwards, Exclusive: White House to Review Ban on Military Gear for Police-Police Leaders, Reuters (July 21, 2016), http://www.reuters.com/article/us-usa-police-gear-exclusive-idUSKCN1012KW [https://perma.cc/L43M-BNT3].
. See, e.g., Radley Balko, Cato Institute, Overkill: the Rise of Paramilitary Police Raids in America 6 (2006), http://object.cato.org/sites/cato.org/files/pubs/pdf/balko_whitepaper_2006.pdf [https://perma.cc/J463-CB3H] (attributing creation of SWAT teams to LAPD Chief Daryl Gates in 1966).
. Such robots may not be as necessary in a future of self-driving cars. Law enforcement agencies may in those situations have direct access to a vehicle’s computer in order to forcibly stop it. See generally John S. Hollywood et al., Using Future Internet Technologies to Strengthen Criminal Justice (2015), http://www.rand.org/content/dam/rand/pubs/research_reports/RR900/RR928/RAND_RR928.pdf [https://perma.cc/F9UX-2QZS] (describing one future consideration by panel of criminal justice experts for self-driving cars—though considered a low priority—as “an interface for officers to directly take control of unmanned vehicles”).
. Paul Marks, Packs of Robots Will Hunt Down Uncooperative Humans, New Scientist (Oct. 22, 2008, 6:00 PM), https://www.newscientist.com/blogs/shortsharpscience/2008/10/packs-of-robots-will-hunt-down.html [https://perma.cc/W6Y3-FMBG].
. See, e.g., E.B. Boyd, Is Police Use of Force About to Get Worse—With Robots?, Politico Mag. (Sept. 22, 2016), http://www.politico.com/magazine/story/2016/09/police-robots-ethics-debate-214273 [https://perma.cc/H3NY-W5D4] (discussing current “man in the loop” limits).
. A distinct but equally important question is the degree to which the police will be able to control citizens’ robots without consent. Thanks to Ryan Calo for this observation.
. As computer science professor David Gelernter points out, superhuman artificial intelligence is likely to happen not because today’s AI is so advanced—it is not—but because such a technological leap requires a “breakthrough,” which, as Gelernter argues, is not hard to imagine. See David Gelernter, Machines That Will Think and Feel, Wall Street J. (Mar. 18, 2016, 10:36 AM), http://www.wsj.com/articles/when-machines-think-and-feel-1458311760.
. Kris Osborn, Pentagon Project Seeks to Build Autonomous Robots, Def. Tech (June 13, 2013), http://www.defensetech.org/2013/06/13/pentagon-launches-pilot-to-build-autonomous-robots [https://perma.cc/Z2PJ-LGCR]. Peter Singer notes that the U.S. military funds as much as 80 percent of all AI research in the United States. See Singer, supra note 20, at 78.
. See Gelernter, supra note 81.
. See Singer, supra note 20, at 30 (“Unlike the PackBot, the [armed Talon] SWORDS . . . is remote-controlled from afar by either radio or a spooled-out fiber optic wire.”).
. Id. at 36.
. Id. at 128 (“So, despite what one article called ‘all the lip service paid to keeping a human in the loop,’ autonomous armed robots are coming to war.”).
. Id. at 75 (“If robots don’t get higher on the autonomy scale, they don’t yield any cost or manpower savings.”).
. Id. at 127 (“The very best human fighter pilot needs at least .3 seconds to respond to any simple stimulus and twice as long to make a choice between several possible consequences. A robotic pilot needs less than a millionth of a second.”).
. See Osborn, supra note 82.
. Ghappour, supra note 47.
. Kavita Iyer, Engineers Unable to Understand the Workings of Google’s Search AI, TechWorm (Mar. 10, 2016), http://www.techworm.net/2016/03/engineers-unable-understand-working-googles-search-ai.html [https://perma.cc/GUC4-33SE].
. See, e.g., Andy Greenberg, The Jeep Hackers Are Back to Prove Car Hacking Can Get Much Worse, Wired (Aug. 1, 2016, 3:30 PM), https://www.wired.com/2016/08/jeep-hackers-return-high-speed-steering-acceleration-hacks [https://perma.cc/7Q29-BM56]; Alexandra Ossola, Hacked Medical Devices May Be the Biggest Cyber Security Threat in 2016, Popular Sci. (Nov. 23, 2015), http://www.popsci.com/hackers-could-soon-hold-your-life-ransom-by-hijacking-your-medical-devices [https://perma.cc/SR8Q-HYZM].
. Dan Lamothe, The Killer Robot Threat: Pentagon Examining How Enemy Nations Could Empower Machines, Wash. Post (Mar. 30, 2016), https://www.washingtonpost.com/news/checkpoint/wp/2016/03/30/the-killer-robot-threat-pentagon-examining-how-enemy-nations-could-empower-machines [https://perma.cc/NXX5-K25B].
. See Singer, supra note 20, at 124.
. See id. (reporting the Pentagon’s concern on this issue).
. Mark Jewell, Taser, iRobot Team up to Arm Robots, Wash. Post (June 28, 2007, 11:04 PM), http://www.washingtonpost.com/wp-dyn/content/article/2007/06/28/AR2007062801338.html [https://perma.cc/7KWK-UVZG].
. See, e.g., Roberto Baldwin, What Is the LRAD Sound Cannon?, Gizmodo (Aug. 14, 2014, 11:40 AM), http://gizmodo.com/what-is-the-lrad-sound-cannon-5860592 [https://perma.cc/W6VF-F74L] (describing LRAD as a device intended to “broadcast messages and pain-inducing ‘deterrent’ tones over long distances”).
. iRobot and Taser Team to Deliver New Robot Capabilities for Military, Law Enforcement, iRobot (June 28, 2007), http://investor.irobot.com/phoenix.zhtml?c=193096&p=irol-newsArticle&ID=1334071 [https://perma.cc/MSJ5-SGXT].
. USA: Excessive and Lethal Force? Amnesty International’s Concerns About Deaths and Ill-Treatment Involving Police Use of Tasers, Amnesty Int’l USA (Nov. 29, 2004), http://www.amnestyusa.org/node/55449 [https://perma.cc/BW9C-XWAW].
. See, e.g., Laura Crimaldi & Jan Ransom, Civil Rights Groups Express Concern Over State Police Turning to Tasers, Bos. Globe (Mar. 31, 2016), https://www.bostonglobe.com/metro/2016/03/31/state-police-troopers-outfitted-with-tasers/uY2RvuJfMYiZOWxDgjAGCN/story.html [https://perma.cc/92GQ-QJLW] (discussing adoption of Tasers by Massachusetts State Police as “part of a larger effort to equip troopers to ratchet down potentially violent confrontations”); Vivian Ho, SF’s Next Police Chief Faces Mountain of Challenges, S.F. Chronicle (May 23, 2016, 7:00 AM), http://www.sfchronicle.com/crime/article/SF-s-next-police-chief-faces-mountain-of-7938997.php [https://perma.cc/PJ2H-HELJ] (discussing SFPD consideration of adopting Tasers as part of departmental changes).
. Cheryl W. Thompson & Mark Berman, Improper Techniques, Increased Risks, Wash. Post (Nov. 26, 2015), http://www.washingtonpost.com/sf/investigative/2015/11/26/improper-techniques-increased-risks [https://perma.cc/G6KN-9859].
. Noel Sharkey et al., The Coming Robot Crime Wave, 43 Computer 114, 116 (2010) (describing such a robot).
. Cf. James B. Jacobs, Exceptions to a General Prohibition on Handgun Possession: Do They Swallow Up the Rule?, 49 L. & Contemp. Probs. 5, 20 (1986) (“Like the international arms race, there seems to be an inexorable drive to accumulate more powerful weaponry.”).
. Rick Noack, 5 Countries Where Most Police Officers Do Not Carry Firearms—and It Works Well, Wash. Post (July 8, 2015), https://www.washingtonpost.com/news/worldviews/wp/2015/02/18/5-countries-where-police-officers-do-not-carry-firearms-and-it-works-well [https://perma.cc/VY65-YCV8] (noting that police on patrol are unarmed in Britain, Ireland, Norway, Iceland, and New Zealand).
. I assume here that a robot shooting would constitute a Fourth Amendment “seizure,” although robotic Fourth Amendment seizures may not always be obvious. If a robot that is entirely remotely controlled detains a person, that action may constitute “a governmental termination of freedom of movement through means intentionally applied.” See Brower v. County of Inyo, 489 U.S. 593, 596–97 (1989). On the other hand, is an autonomous robot’s “accidental” infliction of injury or death a Fourth Amendment seizure? See generally Jason Green, Report: Security Robot at Stanford Shopping Center Runs Over Toddler, Mercury News (July 12, 2016, 8:25 AM), http://www.mercurynews.com/ci_30118119/report-security-robot-at-stanford-shopping-mall-injures [https://perma.cc/L888-P88R] (discussing incident in which autonomous 300-pound Knightscope private security robot reportedly ran over toddler); Woolf, supra note 28 (noting Knightscope robots are “completely autonomous, navigating like self-driving cars”).
. The Fourth Amendment guarantees “[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” U.S. Const. amend. IV.
. Tennessee v. Garner, 471 U.S. 1, 11–12 (1985).
. Scott v. Harris, 550 U.S. 372, 383 (2007).
. Graham v. Connor, 490 U.S. 386, 396–97 (1989).
. Id. at 396.
. Id. at 397.
. For an insightful critique of the existing Fourth Amendment use of force doctrine, see generally Brandon Garrett & Seth Stoughton, A Tactical Fourth Amendment, 102 Va. L. Rev. (forthcoming 2017) (manuscript at 4), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2754759.
. Cf. Graham, 490 U.S. at 396–97 (“The calculus of reasonableness must embody allowance for the fact that police officers are often forced to make split-second judgments—in circumstances that are tense, uncertain, and rapidly evolving—about the amount of force that is necessary in a particular situation.”).
. See, e.g., Kathryn R. Urbonya, Dangerous Misperceptions: Protecting Police Officers, Society, and the Fourth Amendment Right to Personal Security, 22 Hastings Const. L.Q. 623, 654 (1995) (noting courts perceive “scrutiny as [an] impermissible second-guessing of split-second judgments made by police officers”); cf. Seth W. Stoughton, Policing Facts, 88 Tulane L. Rev. 847, 852 (2014) (“With regard to uses of force, the [U.S. Supreme] Court believes that officers use violence in an environment that demands ‘split second judgments,’ justifying significant deference to an officer’s decision of whether to use force and what force to use. However, only a very small percentage of use-of-force incidents resemble the Court’s intuitions, suggesting that the standard used to review police violence may not often fit the circumstances of the incident itself.”).
. Nolan Bushnell, the founder of Atari, stated in 1984 that the ultimate role of robots would be “slaves.” Iver Peterson, World’s Most Expensive Pet Seeks a Role in Life, N.Y. Times (Apr. 17, 1984), http://www.nytimes.com/1984/04/17/us/world-s-most-expensive-pet-seeks-a-role-in-life.html [https://perma.cc/T7PQ-HHK3].
. They would be entitled, presumably, to protect the lives of others from harm. See Graham, 490 U.S. at 396 (noting the factor of “whether the suspect poses an immediate threat to the safety of the officers or others”).
. See Richard Fisher, Is It OK to Torture or Murder a Robot?, BBC (Nov. 27, 2013), http://www.bbc.com/future/story/20131127-would-you-murder-a-robot [https://perma.cc/69J4-5TX7].
. Daniela Hernandez, Hitchbot “Murder” Has Researchers Worry About Robot Cruelty, Fusion (Aug. 3, 2015, 5:17 PM), http://fusion.net/story/176834/hitchbot-murder-robot-cruelty [https://perma.cc/4K8P-JRKM].
. See, e.g., Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 265 (2012). As Ferguson notes, “predictive policing” can sometimes be used to describe any crime-fighting approach involving analysis of a large set of data, but here I use the term to refer specifically to the use of predictive analytics.
. See, e.g., Predictive Policing, Nat’l Inst. Just., http://www.nij.gov/topics/law-enforcement/strategies/predictive-policing/Pages/welcome.aspx [https://perma.cc/525K-ZGMX] (last updated June 9, 2014).
. Nate Berg, Predicting Crime, LAPD-Style, Guardian (June 25, 2014, 5:19 AM), https://www.theguardian.com/cities/2014/jun/25/predicting-crime-lapd-los-angeles-police-data-analysis-algorithm-minority-report [https://perma.cc/2FEC-4TCS]; Somini Sengupta, In Hot Pursuit of Numbers to Ward Off Crime, N.Y. Times: Bits (June 19, 2013, 10:48 PM), http://bits.blogs.nytimes.com/2013/06/19/in-hot-pursuit-of-numbers-to-ward-off-crime [https://perma.cc/7HAU-BVF7].
. See, e.g., Peter Moskos, The Better Part of Valor: Court-Overtime Pay as the Main Determinant for Police Discretionary Arrests, 8 Law Enforcement Executive F. 77, 81 (2008) (“Overall, the literature establishes that police exercise considerable discretion in their day-to-day arrest decisions.”).
. See, e.g., Ferguson, supra note 121, at 317 (noting that “not all crime is reported, not all crime is recorded, and thus, not all crime is included in crime databases to be used for predictions”).
. See, e.g., id. at 317.
. See, e.g., id. at 322 (observing that “data-driven law enforcement can have a disproportionate effect on certain communities that perceive it as discriminatory”).
. President’s Task Force on 21st Century Policing, Final Report of the President’s Task Force on 21st Century Policing, Community Oriented Policing Servs. 31–32 (May 2015), http://www.cops.usdoj.gov/pdf/taskforce/taskforce_finalreport.pdf [https://perma.cc/4KQE-4HWL].
. See Justice Department Awards Over $23 Million in Funding for Body Worn Camera Pilot Program to Support Law Enforcement Agencies in 32 States, U.S. Dep’t Just. (Sept. 21, 2015), https://www.justice.gov/opa/pr/justice-department-awards-over-23-million-funding-body-worn-camera-pilot-program-support-law [https://perma.cc/J2FA-FBV5].
. See, e.g., Elizabeth E. Joh, Beyond Surveillance: Data Control and Body Cameras, 14 Surveillance & Soc’y 133 (2016), http://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/cdebate4/bc4.
. Wylie Wong & Phil Goldstein, Seattle Shares Body-Cam Footage on YouTube, StateTech (Jan. 21, 2016), http://www.statetechmagazine.com/article/2016/01/seattle-shares-body-cam-footage-youtube [https://perma.cc/SWN3-3KUL].
. See, e.g., Police Body Camera Policies: Retention and Release, Brennan Ctr. for Just. (Aug. 3, 2016), https://www.brennancenter.org/analysis/police-body-camera-policies-retention-and-release [https://perma.cc/X3AU-V8H9] (collecting available policies).
. See, e.g., Ryan Calo, Robots in American Law (Univ. of Wash. Sch. of Law Legal Studies Research Paper, Paper No. 2016-04, 2016), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2737598 (discussing history of robot characterization in American law).
. Ryan Calo, Brookings, The Case for a Federal Robotics Commission (2014), https://www.brookings.edu/wp-content/uploads/2014/09/RoboticsCommissionR2_Calo.pdf [https://perma.cc/87LX-3667].
. New York v. United States, 505 U.S. 144, 188 (1992) (“The Federal Government may not compel the States to enact or administer a federal regulatory program.”).
. See, e.g., John Eligon & Timothy Williams, Police Program Aims to Pinpoint Those Most Likely to Commit Crimes, N.Y. Times (Sept. 24, 2015), http://www.nytimes.com/2015/09/25/us/police-program-aims-to-pinpoint-those-most-likely-to-commit-crimes.html (noting predictive policing models typically are “financed by the federal government”).
. South Dakota v. Dole, 483 U.S. 203, 206 (1987) (“Congress may attach conditions on the receipt of federal funds, and has repeatedly employed the power ‘to further broad policy objectives by conditioning receipt of federal moneys upon compliance by the recipient with federal statutory and administrative directives.’” (quoting Fullilove v. Klutznick, 448 U.S. 448, 474 (1980))).