Abstract
Criminal defendants increasingly face the risks of digital evidence. These risks include intentional manipulation, accidental alteration, and even the threat that visual displays like footage or data visualizations will lure viewers into an unquestioning acceptance of events as they seem to have unfolded. Some scholars have deemed the U.S. evidence system responsive to issues like intentional manipulation or visual prejudice, while others have emphasized the system’s unmitigated vulnerabilities. But the literature writ large has yet to address entire classes of risk flowing from digital evidence. Do the Federal Rules of Evidence (the Rules) mitigate the danger of technological displays that combine visual prejudicial effect on jurors with the impairment of jurors’ ability to assess underlying datasets? Do the Rules guard against video footage that has undergone data loss and morphed into a depiction of a nanny battering rather than coddling an infant? (This latter evidence led to the conviction of a working-class Latina, only for prosecutors to find it “worthless” years later.) Even granting that the Rules are protective, can advocates wield them to ensure fair outcomes in an adversarial system without grasping the issues arising from digital evidence?
U.S. legal scholarship has also not yet conceived of digital evidentiary risk as racially differentiated. The criminal justice system disproportionately charges and sentences racial and ethnic minorities, who often rely on public defenders, within a system that struggles to mitigate the many risks of digital evidence. From this backdrop emerges an inequitable distribution of possible harms. Prosecutors have greater access to digital evidence and expert testimony, while public defenders are short on resources and may lack the skills to contest admissibility and weight. Left in the wreckage are criminal defendants, overwhelmingly from vulnerable communities, facing perilous consequences amid hesitant advocacy in an adversarial system.
This Comment underscores the threat that digital evidence poses to racial justice. Part I examines how the risks of digital evidence are disproportionately borne by racial minorities. As this type of evidence becomes increasingly prevalent, it is more frequently introduced against racial and ethnic minorities, who are overrepresented as criminal defendants. This heightens the need for public defenders to counteract its risks, but some evidence shows that they need greater training to do so. In light of these disparities, Part II outlines particular categories of digital evidentiary risks—complex technological prejudice and unintentional modification. Part III conducts a race-sensitive analysis of Rule 902(14)’s efficacy in protecting litigants against such risks. Finally, Part IV suggests open-source intelligence (OSINT) investigative human rights coursework as one powerful remedy to equip public defenders and other advocates to understand, identify, and contest the admissibility and weight of digital evidence, thereby helping mitigate its risks to racial justice.
