The Problem With Nostalgia (or In Defense Of Alternative Facts)


Language lives in the present, though we often approach it as though it were settled in the past.  These days, the late Justice Scalia’s originalism persists in the courts, Stephen Miller lectures about the Statue of Liberty from the White House press room, and Arturo Di Modica threatens to sue New York for allowing Fearless Girl to complicate his iconic Charging Bull’s message of strength.  Each effort, in its own way, insists that meanings were set long ago, that meanings are fixed and immutable.  But those yearning for the meanings of some bygone era, like those endeavoring to deduce a single, correct meaning from the words on a page, are deluded.  The intractable problem of induction scuttles these projects, and reveals that we cannot ask “What does it mean?” without also asking, “To whom?”

I. Nostalgic Bull

On the eve of International Women’s Day in March 2017, the statue of a plucky, high-topped girl with fists on her hips was installed across the square from Arturo Di Modica’s Charging Bull statue, the three-and-a-half-ton bronze mascot of Lower Manhattan’s Financial District.  Di Modica saw the new statue as a threat to his Bull’s legacy, so the artist gave interviews denouncing Fearless Girl, and ultimately declared he would take legal action.  As his lawyer explained at a press conference: “The placement of the statue of the young girl in opposition to ‘Charging Bull’ has undermined the integrity [of] and modified the ‘Charging Bull.’”1  Where once the Bull carried “a positive, optimistic message,” he went on, now it had “been transformed into a negative force and a threat.”2

The history of the Bull helps put Di Modica’s irritation in context.  Charging Bull arrived—unannounced—on the steps of the New York Stock Exchange in December of 1989.  It was an instant sensation.  As the New York Times reported the day after its sudden appearance: “Hundreds of people walked around, gawked at, admired and stroked the long-horned whip-tailed bull, the image of a surging market in the lore of high finance.”3  The Bull was concocted and installed against the backdrop of Black Monday, the 1987 market crash in which the Dow Jones Industrial Average and the S&P 500 shed about 20 percent of their value in one day,4 and Di Modica announced it to be a symbol of the “strength and power of the American people.”5  He saw it as a “symbol of virility and courage.”6  To him, it stood as a bold assertion of American economic resilience, an indefatigable force.  It was muscular, focused, and committed.

But then time did what time does.  Charging Bull stared down the recession of the early 1990s, then weathered the financial crisis that began in 2007 and stared blankly through the Great Recession that followed.  The Bull stood sentinel outside the citadels of finance even as they lagged other sectors in addressing the racial and gender inequalities in their ranks.  The Bull headlined the Adbusters poster announcing Occupy Wall Street, marking it as ground zero for the encampments that surged across the country in protest against an ever-growing wealth gap.  And the Bull had a front-row seat to the 2016 presidential election, in which three self-described New Yorkers battled for the White House: a socialist decrying economic injustice, a trailblazing woman with a lifelong commitment to public service, and a billionaire businessman with a public history of chauvinism.

After thirty years, Di Modica’s original mission statement for the Bull held less sway in the public conversation than it once did.  The Bull meant something different to millennials and baby boomers, to feminists and socialists, to traders and tourists—and those meanings were beginning to crowd out his original vision.  By the time Fearless Girl squared off against Di Modica’s Bull (bringing its own mission statement and troublesome context with it), the story of the Bull had become considerably more complicated, and the institutions to which Di Modica had dedicated his work held a different—or at least more contested—place in the public consciousness.

Apparently aware of the changing backdrop against which his Bull was set, Di Modica raged, raged against the shifting meaning of his statue and reached for the perfect weapon to fight the battle—his lawyer.

II. A Living Language

Language is a trip.

The definition of “literally” has long allowed for hyperbolic use, which means statements like “it was literally raining cats and dogs” can be proper and correct.  In the 1970 two-volume Webster’s second edition I have within my reach, the entry for “literal” lists this exaggerative meaning immediately below what is often considered the correct use:

  1. real; not going beyond the actual facts; accurate; unvarnished.
  2. virtual; used as an intensive.7

So: “Literally,” according to the dictionary, can sometimes mean “figuratively.”

This chafes many who discover it.  Rule-following grammarians long for better guidelines.  It is silly, they protest, that a word is allowed both to set a rule and to serve as the exception to that rule—and that a word can do such an about-face in the span of neighboring entries in the dictionary.  But on that front, “literally” is far from the only offender.  English is littered with so-called Janus words that allow for contradictory application.  In the same dictionary, “sanction” is said to mean both “support, encouragement, approval,” and “the inflicting of punishment upon a criminal offender” (definition nos. 2 and 6).8  You might be bound to Europe on vacation or bound like a prisoner to a chair; oversight might involve guarding against mistakes or making one; a strike can be a direct hit, or a swing and a miss.

As the good folks at Merriam-Webster noted in response to the perpetual trolling over their definition of literally, “[A] living language is a language that is always changing; this change may be lovely, and it may be ugly.  As lexicographers we are in the business of defining language, rather than judging it.”9

The “always changing” aspect of language can lead to head-scratching for other reasons, too.  Take the word “phone.”  The paradigm of the modern phone in the United States is the iPhone.  But at first blush, an iPhone has little in common with its namesake, the kind of phone now used as a prop on the sets of Victorian dramas.  For starters, modern phones are used for reading and watching media as much as, or even more than, for listening to the voice of a caller in real time.  We still “dial” numbers and make “calls” even though the dial and the switchboard operator it replaced had been abandoned long before the iPhone was introduced.  We “hang up” on people even though phones no longer rest on hooks.

The story of how the iPhone got its common name, then, is a story about incremental shifts in technology that challenged consumers to adapt, slowly expanding the use of “phone” to include something that has little in common with the first devices that enjoyed that nametag.  But even with the benefit of hindsight, it is not obvious why “phone” won out over other words in the language-use version of natural selection.  Televisions were also on the technological march toward smaller, battery-operated, cordless things.  Why don’t we call our iPhones “TVs”?  Personal computers seem like the closest fit, functionally speaking, to the modern phone’s capabilities, but “pocket PC” never caught on.

Now consider the question: Would Alexander Graham Bell recognize an iPhone as a “phone” had he been handed one in 1876?

All of which is to say, when it comes to the Big Question—what does it mean?—absolute certainty isn’t really on the table.  Whether interpreting a work of art or words on a menu, any certainty of meaning we may feel is only relative to our own position and perspective, like the flat ground beneath our feet.  Zoom out just a little, and that flat ground isn’t so flat; in fact, it is not even standing still.  Just as navigating landscapes involves choosing maps with a scale and key that fits your purpose—By land or by sea?  Uphill or downhill?  Around the corner or across the street?—navigating meanings involves selecting scales of history and community.

And no map is accurate forever.

III. A Simple Question

A few months after Di Modica’s lawyers pled for the preservation of Charging Bull’s meaning and for the expulsion of the Fearless Girl, White House aide Stephen Miller took to the podium to announce a new bill aimed at curbing legal immigration into the United States.  There he was asked a pointed question about whether the policy was “trying to change what it mean[t] to be an immigrant coming into this country.”10  The question, lobbed by CNN’s Jim Acosta, came adorned with a quote from the poem that is inscribed on the pedestal of the Statue of Liberty: “Give me your tired, your poor, your huddled masses yearning to breathe free.”11

Miller breezily dismissed the notion that the new policy somehow went against what the Statue of Liberty stood for by noting, “The poem that you’re referring to, that was added later, is not actually part of the original Statue of Liberty.”12

Whatever your take on the immigration bill or the poem, the retort seems to assume that the Statue of Liberty’s meaning was fixed when it was erected, and that a historical anecdote could provide an answer to a question about what it means.  The kind of meaning drift against which Di Modica had raged, Miller simply looked past.  And instead of answering the question posed, Miller’s response serves as a good illustration of just how empty that gaze is.  The time between the installation of the Statue of Liberty and the addition of Lazarus’ poem to its base is about the same as the time between the development of modern peanut butter and the invention of jelly.13  But their separate development doesn’t rebut the fact that “peanut butter and jelly” is a readily identifiable, relatable, and long-lived cultural phenomenon.  It’s as though a CNN correspondent asked someone at Kellogg, “Doesn’t your new decision to make only mustard-flavored ‘jelly’ kind of go against the whole idea of peanut butter and jelly sandwiches?” only to get the response, “Well, peanut butter was developed years before jelly, so it really has nothing to do with it.”

Sorry, Steve, that’s not how meaning works.

After sparring with Acosta, Miller doubled down on the histo-rhetorical approach, later defending the Administration’s “zero tolerance policy” regarding illegal entry—a policy that notoriously resulted in the separation of thousands of immigrant children from their families in a span of months—as “a simple decision” based on existing law.14  Laws, he suggested, are meant to be followed and enforced.  That is what law means.  On his reading, the Trump Administration’s zero tolerance policy was not so much a new policy as a correction of the misappropriation of an earlier, set meaning.

There were, of course, detractors.15

But Miller is far from the only person making headlines—or law—to trot out such interpretive methods.  The late Justice Scalia was famous for his “constitutional originalism,” a judicial philosophy worn as a badge of honor by a number of his intellectual followers.  Originalists view the Constitution in much the way Miller seemed to view the Statue of Liberty, as an amber-encapsulated fossil, a complete statement frozen in time.  As Justice Scalia famously described the philosophy:

[T]he Constitution that I interpret and apply is not living but dead—or, as I prefer to put it, enduring.  It means today not what current society (much less the Court) thinks it ought to mean, but what it meant when it was adopted.16

Miller and the originalists suffer from a kind of nostalgia similar to, though perhaps a good deal stronger than, Di Modica’s.  They pine not for the way things used to be so much as for what things used to mean, as though that were a set and knowable and singular thing.  But while it may be easy—and even a little romantic—to imagine that words and works can be crafted to preserve their original intent indefinitely, it is an illusion.

Di Modica’s press conference with his lawyers revealed him to be a man lost in a changed landscape.  Miller and the originalists claim they can navigate using ancient maps, but they are in uncharted territory.

IV. A Subversive Dictionary

Once, Justice Scalia sparred with a Janus word, or at least something close to one.

In 1994, AT&T and MCI, two giants of the dawning Information Age, battled before the U.S. Supreme Court about the definition of “modify.”17  AT&T insisted that modify had to mean a trifling change, something relatively small or on the periphery.  MCI, though, found an entry in one dictionary that suggested modify could sometimes mean a sweeping or fundamental change.  Justice Scalia railed against MCI’s suggestion.  “Webster’s Third,” he complained of the dictionary cited by MCI, “defines ‘modify’ to connote both (specifically) major change and (specifically) minor change.”  He surveyed six dictionaries and found only one that supported the latter usage.  And he would have none of it.  “When the word ‘modify’ has come to mean both ‘to change in some respects’ and ‘to change fundamentally,’” he declared, “it will in fact mean neither of those things.”  Writing with the confidence of a simple logical syllogism, Justice Scalia ruled for AT&T—and directed lexicographers to get their act together.

You can imagine Di Modica hoping his lawyers could employ the same trick.  Point at the Fearless Girl, he demands, and pronounce it to be a step too far, an aberration, an errant whimsy that must be struck down.  Call it a lie; call it a misuse, he pleads.  Say the meaning Fearless Girl implies for his Charging Bull just isn’t allowed.

And you can see why Justice Scalia might have picked such a fight against the two-faced “modify.”  Janus words present something of a problem for anyone with a fixed-and-sure theory of meaning.  Words that can bluntly contradict themselves demand an appeal to context:

Q: Was his action authorized by his superiors?

A: He was sanctioned.

Um, okay—is that a yes or a no?  Guess I better zoom out a little.

But letting context waltz into the interpretive endeavor is no small thing for someone wanting to get at capital-T Truth.  Because, you know, who is really to say what the relevant context is?  Justice Souter—ever the functionalist—penned a dissent that argued the definition of “modify” did not matter so much one way or the other for the case, and that interpreters of the law should examine different kinds of evidence—like the operation of the law in specific scenarios and with specific consequences.

In a New York Times Magazine piece that followed AT&T’s Supreme Court victory, one commentator with a penchant for etymology proclaimed that Justice Scalia had won his fight with the definition of “modify.”18  But the fight itself is evidence of an insecurity, or at least shows Scalia tugging at the dangling thread that could unravel his originalism.  Even if Scalia was able to strike down ambiguity in “modify,” the episode raises the possibility that he might, in a different case, strike out.  And those dictionaries that Scalia dragooned for his certainty: Are they reflections of the words’ usage, or are they prescriptions for it?19

In the end, “[c]ontext counts,” as Justice Souter later wrote.20  It has to.

So Justice Scalia may have won the battle, but he lost the war.


V. The Impossibility of Permanence

Janus words are like porthole views on the problem.  The real issue for Miller’s answer and the originalists’ method is rooted in their theory of meaning, and it is a problem that stretches further and runs deeper than the ambiguity of any one word.

Meaning is, of course, a slippery subject—one on which tomes have been written, tenure has been awarded, and intellectual factions have splintered.  But a central part of answering the question “How do we know what something means?” begins with addressing the predicate question: “How do we know things?”  And one enduring issue for the pursuit of knowledge is the problem of induction, a problem that has hounded philosophers and scientists for centuries.

Names like David Hume and Nelson Goodman are generally associated with the problem of induction,21 but Bertrand Russell perhaps presented the problem best with his parable of the chicken:

Domestic animals expect food when they see the person who usually feeds them. [But] [w]e know that all these rather crude expectations of uniformity are liable to be misleading.  The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.22

Russell’s chicken enjoyed regular feedings, but that routine could have been evidence of benevolence (the kindly keeper tending to the chicken’s needs) or malevolence (the farmer fattening it for the market).  The chicken’s story illustrates the perils of trying to distill a rule of conduct from observations and evidence.  In mathematical terms, the data underdetermines the function, which is to say, more than one rule of conduct could have produced all of the evidence at hand.  As such, it was impossible to tell from the chicken’s vantage point—till too late—which rule was governing its experience.  And without access to the rule that determines the next result, making predictions—even with the comfort of the past as your guide—is an endeavor shaded with uncertainty and doubt.

The chicken’s dilemma and other riddles like it sparked decades of work to shore up the efforts of science, to describe and justify its inductive process and the methods we use to select certain theories over others.  Why, for example, do we employ the theory of gravity instead of a theory of fairies that would explain all the evidence at hand just as well?  Sure, Newton’s got some interesting ideas, but invisible, winged beasts playing an elaborate joke on humanity could also explain every bit of data we’ve collected so far—and they could change their game at any moment.  Where pure deduction proves an impossible goal, things like falsifiability, simplicity, and consistency with adjacent theories play a role in the ways that science progresses toward claims of knowledge.  Community-elected heuristics are used to fill in what pure reason cannot.

Behind the chicken’s difficulty interpreting the rules of her circumstance lurked a more troubling problem for meaning.  That problem—a problem presented generations ago by the likes of Wittgenstein23 and Quine24 and Kripke25—is that words themselves look a lot like the kinds of rules of conduct that caused the chicken its headache.  A word’s meaning can be viewed as a rule by which the word can be appropriately employed.  Where a mathematical function (say, for all integers n, f(n) = 2n) might aim to describe a rule for the derivation of all members of a set ({even numbers}), a definition (say, for the word “game”) strives to set forth the rules by which a language user could use a word properly (all things English speakers would consider games).  If everything is going according to plan, the rules established in a definition would, in practice, only return comprehensible, socially approved uses.  The problem is, who is to say which should budge—the rule or the social use—when the set of things covered by a definition and the set of things that word describes in the real world do not match up?
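The underdetermination point can be made concrete.  Here is a minimal sketch (an illustration of my own, not drawn from any source cited here) of two candidate rules that agree on every observation collected so far yet diverge on the very next case, which is the chicken’s predicament in miniature:

```python
# Two candidate rules that both fit the evidence f(1)=2, f(2)=4, f(3)=6.
def rule_double(n):
    """The 'obvious' rule: always double."""
    return 2 * n

def rule_bent(n):
    """Agrees with doubling at n = 1, 2, 3, then veers off."""
    return 2 * n + (n - 1) * (n - 2) * (n - 3)

observed = [1, 2, 3]

# Every piece of evidence at hand is consistent with both rules...
assert all(rule_double(n) == rule_bent(n) for n in observed)

# ...but the rules disagree about the unobserved future.
print(rule_double(4))  # 8
print(rule_bent(4))    # 14
```

Any finite set of observations is compatible with infinitely many such rules; nothing in the data alone singles out doubling as the rule actually being followed.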

If using words is a kind of rule-following, then each use—whether in a dictionary or in everyday speech—serves as our only evidence of that rule.  In fact, that is just the way that the people who write the dictionaries look at it.  When the folks at Webster’s defended their “virtual” entry for “literal,” they described “a strong impulse among lexicographers to catalog the language as it is used,” and pointed the finger at the “considerable body of evidence indicating that literally has been used in this fashion [to mean virtually] for a very long time.”26  Blame Joyce, they said.  And Fitzgerald.  And Dickens.  Those great writers used “literally” to mean “figuratively,” and everyone knew what they meant.  Unwilling to stand up to the canon, the lexicographers opted to change the rule for literally instead, accepting our collective understanding as evidence that the rule was, in practice, a bit more nuanced.

But when you recognize that usage is our only evidence of meaning, Justice Scalia’s guiding principle that the Constitution means now what it meant when it was written boils down to a tautology: Constitutional phrases were used back then in the ways they were used back then.  That is an empty premise.  In the end, language users are in the same position as the chicken: Past usage does not dictate future use, and observed usage does not determine unobserved use.  The problem of induction scuttles the originalist’s project.  We can ask what the framers would have said about applying their words to modern situations, but we cannot do so expecting to get the right—verifiable—answer, any more than we can expect to resolve the question of what A.G. Bell would have thought of an iPhone.  The data we have at hand will always underdetermine the uses of a word we will see going forward.

VI. Our Responsibility For Meanings

We can, as a community, take up Justice Scalia’s question—what would the framers think?—and we can knight certain perspectives on what they might have said about modern problems as better or worse, more likely or less, just as we can speculate about what Bell might have thought about an iPhone.  But settling on such a theory requires us to navigate the various options in the present day and select the one that we like best for reasons other than its verifiable rightness, just as we select gravity over fairies.  That is where we face the thorny questions that meaning inquiries must face, questions of which perspectives and what kinds of evidence should inform communities as they resolve how to use words in novel contexts.  As David Foster Wallace once put it in a footnote:

If words’ meanings depend on transpersonal rules and these rules on community consensus, language is . . . irreducibly public, political, and ideological.  This means that questions about our national consensus on grammar and usage are actually bound up with every last social issue that millennial America’s about—class, race, gender, morality, tolerance, pluralism, cohesion, equality, fairness, money: You name it.27

There’s the rub: The present tense of the interpretive endeavor brings all the present-day concerns with it.  And here in the present, we know that our usage norms, as social constructs, not only serve to organize social interactions and identify communities (think: the sophisticated diaereses of the New Yorker versus the breathless exclamation points of HuffPost), but they can also inflict social harm, like painting immigrants in the language of criminal law, the othering speech of racial segregation, or the gender-norming of calling little girls pretty and little boys clever.

And, as with smoking, habits of use and description can be hard to kick, even long after their damaging consequences have been diagnosed and laid bare.  So an interpretive approach such as originalism that pushes aside modern concerns to apply some sense of earlier meanings is a bit like pushing the Surgeon General to remove the warning label on cigarettes so smokers can go back to being as cool as James Dean.  It is not only an odd pretense; it also assumes that smoking was a contributing cause of Dean’s coolness rather than just a symptom of it, that being cool is more important than avoiding cancer, and that James Dean would have smoked had he been rebelling without a cause in 2015 rather than in 1955.  Since “What would James Dean do?” is an unanswerable question by itself, justifying these other assumptions—and explaining why we care in the first place—is the meaty part of any effort to provide an answer to it.

As a matter of logic, past usage cannot determine modern uses.  That means the originalists’ endeavor cannot really be to deduce how the framers would have resolved modern constitutional disputes.  Answering the question, “What does it say?” will never fully address the question, “What does it mean?”28  Rather, originalists must set forth and justify those other heuristics that inform their decisions, the gap-filling tools they use to decide what sorts of evidence and what sorts of communities get to weigh in on constitutional disputes.29

This isn’t just an issue for vague legal phrases.  Ask what “diner” meant in the 1950s, and you are likely to get conflicting answers.  It was a place you were allowed to go or it wasn’t; it was a place to get a sandwich or it was a place to get humiliated.  The answer depends on the community you are interrogating and the evidence you prioritize as worthy of informing the decision.  This doesn’t make the meaning inquiry a question-begging exercise, but it certainly makes it a value-laden one.

Miller and the originalists appear to hide behind a notion that they can find and apply some original meaning—as though deducing the correct use of words and phrases across all contexts and times were possible, and as though that possibility were reason enough to do it.  Neither is a well-thought-out proposition.  In a time when derision for “alternative facts” is ascendant,30 it is actually the rejection of alternatives that runs their reasoning aground.

Ultimately, to long for and attempt to restore meaning from another time and place is little more than an attempt to evade responsibility.  Originalists on the bench cannot escape their agency in the interpretive process, nor can they get out of performing their core duty as judges—rendering judgment—no matter how much they would like to blame the past for dictating their present-day decisions.  Di Modica cannot shout his mission statement for the Bull loudly enough to drown out the question: What does his Bull mean now?  And Miller cannot get away from the question: Would his policy change what it means to be an immigrant in the United States?  Answering that question involves describing the communities and values and experiences that go into determining what it used to mean, and to whom, and what he hopes it means now, and for whom—and that’s the heart of the matter, isn’t it?

[1].        Karen Matthews, Sculptor of Wall Street’s Bull Wants Fearless Girl Removed, [small-caps]Chi. Tribune[end-small-caps] (Apr. 12, 2017),

[2].        James Barron, Wounded by ‘Fearless Girl,’ Creator of ‘Charging Bull’ Wants Her to Move, [small-caps]N.Y. Times[end-small-caps] (Apr. 12, 2017),

[3].        Robert D. McFadden, SoHo Gift to Wall St.: 3½-Ton Bronze Bull, [small-caps]N.Y. Times[end-small-caps] (Dec. 16, 1989),

[4].        See, e.g., Ben Eisen, Ken Jimenez & Tom Destefano, Dow’s Climb Above 23000 Comes 30 Years After Black Monday, [small-caps]Wall St. J.[end-small-caps] (Oct. 18, 2017, 4:35 PM),

[5].        Sapna Maheshwari, Statue of Girl Confronts Bull, Captivating Manhattanites and Social Media, [small-caps]N.Y. Times[end-small-caps] (Mar. 8, 2017),

[6].        History, [small-caps]Charging Bull[end-small-caps], [].

[7].        Literally, [small-caps]Webster’s New Twentieth Century Dictionary[end-small-caps] (unabr., 2d ed. 1970).

[8].        Id. at 1603.

[9].        Did We Change the Definition of ‘Literally’?, [small-caps]Merriam-Webster[end-small-caps], [].

[10].       Philip Bump, Under Trump’s New Immigration Rule, His Own Grandfather Likely Wouldn’t Have Gotten In, [small-caps]Wash. Post[end-small-caps] (Aug. 3, 2017),

[11].     Kyle Swenson, Acosta vs. Miller: A Lurking Ideological Conflict About the Statue of Liberty, [small-caps]Wash. Post[end-small-caps] (Aug. 3, 2017),

[12].     Id.

[13].    The Statue of Liberty was dedicated on October 28, 1886.  Emma Lazarus wrote “The New Colossus” in 1883, but it was not inscribed on the Statue until 1903, seventeen years after opening day.  See Emma Lazarus, [small-caps]Nat’l Park Serv.[end-small-caps], [].  John Harvey Kellogg’s peanut butter patent issued in 1898, Process of Producing Alimentary Products, U.S. Patent No. 604,493 (issued May 24, 1898), while “Welch’s created modern jam in 1918 for World War I rations, calling it ‘Grapelade.’”  The History, [small-caps]Concord Grape Ass’n[end-small-caps], [].

[14].     Julie Hirschfeld Davis & Michael D. Shear, How Trump Came to Enforce a Practice of Separating Migrant Families, [small-caps]N.Y. Times[end-small-caps] (June 16, 2018),

[15].       See, e.g., Former U.S. Attorneys, Bipartisan Group of Former United States Attorneys Call on Sessions to End Family Separation, [small-caps]Medium[end-small-caps] (June 18, 2018), [] (“[A]s former United States Attorneys, we [] emphasize that the Zero Tolerance policy is a radical departure from previous Justice Department policy, and that it is dangerous, expensive, and inconsistent with the values of the institution in which we served.”).

[16].       Antonin Scalia, God’s Justice and Ours, [small-caps]First Things[end-small-caps] (May 2002), [].

[17].     MCI Telecomms. Corp. v. Am. Tel. & Tel. Co., 512 U.S. 218 (1994).

[18].     William Safire, On Language; Scalia v. Merriam-Webster, [small-caps]N.Y. Times Mag.[end-small-caps] (Nov. 20, 1994),

[19].     Perhaps a little of both.  After all, “define” can mean either to “explain . . . the essential qualities” or to “fix or lay down clearly.”  To avoid quoting Webster’s back at Justice Scalia—which would have been an unnecessary insult atop injury—these entries are from Define, [small-caps][end-small-caps] (last visited July 22, 2018).

[20].     Envtl. Def. v. Duke Energy Corp., 549 U.S. 561, 576 (2007).

[21].     Most important for our purposes would be [small-caps]Nelson Goodman[end-small-caps], The New Riddle of Induction, in [small-caps]Fact, Fiction, and Forecast[end-small-caps] (Harvard Univ. Press 4th ed. 1983).

[22].     [small-caps]Bertrand Russell, The Problems of Philosophy[end-small-caps], at vi (1912).  Russell’s book is freely available from Project Gutenberg at

[23].     The one-and-only Ludwig Wittgenstein—but the latter of his two philosopher-selves, as reflected intermittently, in [small-caps]Ludwig Wittgenstein, The Blue and Brown Books: Preliminary Studies for the ‘Philosophical Investigations’[end-small-caps] (Harper Torchbooks 1960)(1958), or impenetrably, in [small-caps]Ludwig Wittgenstein, Philosophical Investigations[end-small-caps] (throwing abstraction atop abstraction, but describing a “beetle” in a box at § 293, which isn’t a terrible introduction on this point).

[24].     See, e.g., [small-caps]Willard Van Orman Quine, Word and Object[end-small-caps] (MIT Press 1960).

[25].     Even the best Wittgenstein is pretty dense, so Saul Kripke earns parallel billing for his interpretive exposition. [small-caps]Saul A. Kripke, Wittgenstein: On Rules And Private Language[end-small-caps] (Harvard Univ. Press 2002).

[26].    [small-caps]Merriam-Webster[end-small-caps], supra note 8.

[27].     [small-caps]David Foster Wallace[end-small-caps], Authority and American Usage, in [small-caps]Consider the Lobster and Other Essays[end-small-caps] 88 n.32 (2005).

[28].     This is a fly in the ointment for textualism, too.  Textualism, originalism’s little brother—broader in vision and less disciplined in practice—would apply a kind of determinacy to all statutory and regulatory utterances.  You can get a sense of this effort in Justice Gorsuch’s early U.S. Supreme Court writings for the majority, where he begins, “as [he] must,” with the statute’s language, Henson v. Santander Consumer USA Inc., 137 S. Ct. 1718, 1721 (2017), follows the “clues” the authors left behind, id. at 1723, and ultimately has the losing party “in retreat” from his superior intellect.  Id. at 1724; see also Murphy v. Smith, 138 S. Ct. 784, 787–88 (2018) (starting “as always . . . with the specific statutory language,” collecting three “clues” about the statute’s meaning, and finding the losing litigant “retreating” from Justice Gorsuch’s arguments).  Though he casts himself as a Holmes figure in these efforts, it comes off a bit naïve, and perhaps a bit desperate.  In any event, here, for example, is what the folks in cognitive science think of the idea that the words on the page are all you need:

The doctrine of determinacy [or, that there is a unique and precise mapping between words and meanings] belongs to a broader conception of language, mind, and meaning, which holds that language is a separate mental “module,” that syntax is autonomous, and that semantics is well-delimited and fully compositional.  This broader conception is not however well-founded.  Over the last few decades, research in cognitive linguistics has demonstrated that grammar is not autonomous from semantics, that semantics is neither well-delimited nor fully compositional, and that language draws on more general cognitive systems and mental capacities from which it cannot be neatly separated.

[small-caps]Ronald W. Langacker, Investigations in Cognitive Grammar[end-small-caps] (Cognitive Linguistic Research) 40 (2009).  This isn’t meant to pass judgment on the, well, final judgments of Justice Gorsuch’s majority opinions.  Rather, it just suggests that his story is not the whole story for justifying those decisions.

[29].     That effort can get uncomfortable for the card-carrying originalist.  As Jamal Greene wrote in what is literally the most important critique of originalism ever penned in a law review article:

Constitutional methodology translates, between word and deed, hope and reality, authority and violence . . . .  A racially sensitive constitutionalism must always, therefore, hold out the possibility of legitimate dissent from history.  Originalism denies that possibility, and so for me, as I suspect for many African-Americans, it speaks in a foreign tongue.

Jamal Greene, Originalism’s Race Problem, 88 [small-caps]Denv. U.L. Rev.[end-small-caps] 517, 522 (2011).

[30].     See, e.g., Nicholas Fandos, White House Pushes ‘Alternative Facts.’ Here Are the Real Ones., [small-caps]N.Y. Times[end-small-caps] (Jan. 22, 2017),

About the Author

Keeps a day job as a lawyer in D.C.  His opinions are his own, though, and do not necessarily reflect those of his clients or employer.

By uclalaw