a brief Thanksgiving meditation

One night in the week following the attacks on Paris, Los Angeles had a small windstorm.  It wasn’t an emergency, by any means, but at one point the winds got fast and violent enough to shake the thin, loose windows of our aged building back and forth so that they rattled hard in their frames.  The sound woke me up, and as has happened dozens of times since I returned from Iraq, I jolted awake, nerves electric, brain already conjuring an explanation for the sound — an explosion, maybe the building is coming down, maybe the windows are about to spray inward, showering us with glass.

Nothing like that is happening, of course.  This is Los Angeles in 2015, not Iraq in 2008.

But still, any loud sound when I’m sleeping — the clang of the machines being unloaded at a nearby construction site, the sudden whoop of a police siren, a motorcycle’s ripping throttle — can do that to me.  It’s just something I learned one morning on FOB Warhorse, after a long night shift, trying to get some sleep in a noisy, cavernous tent.  Just after I’d gotten into the precious deep sleep, a long, metallic whine slowly penetrated my consciousness, growing louder and louder — it sounded exactly the way it sounds when a plane goes down in a cartoon:

nnnnnnnyyyyyeeeeeeeeeeeeeeeeeeeeeeeeoooooooow! BOOM!

I flipped off my cot and onto the floor, knowing all at once that lower was better.  I looked over to see my bunkmate on the floor, too, both of us laughing and scrambling for Kevlar helmets and body armor.  There were a couple more, and then silence.

Rocket and mortar attacks in Iraq were not really a substantial source of casualties, I think — they were harassing at best, a way for the enemy to fuck with us.  Very shortly after we arrived in country, someone found a dud rocket lodged at the base of my office’s cinderblock wall. EOD was called; life went on.

There were enough attacks that the details run together now — was the one that hit the “movie theater” (a small warehouse with a little screen and a video projector) the same as the one that destroyed the housing unit of someone I knew?  (No one was in it at the time.)  Which one was the attack where I actually hid in one of those roadside concrete shelters?  Then there are other memories — while in transit once, I got to stay for several days at Balad Air Base/LSA Anaconda, which was comparatively fancy and had not only a proper movie theater but a swimming pool as well.  It was also nicknamed “Mortaritaville” by the people stationed there, and once while I was at the pool the mortar attack alarm went off, and we all jumped up out of the pool and ran into the changing rooms.  As we stood there in our “ARMY” shorts, excited and shivering, I looked around at the large glass windows all around the tops of the walls, and I envisioned them blowing inward, shrapnel and glass flying toward us.

But for some reason it’s the attack that woke me up out of a dead sleep –and that loud Snoopy’s-going-down-with-the-Sopwith-Camel sound — that stayed with me.  For some reason it’s that experience I live over and over again: first the loud sound, and then the certainty of impending disaster.

Rarely, now, do I actually suspect I’m under attack.  Last week was an exception — maybe brought on by thinking about Paris (and, yes, Lebanon, and Kenya, and Mali, and now let’s add Tunisia to the list).  Sometimes my brain imagines an earthquake, the whole side of our building falling away.  Sometimes it gets very large-scale indeed, as I lie in bed in the dark wondering if it’s possible for the Earth to just fall suddenly out of its orbit and roll into the sun like a marble into a drain.  I wonder if I would have time to comfort my son before we all die.


My time in Iraq wasn’t that bad, and no one should feel sorry for me. As I’ve written before,

I’ve never, as we say in this line of work, engaged the enemies of the United States of America in close combat. Had some rockets and mortars lobbed at me, but they never got closer than a hundred yards away.

And as I’ve also written before,

I wasn’t one of the soldiers kicking in doors . . . .

I didn’t do the killing myself. I never put anyone in jail myself.

[Iraq] didn’t give me PTSD; I don’t have flashbacks and I’m not depressed.

And that’s true.  I found my time in Iraq deeply annoying and frustrating, and I spent much of my time there feeling incredibly angry at the pointless war and ambivalent about my own part in it.  (See previous link for more on that score.)  But on the other hand, things were not that bad for me.  I was pretty much exclusively a “fobbit” — quite by design, I never left the FOB except to travel to another FOB.  I was not out there looking for roadside bombs, and the worst thing I faced on a day-to-day basis was shitty food, an uncomfortable environment, loneliness and the anxiety that comes with being kind of bad at your job.  (I got better over time.)  There was a gym.  There was internet access.  I even had time to record an album of weird electronic music while I was there.  It was almost certainly the best war experience any soldier in history has ever had, in terms of amenities.   Comparatively, I was fine.

But I do still carry this one little thing with me, this inability to recover gracefully from being woken up by a loud sound.  It’s a bit of a nuisance — I live in the densest and noisiest part of a dense and noisy city.  But it’s only one small scar.

It’s the smallest amount of damage one could possibly have from war.  But I suspect no one leaves without some damage.  And if there are thousands and thousands of well-off fobbits like me, with some tiny bit of damage, some little bit of weakness we didn’t have before, you can only imagine the cumulative damage of the door-kickers and war-fighters, the EOD guys who went bomb-hunting in their big armored trucks, the low-level electronic surveillance folks who went out into the towns to hunt the bad guys on foot, the transpo contractors who drove supplies along some of the world’s most dangerous roads, the helicopter pilots, the translators.  And that’s to say nothing of millions of civilians, for whom life under Saddam had perhaps not been great, but upon whom we unleashed a living hell, as they were repeatedly victimized by all the competing armies that arose in the power vacuum after the invasion, battered by our attacks on what we hoped were terrorists, and destroyed by the ruination of their local economy.

I carry the smallest possible scar from war.  But I’m on the ledger, and that ledger is long — miles long.  And we should read it, the whole thing, before we listen to rich politicians talk about how tough they’d be if they got a chance to sit behind the war machine and pull the trigger.


This Thanksgiving, I’m thankful that I got to come home.  I’m thankful that almost everyone I knew got to come home, too.  But mostly I’m thankful that I live in a country that, 14 years after 9/11, seems genuinely wiser, more cautious about war, and less eager to be fooled by banner-waving salesmen.

Happy Thanksgiving to you and yours.  And for fun, here’s the OTHER, probably obligatory rumination on war and Thanksgiving.  “You wanna end war and stuff, you gotta sing loud!”


brief follow-up on the previous post

One thing I didn’t mention in my post about calls to lift the gun carry ban on military bases is the little-discussed link between guns and suicide. This is important, because the military has, since the start of our recent wars, experienced a sharp increase in servicemember suicides.

Whatever you think of the evidence linking gun control to crime prevention, the link between gun control and suicide prevention seems pretty solid to me. For example, the Washington, D.C. handgun ban and the Australian National Firearms Agreement were associated with significant drops in the suicide rate, and the Brady Act was associated with a drop in suicide among people over 55.

I’m never quite sure what to do with this information. Suicides by gun far outnumber gun murders, yet suicide is routinely left out of the gun control debate. Maybe it should be! There’s a perfectly legitimate argument that suicide is the individual’s prerogative. As a general matter, we should avoid telling people what to do with their bodies. And life is hard, and not everyone is equipped to meet the challenge. I see no reason to insist that people live in a prolonged state of existential misery to satisfy someone else’s sense that suicide is “wrong,” in some mystical sense.

On the other hand, it also seems obvious that at least some people who commit suicide might otherwise get past their life-grief and go on to have a later life that is, on balance, worth living. And it seems like this might particularly be the case for soldiers, who are often young, who may be suffering a variety of service-related (or not) mental illnesses that are strong predictors of suicide, and who live, at least temporarily, in a culture that values toughing it out over seeking help or admitting weakness (often assumed to be the same thing).

How that should factor into any discussion of a plan to make guns more freely available on military bases, I don’t know. But it’s something that, unsurprisingly, I haven’t seen mentioned by any of the congressional representatives pushing this idea.


Congress eyes interesting experiment re gun violence

Following a series of mass shootings at military facilities, some members of Congress have proposed allowing soldiers to carry weapons on base:

Congressional leaders said Friday they will direct the Pentagon to allow troops to carry guns on base for personal protection following a deadly shooting rampage in Tennessee that killed four Marines and seriously wounded a sailor at a recruiting center . . . .

Gun proponents have been calling for the Defense Department to lift its current policy, which allows only security and law enforcement to carry loaded guns on military facilities outside of war zones, since Army Maj. Nidal Malik Hasan killed 13 people and wounded more than 30 in a shooting spree at Fort Hood, Texas, in 2009.

I don’t know how much effect this would have on premeditated mass shootings, which are relatively rare and don’t seem to be prevented by open carry laws off-base. But it’s an interesting proposal for another reason: it would provide an opportunity to test theories of gun violence in what is likely to be as close to a controlled environment as possible.

As noted above, currently servicemembers are generally not allowed to carry weapons on base, except under certain limited circumstances. Guns typically must be registered and, if the servicemember lives in the barracks, they must be stored in the unit armory, not in one’s room. Servicemembers who live in family housing are usually allowed to keep weapons in the home, but, for example, at Camp Pendleton

[a]ll weapons and ammunition [must] be stored in approved containers. Weapons containers must be capable of being locked. All weapons will be fitted with a trigger lock.

Moreover, such regulations have, presumably, a somewhat higher compliance rate than their civilian analogs (e.g., the regulations challenged in D.C. v. Heller). Soldiers are trained to prize the safe handling of firearms; they are more invested in following, and more informed about, applicable regulations than civilians typically are; and military housing is, at least in principle, subject to occasional inspection.

In short, military servicemembers on base live under a pretty strong and reasonably effective gun control regime.

They also live in a fairly closed environment. Although servicemembers can bring friends and acquaintances onto military bases with them, and a number of civilians work on every base, most bases are closed to the general public. Perhaps as a result, and because the people who do have access to military bases are heavily invested in and strongly identify with the military as a community, crime on military bases is quite low. (It also surely can’t hurt that by definition everyone on a military base is meaningfully employed or is the dependent of someone who is meaningfully employed.)

Removing the gun controls, then, would provide a nice opportunity for an observational study to determine whether an increased gun presence in a stable environment would make the base community safer or less safe. And this might provide some insight into whether guns in the civilian community make people safer or not — a hotly contested question.

I can think of a few possible confounding factors that might either muddy the data or make it hard to extrapolate the results to civilian society. For one thing, soldiers and Marines (and to a lesser extent airmen and sailors) are already trained in and comfortable with the use and carry of weapons. Many have, of course, carried extensively overseas; during deployment, a soldier’s weapon is nearly always on his body. Despite the intense pressure of the combat environment, there are few non-mission-related shootings. This suggests that military discipline is pretty effective in creating people who use guns professionally but not out of passion. (Alternatively, it could also be that the military is skewed toward people less likely to commit violent crimes to begin with — for example, a Heritage Foundation study found that military recruits as a population are wealthier and better educated than the populace as a whole.)

The flip side of that is that exposure to intense combat experiences seems to be linked to an elevated risk of violent criminal behavior after one returns to the U.S., presumably due to untreated psychological trauma. How that would affect the study is unclear. Would it artificially elevate levels of gun crime? Or, now that the wars have wound down and those suffering trauma have begun to rotate out of the military, will there be a concomitant drop in violent crime unrelated to the change in on-base gun regulations? I don’t know the answer to that, although I would think a carefully-designed study could take it into account.

Finally, of course, military bases are full of, well, military-aged males — i.e., the demographic that commits the overwhelming majority of violent crime.  Fighting is not uncommon on military bases, and drinking is a heartily embraced pastime.  Mostly soldiers go home and sleep it off, but if young, single men in the barracks had access to weapons during their off-hours, there’s the potential for drunken brawling to become something more.  (This makes the stateside military base quite a different environment from the bases in Iraq and Afghanistan, where soldiers are constantly armed but there is little unsupervised downtime and alcohol is hard to come by.)  That demographic skew would make it hard to port statistics directly to the population at large, though one assumes apples-to-apples comparisons could be made.

It should also be said that this is only proposed legislation. Still, should it become law, we’d have an opportunity to closely observe what, if anything, happens to community crime levels when gun control is suddenly radically curtailed and guns become more common in shared spaces.


facial challenges and the Fourth Amendment

My friend Matt Price (a filmmaker here in L.A. — check out trailers for his latest horror-comedy) sent me this interesting Volokh Conspiracy post-mortem on Los Angeles v. Patel, a case involving an L.A. ordinance requiring hotel operators to keep records about their guests and make those records available to the LAPD. Nicholas Quinn Rosenkranz suggests that the Court missed an opportunity to articulate a simple standard for determining when “facial” challenges to a law are appropriate, as opposed to “as-applied” challenges:

The idea here is that one can determine whether a facial or as-applied challenge is appropriate by determining which government actor is bound by the relevant clause and thus who allegedly violated the Constitution. The First Amendment begins “Congress shall make no law …” and so its subject is obviously Congress. A First Amendment claim is inherently a claim that Congress exceeded its power and violated the Constitution by making a law, on the day that it made a law. For this reason, it makes perfect sense that the Court is much more amenable to “facial challenges” in the First Amendment context. A First Amendment claim cannot be “factbound,” to use Scalia’s formulation, because the alleged constitutional violation, the making of a certain law, is completed by Congress before any enforcement facts arise.

But the first clause of the Fourth Amendment is entirely different. It does not say “Congress shall make no law…,” like the First Amendment. It does not, by its terms, forbid legislative action. Rather, it forbids unreasonable searches and seizures — which are paradigmatically executive actions. Here, enforcement facts are relevant to the constitutional analysis; indeed, here, the enforcement facts, the facts of the search or seizure, are the constitutional violation. This is why Alito’s parenthetical for Sibron is so apt: “[t]he constitutional validity of a warrantless search is pre-eminently the sort of question which can only be decided in the concrete factual context of the individual case” (emphasis added). In this context, it is the execution of a search (by the executive), not the making of a law (by the legislature), that allegedly violates the Constitution. This is why, in the parenthetical for the next citation, Alito chooses to quote the penultimate sentence of the Manhattan Institute brief: “A constitutional claim under the first clause of the Fourth Amendment is never a ‘facial’ challenge, because it is always and inherently a challenge to executive action” (emphasis added).

This is an intriguing idea, but I’m not sure the reliance on “who” (i.e., the branch of government that acts) actually does the work we want it to do in all cases.

To start with, no one actually thinks that the First Amendment only applies to legislatures. If the L.A. Parks Department has an internal, written policy (not authorized by the Legislature) of never allowing socialists to demonstrate in the parks, that policy would be unconstitutional. It would probably also be subject to a facial challenge.

Similarly, it seems to me no big leap to say that a legislative body could pass a law that would have no purpose other than authorizing Fourth Amendment violations, and I see no reason why that could not be subject to a facial challenge as well. For example, a legislature could pass a law authorizing searches of houses according to normal warrant procedures when there is probable cause, and then pass a separate law authorizing warrants for house searches when there is not probable cause, as long as three officers agree that there is reasonable suspicion of a crime (a lesser standard of proof). (Let us assume that there is some mechanism to prevent use of the law in situations where it would be constitutional — perhaps the officers and the magistrate must both certify that there really is no probable cause before invoking the reasonable suspicion provision.) The latter law would serve no purpose except to evade the strictures of the Fourth Amendment, and I see no reason why it could not be subjected to a facial challenge and entirely invalidated.


Matt raised a second issue in his Facebook message to me: “It’s a loaded gun; why do we have to wait for them to pull the trigger before we take them to court?” That question does illustrate a quandary posed by my suggestion that some laws could be facially invalid under the Fourth Amendment: does anyone have standing to challenge my hypothetical law before their house is searched?

I discussed standing when I did my “Blogging Fed Courts” series a couple of years ago:

Article III gives federal courts jurisdiction over “cases” and “controversies.” The terms are not defined, but collectively “cases and controversies” have been taken to be a mere subset of “all things you might be upset about.” In other words, you can’t bring an action in federal court just because you’re pissed off about something. Even if you’re totally right about it. You have to present a “case or controversy,” involving yourself, which the court could lawfully resolve.

The ability to successfully bring suit is usually called “standing.” (It’s a noun — you “have standing” to bring the suit.)

Here are the rules for standing. You have to have been personally injured. Your injury has to be “cognizable” by the courts — there may be injuries that, though real, the court is not prepared to acknowledge. The injury has to be “fairly traceable” to the conduct of the party you want to sue. And it has to be “redressable” by the courts — that is, the remedy you’re seeking in the suit has to be something that would actually fix the problem.

So, to answer Matt’s “loaded gun” question: generally you have to have suffered an “injury” before courts can hear your case.  It turns out, though, that this is a rule more honor’d in the breach when it comes to the First Amendment.  Consistent with Rosenkranz’s notion that a First Amendment violation occurs the moment a legislature legislates, one can have standing to bring a First Amendment complaint even when one has not yet engaged in the protected speech.  The idea is that if you wish to speak, but are afraid to do so because of legal consequences (if your desire to speak is “chilled,” as the cases often say), that is enough to count as an “injury” under the First Amendment.

But the same argument doesn’t necessarily hold for our hypothetical warrant law. There is no particular lawful activity being chilled. (Being chilled in one’s desire to keep contraband doesn’t count, because, according to the Supreme Court, “any interest in possessing contraband cannot be deemed ‘legitimate.'”) There is no constitutionally protected activity being prevented, and so it would be hard to make a case for standing prior to an actual search.

As to standing, then, I think Rosenkranz’s legislative/executive distinction makes sense — there really can be a “loaded gun” lying around, and we can’t do anything about it until the executive branch picks it up and uses it.


the prayers of both could not be answered

In the runup to Independence Day, Sam Goldman argues at Crooked Timber that the Declaration of Independence loses its force if you leave out God:

[T]here is no good reason to treat “Nature’s God” as the key to the Declaration’s theology. Jefferson avoided references to a personal deity in his draft. But either Franklin or Adams added a reference to the “Creator” in the natural rights section devised by the committee of five. And Congress inserted the phrases “Supreme Judge of the World” and “Divine Providence” in the conclusion. If we read as Allen proposes, these phrases should have equal weight to “Nature’s God”.

Taken together, these statements give a picture of God that is not so easily replaced by “an alternative ground for a maximally strong commitment to the right of other people to survive and to govern themselves.” They depict God as the maker of the universe, who cares for man’s happiness, gives him the resources to pursue it, and judges the manner in which he does so. Despite Allen’s assurances that Jefferson tried to avoid religious commitments, writing in a manner compatible with deism, theism, and everything in between, I do not see how the God that emerges from the entire process of composition, could be reconciled with a mere first cause or cosmic watchmaker. On the level of intention, the Declaration presumes a personal and providential deity.

I have always interpreted Congress’s pious insertions as “Flag! Troops!”-style pandering rather than sincere expressions of a belief in a “providential deity” who directs or judges the doings of secular governments.  But are they, in fact, philosophically necessary to the project of the Declaration?

Over at the Volokh Conspiracy, Randy Barnett cites a sermon given on the eve of the constitutional convention that expressed a view, common at the time, that the laws of government, like the laws of mechanics, chemistry, and astronomy, were immutable principles of nature, established by God:

In his sermon, [Reverend Elizur] Goodrich explained that “the principles of society are the laws, which Almighty God has established in the moral world, and made necessary to be observed by mankind; in order to promote their true happiness, in their transactions and intercourse.” These laws, Goodrich observed, “may be considered as principles, in respect of their fixedness and operation,” and by knowing them, “we discover the rules of conduct, which direct mankind to the highest perfection, and supreme happiness of their nature.” These rules of conduct, he then explained, “are as fixed and unchangeable as the laws which operate in the natural world. Human art in order to produce certain effects, must conform to the principles and laws, which the Almighty Creator has established in the natural world.”

Goodrich goes on to explain that one who attempts to skirt these scientific laws of government “attempts to make a new world; and his aim will prove absurd and his labour lost.”

The Declaration opens by stating that “the Laws of Nature and of Nature’s God entitle” the colonies to “dissolve the[ir] bands” with England and “assume among the powers of the earth, the separate and equal station” of independent states.  So Reverend Goodrich’s theory of natural laws of government is quite relevant to the Declaration’s purposes.

Of course, even if you think that the Declaration depends on this natural law theory, you don’t need God to make it real. If there are immutable laws of nature, they can, analytically, exist without any “personal and providential deity” — they can be the work of a distant, uncaring God, a malevolent God, or no God at all.

Goldman points out, however, that the history of the world does not actually seem to support a theory that there are laws of government mechanics (at least not straightforward ones) — respect these rights, or your government will fail:

Th[e] argument is perfectly coherent, given its premise that oppression is counterproductive. The problem is that this premise is likely false. Assertions of rights are often crushed, without much risk to the oppressors. Because they didn’t produce the forecast bad consequences, a purely naturalistic interpretation of the matter would lead us to conclude that these movements had no “right” to succeed.

That conclusion . . . would not be acceptable to Allen—or to the signers of the Declaration. Again, this is why “Nature’s God” is not good enough. In addition to the source of natural order, the Declaration’s God has to care how human events turn out—and perhaps to intervene to ensure that the results are compatible with justice. Otherwise, the signers’ pledge of their lives, fortunes, and sacred honor would be no more than a gamble—and a bad one at that.

But this argument, I think, undermines itself. If God is willing to intervene to ensure the successful outcome of legitimate assertions of rights, we should see a clear pattern of that in history. Whether it’s a law of nature, or God’s desire to create good outcomes for His creatures, if it is reliable enough to be depended upon when one is pledging lives, etc., there ought to be evidence of it. If the evidentiary record shows only chaos, then one should not depend on the intervening force, whether natural or supernatural, to secure the success of one’s revolution.

Goldman then turns to the practical effect of religious belief — as a motivation-multiplier — by considering the way Lincoln, during the Civil War, explicitly invoked God’s will to re-cast the “all men” of the Declaration’s most famous passage:

Individuals are capable of believing almost anything for almost any reason—and even of acting on that basis. But that is a matter for psychology. The political question is whether groups and peoples can be moved to take risks and make sacrifices if they do not think they are justified by a higher power. I am skeptical that this is the case . . . .

This is important because the Declaration is not, as Allen claims, “a philosophical argument”. Instead, it is a call to arms. People generally don’t fight for “commitments” and “grounds”. For better or for worse, they do fight for what they believe God demands.

The Declaration’s greatest interpreter, Lincoln, seems to have recognized this. Before the Civil War, Lincoln treated the Declaration as a work of secular reasoning. In a famous letter from 1859, Lincoln compared its argument to Euclidean geometry. According to Lincoln, “[t]he principles of Jefferson are the definitions and axioms of free society.” To understand politics, all one had to do was draw valid conclusions from certain first principles.

But definitions and axioms are terms of the seminar room, not the battlefield. Although [it] might have been suitable for peacetime, Lincoln’s scholarly account of politics was manifestly inadequate to a war that revolved around the meaning and authority of the Declaration of Independence. So in his second inaugural, he offered a different account of the same principles. This time, he appealed to a “living God” to achieve the right. You do not have to be a Christian to understand what Lincoln was saying. But I do not think you can be an atheist.

I think this is Goldman’s best argument.  People don’t like to stick their necks out for uncertain change, and even though, as discussed above, it doesn’t actually make sense to depend on Providence when asserting one’s rights, still, people can be motivated by irrational appeals to a God who, this time, will definitely stretch out His mighty hand and propel your cause to victory.

But before we give too much ground here, let’s note the peculiar theology of Lincoln’s appeal in the Second Inaugural Address:

Both [the Union and the Confederacy] read the same Bible and pray to the same God, and each invokes His aid against the other. It may seem strange that any men should dare to ask a just God’s assistance in wringing their bread from the sweat of other men’s faces, but let us judge not, that we be not judged. The prayers of both could not be answered. That of neither has been answered fully. The Almighty has His own purposes. “Woe unto the world because of offenses; for it must needs be that offenses come, but woe to that man by whom the offense cometh.” If we shall suppose that American slavery is one of those offenses which, in the providence of God, must needs come, but which, having continued through His appointed time, He now wills to remove, and that He gives to both North and South this terrible war as the woe due to those by whom the offense came, shall we discern therein any departure from those divine attributes which the believers in a living God always ascribe to Him? Fondly do we hope, fervently do we pray, that this mighty scourge of war may speedily pass away. Yet, if God wills that it continue until all the wealth piled by the bondsman’s two hundred and fifty years of unrequited toil shall be sunk, and until every drop of blood drawn with the lash shall be paid by another drawn with the sword, as was said three thousand years ago, so still it must be said “the judgments of the Lord are true and righteous altogether.”

This is grim stuff. It is certainly not the “just world” theology of much of the Old Testament, which spends loads of time explaining that the moral ledger always comes out right. As this thoughtful exegesis by Michael Carasik notes, for example, the two most familiar slavery narratives — Joseph being sold into slavery by his brothers, and the enslavement of the Israelites in post-Joseph Egypt — depict enslavement as a punishment for wrongdoing: Jacob’s betrayal of Esau and Joseph’s oppressive dealing with the Egyptian people, respectively. Notably, in neither narrative is the oppressed person the actual wrongdoer — Joseph was not involved in Jacob’s swiping of Esau’s birthright, and the latter-generation Israelites had nothing to do with Joseph’s opportunistic enslavement of the Egyptians. They simply inherit the consequences of the original sin. For that reason, perhaps, Joseph and the Israelites both eventually rescue themselves from slavery, albeit with God’s help.

Lincoln’s framing of the issue is rather different. He does not suggest that African slaves are the inheritors of some sin that must be punished. They are innocent victims. It’s just that “slavery is one of those offenses which, in the providence of God, must needs come.” Nor does he suggest that God would ever have assisted the slaves in freeing themselves — and, indeed, history at the time was littered with the bodies of failed slave revolutionaries.

At best, the focus is on Pharaoh’s redemption — “the woe due to those by whom the offense came,” which is repaid in gold and blood (likely, despite Lincoln’s words, at a heavily discounted rate).  But Lincoln’s formulation requires a number of odd turns.  God allows slavery, for no good (or at least, no identifiable) reason.  Humans make it happen, sure, and so it’s just that humans repay the debt.  But that debt seems like a poor motivator to the average Northern foot soldier, who bore little individual responsibility for slavery.  If we return to Goldman’s original proposition — that people fight for what God commands, trusting that He will render them victorious — the question is: why now?  What has changed?  Certainly not the Bible, nor Christian theology — all the Christian arguments for and against slavery existed at the founding of the Republic and remained in effect in 1865.  Does God simply lure people and nations into evil and then demand the debt?  What is going on with this God, anyway?

Lincoln’s God reminds me of a class taught at the University of Chicago in the 1990s called “The Radicalism of Job and Ecclesiastes.” I never got a chance to take it, but even reading the description in the catalog, I, as a then-religious person, immediately grasped its significance . . . and was afraid:

Both Job and Ecclesiastes dispute a central doctrine of the Hebrew Bible, namely, the doctrine of retributive justice. Each book argues that a person’s fate is not a consequence of his or her religio-moral acts and thus the piety, whatever else it is, must be disinterested. In brief, the authors of Job and Ecclesiastes, each in his own way, not only “de-mythologizes,” but “de-moralizes” the world.

Lincoln was not an orthodox Christian, and he may well have felt that pious acts “must be disinterested” — that we must do what is right, utterly independent of reward. Indeed, the Second Inaugural, when read closely, does not promise victory — it promises toil and bloodshed that might last until the end of time. And it does not present any particular reason to trust in God as either a moral arbiter or an ally — God’s motives and sense of timing are utterly inscrutable to man, and hence no more comfort than the chaotic void. Lincoln’s theology is too sophisticated to promise, as the Battle Hymn of the Republic does, that God will “loose[] the fateful lightning of His terrible swift sword.” All Lincoln can offer is a resonant reflection that slavery is wrong and that those who fight to end it are in the right. Lincoln’s appeal, ultimately, is not to a “personal and providential god,” but to the internal compass, which demands action.


Anyway, those are this atheist’s thoughts. I don’t know that the Declaration would have convinced me God was on the side of the Revolution, and I’m not sure Lincoln would have convinced me (theologically, at least) that God had moved decisively to end slavery. But I grew up with religion, and my sense of justice is inextricably intertwined with faith. I believe — irrationally, and in my heart — in a kind of platonic Justice that exists even when no one is fully manifesting it. And I want to believe that elegant old metaphor about the moral arc of the universe being long but bending toward justice. On that level, I think, Lincoln could have gotten me.


In that vein, here’s Sister Odetta to sing us out.


thoughts on the threat to American democracy

The Supreme Court’s decision in Obergefell v. Hodges is being praised both for its practical outcome — making same-sex marriage, like opposite-sex marriage, a constitutionally-protected “fundamental right” — and for Justice Kennedy’s warm language celebrating and defending marriage:

No union is more profound than marriage, for it embodies the highest ideals of love, fidelity, devotion, sacrifice, and family. In forming a marital union, two people become something greater than once they were. As some of the petitioners in these cases demonstrate, marriage embodies a love that may endure even past death. It would misunderstand these men and women to say they disrespect the idea of marriage. Their plea is that they do respect it, respect it so deeply that they seek to find its fulfillment for themselves. Their hope is not to be condemned to live in loneliness, excluded from one of civilization’s oldest institutions.

The decision is also morally the right one. But there is, of course, still the question of whether the ruling makes sense as a matter of law. I think it does, based on the cases that have come before, the role of judges as interpreters of the Constitution, and the basic structure of the Constitution. Here are my thoughts.


The majority’s opinion is rooted primarily in the Due Process Clause of the Fourteenth Amendment, which reads:

No state shall . . . deprive any person of life, liberty, or property, without due process of law . . . .

Obviously, the primary function of this clause is to ensure that people subject to U.S. law get “due process of law” before the government works a deprivation on them — so, for example, the government cannot (subject to some exceptions in emergency situations) seize your car without a hearing. The police cannot take you into custody without probable cause, they cannot hold you for very long without a hearing, and the state cannot put you in prison or kill you (again, subject to certain exceptions) without a trial. There must be some formal (and fair) “process” before life, etc. can be taken from you — and the greater and more intrusive the deprivation, the more process is required.

However, as the majority opinion in Obergefell explains,

This Court has interpreted the Due Process Clause to include a “substantive” component that protects certain liberty interests against state deprivation “no matter what process is provided.” The theory is that some liberties are “so rooted in the traditions and conscience of our people as to be ranked as fundamental,” and therefore cannot be deprived without compelling justification.

Of course, some of these rights are already specifically enumerated in the Constitution: the rights to free speech, religious expression, arms-bearing, jury trial, and so on. But these enumerated rights are not the only rights that are typically understood to be fundamental, and constitutionally protected — nor, by the plain text of the Constitution, should they be:

The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.

Thus, the Court has occasionally found constitutionally protected such rights as the right to travel between states, the right to raise one’s children, the right to decline medical treatment, the right to use birth control, and the right to autonomy in one’s sexual behavior.  (Somewhat more controversially, the Court has also found a constitutional right to have an abortion — a right whose outer boundaries have contracted somewhat since the Court’s initial decision — and a right to contract free from interference by government regulators — a right which has subsequently evaporated altogether.)

The right to marry is another of these unenumerated constitutional rights. In Loving v. Virginia, the Court overturned a law criminalizing interracial marriage, in part because

Marriage is one of the “basic civil rights of man,” fundamental to our very existence and survival. To deny this fundamental freedom on so unsupportable a basis as the racial classifications embodied in these statutes, classifications so directly subversive of the principle of equality at the heart of the Fourteenth Amendment, is surely to deprive all the State’s citizens of liberty without due process of law. The Fourteenth Amendment requires that the freedom of choice to marry not be restricted by invidious racial discriminations. Under our Constitution, the freedom to marry, or not marry, a person of another race resides with the individual, and cannot be infringed by the State.

That very strong statement of the right — one of the strongest in the “fundamental rights” canon — announces a couple of important principles. First, the right to marry is important enough that it cannot be infringed on an “unsupportable basis” — that is, irrational reasons won’t do. Second, it is not enough that one be able to marry someone; a core part of the right to marry is the ability to marry the person of one’s choosing.

These principles undergird the Court’s decision in Obergefell, of course. Bans on gay marriage are logically “unsupportable,” in that proponents have been unable to advance any reason for the bans that would not also apply to some sizable portion of straight marriages. And they interfere with the fundamental right of personal choice in precisely the way that anti-miscegenation laws do.

The majority also points to two other cases affirming the right to marry — Turner v. Safley, which held that the Missouri Division of Corrections could not prevent inmates from getting married, and Zablocki v. Redhail, which invalidated a Wisconsin statute requiring persons owing child support to prove to a court that they were current on their payments before they could get married. Both cases underscore how serious the state interest must be — and how well-founded the rationale — to impinge on the individual’s interest in marriage.

In Zablocki, the state argued, among other things, that

the statute provides incentive for the applicant to make support payments to his children.

But the Court found that a state interest in collecting child support — a quite profound interest — did not justify the imposition on the right to marry, because the chosen method was unlikely to create the desired result:

First, with respect to individuals who are unable to meet the statutory requirements, the statute merely prevents the applicant from getting married, without delivering any money at all into the hands of the applicant’s prior children. More importantly, regardless of the applicant’s ability or willingness to meet the statutory requirements, the State already has numerous other means for exacting compliance with support obligations, means that are at least as effective as the instant statute’s and yet do not impinge upon the right to marry. Under Wisconsin law, whether the children are from a prior marriage or were born out of wedlock, court-determined support obligations may be enforced directly via wage assignments, civil contempt proceedings, and criminal penalties.

In Turner, a state corrections regulation forbade inmates to marry without permission from the superintendent of the prison, which would usually be given only for “a pregnancy or the birth of an illegitimate child.” The Court noted that prison regulation was an area where judges are particularly unqualified to second-guess the executive branch, which has special expertise in administration of the lives and needs of prisoners. The Court therefore announced that it would evaluate prison regulations with a much laxer constitutional measure than laws outside the prison walls: the regulation need only be “reasonably related to legitimate penological interests” to survive constitutional scrutiny. Nonetheless, the Court found that the Missouri regulation, supposedly predicated on security concerns, did not meet even that minimal standard:

There are obvious, easy alternatives to the Missouri regulation that accommodate the right to marry while imposing a de minimis burden on the pursuit of security objectives. See, e. g., 28 CFR § 551.10 (1986) (marriage by inmates in federal prison generally permitted, but not if warden finds that it presents a threat to security or order of institution, or to public safety) . . . . Moreover, with respect to the security concern emphasized in petitioners’ brief — the creation of “love triangles” — petitioners have pointed to nothing in the record suggesting that the marriage regulation was viewed as preventing such entanglements. Common sense likewise suggests that there is no logical connection between the marriage restriction and the formation of love triangles: surely . . . inmate rivalries are as likely to develop without a formal marriage ceremony as with one.

(Note, by contrast, that prisoners and even ex-prisoners can have many of their other fundamental rights abridged in quite drastic ways — they lose entirely the right to bear arms, for example, and those on parole or supervised release may lose the right to travel, at least without permission. But they still freely exercise the right to marry.)

The outcome in Obergefell flows, if not inexorably, then logically enough from these precedents. If the right to marry (and, specifically, marry the partner of one’s choosing) is fundamental, and if the state must present serious and not merely flimsy or pretextual arguments in favor of abridging that right, then it follows quite sensibly that the state may not abridge the right to marry a same-sex partner without solid arguments that some state interest requires it. Serious arguments that same-sex marriage would impinge on any legitimate state interest are exactly what have been lacking in the various cases litigated over the past decade, and so the Court held that such abridgment is improper.


Of course, because the announcement of “fundamental rights” is an exercise of judicial power, perhaps informed by but not determined by democratic political processes, the Court has consistently warned that the justices must

exercise the utmost care whenever we are asked to break new ground in this field, lest the liberty protected by the Due Process Clause be subtly transformed into the policy preferences of the members of this Court.

Chief Justice Rehnquist wrote that

[o]ur Nation’s history, legal traditions, and practices . . . provide the crucial “guideposts for responsible decisionmaking,” that direct and restrain our exposition of the Due Process Clause.

The dissenters in Obergefell therefore primarily rest their arguments on the notion that the Court, by deciding the issue unilaterally, has overstepped its bounds, arrogating to itself the policymaking power more properly invested in the legislature, and doing so without regard to history or our legal traditions.

Chief Justice Roberts, for example, notes that

There is no serious dispute that, under our precedents, the Constitution protects a right to marry and requires States to apply their marriage laws equally. The real question in these cases is what constitutes “marriage,” or—more precisely—who decides what constitutes “marriage”?

Who indeed? One could answer Roberts with Chief Justice Marshall’s maxim, from Marbury v. Madison, that “[i]t is emphatically the province and duty of the judicial department to say what the law is.” Deciding the definition of legal terms is the essence of the judicial function.

Beyond that, though: to leave that authority entirely in the hands of the legislature would be to allow legislators to exclude people from legal marriage in ways that would contravene the Court’s uncontroversial precedents. It cannot be the case, for example, that the legislature can define marriage as “the legal union of a man and a woman of the same race.” But why not? If the legislature gets to “define” marriage, shouldn’t Chief Justice Roberts want to overturn Loving as an unreasonable infringement on the powers of the people’s elected representatives? There is no indication that he does. Similarly, with regard to Turner, why can’t the legislature (or prison regulators) “define” marriage as “the legal union of one free man and one free woman”? That language is uncomfortably close to the historical precedent of excluding slaves from legal marriage, but if the legislature has carte blanche to decide “what constitutes marriage,” why not?

Of course, it should be obvious that judges are also not free to define marriage just any way they want.  They cannot define marriage nonsensically — as, say, a physical welding of one toaster and one office chair.  They also presumably cannot define marriage in a way that would substantially encumber the practice of marriage as we know it today — for example, by creating onerous requirements for divorce.

The dissenters ask whether judges can define marriage in ways that diverge from historical tradition. But their reading of history is peculiarly narrow. According to Roberts,

As the majority acknowledges, marriage “has existed for millennia and across civilizations.” For all those millennia, across all those civilizations, “marriage” referred to only one relationship: the union of a man and a woman.

That is not remotely true. First, as Roberts knows perfectly well, many cultures have had a view of marriage much broader than “a” man and “a” woman — the ancient Hebrews, for example, and the somewhat less ancient Muslims, and the fairly recent Mormons all embraced polygamy. (And that’s just cultures in the Abrahamic tradition.) Second, lurking in the background of many of the cultures of the past is a little-remarked-on tradition of acknowledged same-sex couplehood and marriage. Sometimes these occurred via a culturally-acceptable gender fluidity; e.g.:

We’wha was a key cultural and political leader in the Zuni community in the late nineteenth century, at one point serving as an emissary from that southwestern Native American nation to Washington, D.C. He was the strongest, wisest, and most esteemed member of his community. And he was a berdache, a male who dressed in female garb. Such men were revered in Zuni circles for their supposed connection to the supernatural, the most gifted of them called lhamana, spiritual leader. We’wha was the most celebrated Zuni lhamana of the nineteenth century. He was married to a man.

But sometimes not, too:

[T]here were societies in pre-colonial Africa that permitted women to marry other women. These marriages typically helped widowed women who didn’t want to remarry a man or return to their family or their husband’s family after the husband’s death . . . . Instead, the widow could pay a bride price and perform other rituals needed to marry another woman . . . .

(There are more things in heaven and earth, Horatio….)

More importantly, the development of American law and history supports the majority’s conclusion. Here is Roberts again:

Marriage did not come about as a result of a political movement, discovery, disease, war, religious doctrine, or any other moving force of world history—and certainly not as a result of a prehistoric decision to exclude gays and lesbians. It arose in the nature of things to meet a vital need: ensuring that children are conceived by a mother and father committed to raising them in the stable conditions of a lifelong relationship . . . .

The premises supporting this concept of marriage are so fundamental that they rarely require articulation. The human race must procreate to survive. Procreation occurs through sexual relations between a man and a woman. When sexual relations result in the conception of a child, that child’s prospects are generally better if the mother and father stay together rather than going their separate ways. Therefore, for the good of children and society, sexual relations that can lead to procreation should occur only between a man and a woman committed to a lasting bond.

As anthropology, this is not correct — at least, not as an absolute statement of the inevitable requirements of child-rearing.  For example, in matrilineal societies, including a number of American Indian societies, it is common for a child’s primary male caregiver to be his maternal uncle, rather than his biological father.  Fathers may be somewhat ancillary to the child’s life, even if an affectionate bond remains.

But it is also not true of either historical or modern American society or marriage.  Historically, it paints an overly rosy, child-oriented picture and ignores a primary motivating factor in the promotion of marriage as the channel for sexual energy: the desire to ensure that family property would pass to the biological offspring of a particular set of partners, who, at least in early colonial history, would have been selected for one another by their families.  There is also the Pauline/Augustinian tradition, strong in American religious life, of thinking about sex as something earthly and unfortunate, and marriage as the “release valve” institution designated by God to keep the weak from sin until the Lord returns.

Despite the church’s best efforts, though, it has never been the case that “sexual relations that can lead to procreation . . . occur only between a man and a woman committed to a lasting bond.”  In the latter part of the 1700s, “more than one girl in three was pregnant when she walked down the aisle. In parts of Britain, 50 percent of brides were great with child.”  And even if many of those pregnancies resulted in marriage in earlier periods of our history, they certainly don’t today.  As early as the 1970s, one third of children were conceived, and one in five were born, out of wedlock.  Today, among millennials, 64% of mothers have had at least one of their children without being married.

This is to say nothing, of course, of the marriages of those who cannot have children or do not want to have children. As marriages occur later in life and fertility falls, it is now not remotely uncommon for a marriage to have nothing whatever to do with child-rearing.

Marriage is also not what it used to be as a legal mechanism. The Obergefell majority mentions the law of coverture — which prevented a married woman from holding property or entering into contracts — as an example of a practice that has fallen away. Roberts insists that this change has not altered the “core” of marriage, but he is wrong. The abandonment of coverture and the development of modern marital property law, child support law and “no-fault” divorce all ensure that marriage is no longer guaranteed, or even likely, to result in “the stable conditions of a lifelong relationship.” If either party does not like a marriage, they can leave and forge a life without their former partner, unencumbered by either legal ties or, hopefully, economic dependency.

Nor is marriage any longer required to ensure one’s children will be able to inherit. In many states, now — thanks in part to the meddling of the Supreme Court — an illegitimate child can inherit routinely from the mother, and can inherit from the father if paternity is established during the father’s lifetime.

Thus, the Supreme Court’s description of marriage in 1888 —

Marriage is something more than a mere contract, though founded upon the agreement of the parties. When once formed, a relation is created between the parties which they cannot change, and the rights and obligations of which depend not upon their agreement, but upon the law, statutory or common. It is an institution of society, regulated and controlled by public authority.

— though still technically true, no longer accurately describes the practical effect of our law of marriage. The overwhelming trend has been away from social control and toward individual autonomy and liberty of choice.

Perhaps because the effects of these legal changes are so obvious, Chief Justice Roberts next turns to a kind of appeal to bottom-up authority:

The majority observes that these developments “were not mere superficial changes” in marriage, but rather “worked deep transformations in its structure.” They did not, however, work any transformation in the core structure of marriage as the union between a man and a woman. If you had asked a person on the street how marriage was defined, no one would ever have said, “Marriage is the union of a man and a woman, where the woman is subject to coverture.”

Probably that’s true, but that is because they wouldn’t have had to — the law did the defining for them. If you asked a person on the street to define a “contract,” likely they could not come up with offer, acceptance, consideration, and mutuality, either. But the shape of their lives nonetheless depends on the law correctly identifying and applying the elements of a contract. Moreover, if you had asked people of the past specific questions about how marriage worked — “how easy is it to get a divorce?” “can a child born outside the confines of marriage inherit family assets?” — my suspicion is that most would have been able to give you an accurate answer.

But even if the Chief Justice were correct as to the universality and centrality of a certain purpose of marriage, and even if the law of marriage had not definitively moved away from the model he describes, that would still not provide a rational reason not to affirm gay marriage in a modern world where adoption and “blended families” are common. Procreation is important, yes, and a child’s “prospects” — however you care to define that — might well be better if he or she has two parents rather than one. (I could ask whether a child’s prospects increase linearly with the number of parents, and if so whether that is not a strong argument for plural/polyamorous marriage… but let’s let that lie.) But nothing in the social science we have supports the idea that those parents must be the child’s biological parents. To the degree that marriage is and should be about providing a loving, united home for a child to grow up in, then, it seems obvious that expanding the total possible number of marriages that could provide such a home can only redound to the benefit of children.

Given all this, the majority’s conclusion strikes me as in line with the historical trends regarding the role and purpose of marriage in our lives.


That leaves, I think, the real argument hidden behind all this law-office history and sociology: that the decision is undemocratic. As Justice Scalia writes, the Court is both unelected and unrepresentative:

[T]his Court . . . consists of only nine men and women, all of them successful lawyers who studied at Harvard or Yale Law School. Four of the nine are natives of New York City. Eight of them grew up in east- and west-coast States. Only one hails from the vast expanse in-between. Not a single South-westerner or even, to tell the truth, a genuine Westerner (California does not count). Not a single evangelical Christian (a group that composes about one-quarter of Americans), or even a Protestant of any denomination.

And that’s true. (One could ask whether the abortion litmus test for Republican appointees has skewed the Court Catholic in recent decades — all five Republican appointees currently sitting are Catholic. But it’s also possible that the religious distribution is just a statistical anomaly — historically the Court has been highly Protestant.) Courts are not representative bodies, and judges should exercise caution before plunging ahead where democracy has not acted, or remains divided.

On the other hand, there are good reasons to think this decision, even if handed down by a small group of judges, is not the end of democracy as we know it.

First, decisions by the Supreme Court are always made by this elite, unrepresentative group — a fact Justice Scalia does not bother to highlight when he is in the majority. Scalia attempts to distinguish between judges “functioning as judges, answering the legal question whether the American people had ever ratified a constitutional provision that was understood to proscribe the traditional definition of marriage” and judges answering the “policy question of same-sex marriage.” But of course, the selection of the “legal question” to be answered — and, especially, Scalia’s choice to frame it in such a way as to guarantee the outcome he wants — is, itself, a “policy” choice. The idea that judges can answer legal questions without interposing their personal preferences is a bit of a fiction.

This is why tradition requires judges — at least appellate judges, who “make” the law — to explain their reasoning. Judges are never going to just, in Chief Justice Roberts’ memorable phrase, “call balls and strikes,” and we should not expect them to leave their personal biases and opinions at the door. Rather, we should expect exactly what happens: thousands of judges around the country reading and critiquing each other’s reasoning — not to mention legal scholars, journalists, and the general public. Judicial opinions do not occur in a vacuum, are subject to review over time by other judges, and are (on some subjects) the focal point of intense public scrutiny.

Second, judges must have some freedom to, if I may put it indelicately, make some things up. There is much on which our system of democracy depends that simply appears nowhere in the text of the Constitution. I am not talking just about unenumerated fundamental rights, although that is probably the category of non-textual “constitutional” provisions people are most familiar with. Judicial review itself — the idea that a judge can declare a law unconstitutional, and that the decision of the Supreme Court on matters of constitutionality (or even statutory interpretation) is final — is simply not present in the Constitution. Go read Article III — it’s short and plainly written. Nothing in it suggests that the Supreme Court is the final arbiter of constitutional limits, or that the other branches must respect the Court’s decisions. That principle — which we now accept as one of the core checks on executive and legislative power — was itself the creation of judges, based on their judgment of how the Constitution must work, what it must be saying, even though it doesn’t actually say anything of the kind explicitly.

Judges are therefore always filling in gaps and silences in the Constitution. (As another example, the Constitution grants Congress the power to “regulate commerce with” Indian tribes, but is silent about how Indian nations would function as governments if absorbed into the United States. Once that happened as a practical matter, judges developed an entire body of quasi-constitutional law to define the nature and limits of Indian tribal sovereignty.) This is nothing new, and nothing alarming.

Judges also resolve ambiguities in the Constitution. For example, the Fifth Amendment provides that

No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger[.]

Now, in that archaically written sentence, does “when in actual service in time of war or public danger” modify only “the militia,” or both “the land or naval forces” and “the militia”? In other words, can a soldier in the Army or a sailor in the Navy demand a grand jury indictment when tried during peacetime? Or is it only militia members who have that right? Grammatically, the sentence is ambiguous; courts have had to resolve this question. And what does “arising in the land or naval forces” mean? Does the crime have to be connected to a soldier’s actual military service, or can he be tried without a grand jury indictment for any crime he commits during his term of service? Courts have had to answer that question, too.

As long as judges confine themselves to either resolving ambiguities or inferring necessary constitutional mechanisms and fundamental rights of the people (even rights not previously understood), there is no constitutional crisis. Judges should move cautiously and prudently, examine their biases, and base their opinions on logical arguments. Those opinions should then be analyzed and criticized, to see whether their reasoning is valid and in line with the country’s general values. Sometimes, a judicial opinion that purports to resolve an ambiguity will be found to be mistaken, or so obviously the result of intolerable bias that it must be overturned. Many judicial constitutional inventions, however, will stick, because they obviously further the cause of justice in a democracy. There is nothing disastrous to the republic in this ongoing development of constitutional principles.

Of course, it is significantly more dangerous when the Court outright defies the plain meaning of an explicit constitutional provision. In Korematsu v. United States, for example, the Court held that an executive order excluding Americans of Japanese ancestry from parts of the West Coast during World War II was constitutional as an exercise of the President and Congress’s war powers. Notably, the majority opinion does not mention the words “equal protection,” although the result of that opinion is plainly contrary to the Equal Protection Clause of the Fourteenth Amendment, as noted by one of the dissenting justices. Similarly, in Kelo v. City of New London, the Court held that the Fifth Amendment, which provides that private property may only be taken by the government for “public use,” did not bar a city from seizing people’s property and turning it over to a private developer.

Korematsu and Kelo have been heavily criticized over the years. It is probably safe to say that Korematsu is no longer good law, and its injustices would not be repeated by the Court today, although it has never officially been overturned. And many state legislatures reacted to Kelo by passing statutes preventing cities and counties from exercising eminent domain for the benefit of private parties.


But if later Courts and/or legislatures do not respond, there is still always a backstop against perceived judicial overreach. It is not, as Justice Scalia suggests in the final paragraph of his dissent, executive nullification:

The Judiciary is the “least dangerous” of the federal branches because it has “neither Force nor Will, but merely judgment; and must ultimately depend upon the aid of the executive arm” and the States, “even for the efficacy of its judgments.” With each decision of ours that takes from the People a question properly left to them—with each decision that is unabashedly based not on law, but on the “reasoned judgment” of a bare majority of this Court—we move one step closer to being reminded of our impotence.

That is a solution that only invites worse abuses, and by a far more powerful branch. (Just ask the Cherokee.) The executive branch — i.e., the branch that already has the power of armies, police, and intelligence services — should not unmoor itself from obedience to the courts, no matter how wrong-headed a particular decision might seem.

Rather, the solution that always remains available is popular sovereignty. The people are always free to amend the Constitution after a Supreme Court ruling that “gets it wrong.” Indeed, the very first amendment to the Constitution after the Bill of Rights, the Eleventh, came in response to a ruling of the Court. In Chisholm v. Georgia, the state of Georgia argued that, as a sovereign, it could not be sued by an individual. The Court held that Article III’s grant of federal jurisdiction over “controversies . . . between a State and citizens of another State” abrogated most claims Georgia might have to sovereign immunity. Congress promptly proposed an amendment to the Constitution to reverse Chisholm, the amendment was ratified by the states, and states have been immune from suit by citizens (unless they consent to be sued) ever since.

The “go amend the Constitution” argument is often thrown out by conservative judges — Scalia prominent among them — who believe, e.g., that the First Amendment largely forbids limits on campaign finance:

The principle of the First Amendment is the more the merrier; the more speech the better. False speech will be answered by true speech. That’s what we believe and maybe it’s a stupid belief, but if it is you should amend the First Amendment.

But there is no reason that argument should not also apply when the Court does something conservatives dislike. If gay marriage is bad enough that it seriously imperils our democracy — do something! Agitate for a convention of states, for example. Or, like the left, introduce an amendment in Congress. The process is cumbersome — too cumbersome, as Scalia himself frankly acknowledges. But when people hate something enough — when it is a clear policy disaster — the political will can be found to pass an amendment.

But if, as I suspect, the problem is not so severe as all that, then all the normal solutions are available: try to convince the Court to change its mind, test the limit of the holding with edge cases, engage in sustained public debate to convince jurists that the law means something else.

Or… just live with it. You don’t always get what you want — even if you are The People. A republic is like that sometimes.


the real reason we should put Harriet Tubman on the twenty

The group Women on 20s has recently gotten some pretty good press for the idea that Harriet Tubman should replace Andrew Jackson on the $20 bill. Personally I always favored John Ross for the twenty, just to really stick it in Ol’ Hickory’s eye, but the voters (well, internet voters) have spoken, and I approve of their choice. And anyway, Tubman has a distinct advantage over other historical figures whose names have been bandied about to replace Jackson: she is a legitimate badass.

So was Jackson, of course. But Jackson is best known for driving Indians off their land, helping to annex Florida, and for fighting the British (who were supporting the Indians) in a dumb war that did little to accomplish its ostensible goals but did, again, screw the Indians. Jackson’s military adventures are one face of American courage, but not its best face — rather, the face of America the Expansionist and Belligerent.

Tubman, on the other hand, represents a different kind of physical courage. It’s well-known, of course, that she put her life on the line again and again by returning to Maryland to help others escape slavery after her own daring escape. Less well-known, but at least as dramatic, is this spectacular episode from the Civil War in which she masterminded a Union raid into Southern territory to free slaves to join the fight:

It’s no exaggeration to say that the Combahee raid was unique in American history. All Union operations in slave territory, especially as the Emancipation Proclamation become well known, yielded the self-liberated by the hundreds. But the Combahee raid was planned and executed primarily as a liberation raid, to find and free those who were unable or unwilling to take the enormous risks to reach Union lines on their own. That’s how Tubman conceived of it. That, too, is unique – because for the first and only time in the Civil War, or for that matter any American conflict before this century, a woman (and a civilian at that) played a decisive role in planning and carrying out a military operation….

Tubman did not speak Gullah, a language common among coastal slaves. As Tubman herself says of a crucial moment in the raid: “They wasn’t my people … because I didn’t know any more about them than [a white officer] did.” And these were slaves who worked mostly in the fields, men and women who trusted “house” slaves as little as they trusted whites, even white Yankees.

In other words, the amazing thing about Tubman’s role during the raid was not that she was in her element, but that she was so far outside it.

Yet it’s clear that it was Tubman who visited the camps of liberated slaves along the coast and recruited the 10 scouts named in Union records, 9 of whom had escaped from nearby plantations. Lieutenant George Garrison, posted to one of the Northern-raised black regiments, said, “She has made it a business to see all contrabands escaping from the rebels, and is able to get more intelligence than anybody else….”

The Second South Carolina was not made up of veterans. The men had far more in common with Tubman than with their own officers. That’s why she went with them on the raid. Yet Tubman wasn’t a passenger. The intelligence she gathered, the soldiers she recruited, indicate that she actually planned the raid with Hunter and Montgomery: three landings on the right, one on the left….

As the troops finished their demolition work, the fleeing slaves started to reach the boats, many more slaves than there was space available. “When they got to the shore,” Tubman recalled later, “they’d get in the rowboat, and they’d start for the gunboat; but the others would run and hold on so they couldn’t leave the shore. They wasn’t coming and they wouldn’t let any body else come.”

That’s when a white officer told Tubman to sing to “your people.” Even decades later, when she would regale white audiences with the Combahee story, she said she resented that – a surprisingly modern sensitivity. But she did sing. And it worked. “Then they throwed up their hands and began to rejoice and shout, glory! And the rowboats would push off.”

It’s hard to understand how the song Tubman recalled singing – about how “Uncle Sam is rich enough to buy you all a farm” – could have persuaded those left behind to let the boats go. Did she intentionally omit the fact that she threatened to shoot anyone who tried to back out from escaping? Meanwhile, the Confederates set upon those on shore with dogs and guns; at least one young girl was killed. But hundreds escaped.


The difference I’m getting at is not about nonviolence, per se. The Combahee raid was an act of war and involved fighting. But Tubman replacing Jackson on the twenty could symbolically represent a shift in American thinking about honor and courage — away from the kind of courage it takes to take things from a weaker people, or to dominate those you define as “enemies,” and toward the kind of courage it takes to put yourself at risk so that people can be free.

I say this without personal critique. Jackson was a certain kind of violent, aggressive man, of which there will always be some among us, and that is fine. People are as they are, and Jackson had his good points: he was apparently a loyal friend and a large-hearted husband and father. (In one of those perversities that are forever wrinkling up neat historical narratives, Jackson even adopted an Indian son — after massacring most of his village.) He also took a more expansive view of suffrage and popular democracy than the prior generation of (largely well-to-do) revolutionary-era leaders had. But Jackson was a duellist, literally and otherwise, an irascible man who made enemies easily and held long grudges. Perhaps his natural tendencies toward conflict were brought to full flower by a bellicose Southern culture of honor. I don’t know. I don’t care. It’s not about a judgment of the man as an individual — something I care about less and less in these matters. It’s much more about the cultural forces that selected such a man, put him at the head of various armies and then at the head of the country, and gave him the power and authority to do some measure of evil.


We love stories of physical, direct heroism. And I think they do more than just scratch the itch we have for vicarious adventure. They provide models for thinking about, and feeling drawn to, acts of less concrete heroism. That’s a good thing. But this country has matured quite a bit in two hundred years. If Jackson — symbol of courage in the name of acquisitiveness and terrorizing your enemies — was one model of American badassery in our nation’s youth, he needn’t be the only one we ever have. Tubman’s model — courage as the taking of personal risks in pursuit of a truer, deeper, more equal liberty — could take a turn in the front for a while.

Jacksonian courage is the sort of courage that fueled investment banking and business culture for decades. That culture — intensely macho, fratty, willing to substitute bluster for facts and understanding, and determined to see the world as zero sum — set the stage for the financial crisis.

Maybe that’s not what we need so much of these days. Maybe what we need are politicians who buck their leadership, even at cost to themselves, when the really important things are on the line. Maybe what we need are public defenders and civil rights lawyers keeping the justice system honest, even if doing so more-or-less shuts them out of the legal profession’s positions of power. Maybe we need more whistleblowers. Maybe we need more citizen journalists. Maybe that’s the kind of courage we should celebrate.
