ugly partisanship: the cause of, and solution to, all life’s problems!

“They fuck you up, your mum and dad.
They may not mean to, but they do.”

— Philip Larkin, “This Be The Verse”

This blog is dead as a doornail.*  But because, over a year ago, I made a case against Hillary Clinton as the Democratic nominee (and in favor of third-party voting if she was), I thought I’d come back for one more visit.  Here’s what I wanted to say, on this election eve: Larkin’s observation holds for countries, too. Whichever way this thing goes, one thing is clearer than it has ever been before: the Founding Fathers fucked us up.  They didn’t mean to, but they did.

I am not blaming them.  They were inventing something new; they did the best they could.  Their clearest model was antiquity–the pattern of democracy running to tyranny on the one hand, or chaos on the other.  They had read Locke, so they believed that power derived from a social contract among equals.  They had read Roman history, so they feared tyrants.  But they had also read Hobbes, so some of them worried that maybe people would be at each other’s throats without a strong, civilizing government.  And most of them were decidedly on the well-to-do side, and a bit suspicious of their lessers–even though they wanted to derive legitimacy from them.  In short, they had a lot of theory, few models, and a handful of somewhat contradictory worries and beliefs.

One can confidently say that the Founding Fathers fucked us up in a number of ways.  But the clearest way, it now seems to me, was in adopting a system of “checks and balances.”

I will pause so that the patriotic reader may be revived with smelling salts, as needed.


The theoretical underpinning of our constitutional system is that by having “three branches of government” and “a division of power between the states and the federal government” and “a representative democracy, or republic” and an electoral college–all that excellent stuff we learned about in civics/PoliSci 101/law school–the various institutions at play can “check” each other, so that none becomes too powerful, and both tyranny and mobocracy are thus avoided.  Moreover, joining the states together defeats partisanship: a large country could never be dominated by factions or parties, because

The smaller the society, the fewer probably will be the distinct parties and interests composing it; the fewer the distinct parties and interests, the more frequently will a majority be found of the same party; and the smaller the number of individuals composing a majority, and the smaller the compass within which they are placed, the more easily will they concert and execute their plans of oppression. Extend the sphere, and you take in a greater variety of parties and interests; you make it less probable that a majority of the whole will have a common motive to invade the rights of other citizens; or if such a common motive exists, it will be more difficult for all who feel it to discover their own strength, and to act in unison with each other.

Madison, for his part, certainly understood that partisanship was all but inevitable in human affairs:

The latent causes of faction are thus sown in the nature of man; and we see them everywhere brought into different degrees of activity, according to the different circumstances of civil society. A zeal for different opinions concerning religion, concerning government, and many other points, as well of speculation as of practice; an attachment to different leaders ambitiously contending for pre-eminence and power; or to persons of other descriptions whose fortunes have been interesting to the human passions, have, in turn, divided mankind into parties, inflamed them with mutual animosity, and rendered them much more disposed to vex and oppress each other than to co-operate for their common good. So strong is this propensity of mankind to fall into mutual animosities, that where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.

He also foresaw that you could not rely on the “great men of history” to beat back partisanship and corruption–and he even understood the problem of concentrated, direct benefits and diffuse, indirect costs:

It is in vain to say that enlightened statesmen will be able to adjust these clashing interests, and render them all subservient to the public good. Enlightened statesmen will not always be at the helm. Nor, in many cases, can such an adjustment be made at all without taking into view indirect and remote considerations, which will rarely prevail over the immediate interest which one party may find in disregarding the rights of another or the good of the whole.

Yet, even understanding this–even as the great political genius of his age–Madison was not quite able to see (because he was operating in the abstract, without real-world data) what is now plain: cross-institutional checks, as established in the Constitution, don’t work. A divided government doesn’t work.  What actually works (better, anyway) is–wait for it–partisanship.  Naked, ugly factionalism.

Hear me out.


Suppose you support Hillary Clinton for president.  You likely have something to fear whether she wins or loses.  If she loses, of course, Donald Trump (someone you find abhorrent) becomes President, with all of the enormous powers of the federal executive at his command.  He will be the one to fill the political posts across the executive branch, from ICE to the Civil Rights Division of the DOJ to Interior to Health and Human Services.  He will have the power to nominate Supreme Court Justices–perhaps as many as three or four.  He will have his finger on the metaphorical nuclear “button”–and even if the generals talk him out of that, he’ll be commander-in-chief of the most powerful military in the world.  For at least four years.

On the other hand, even if Clinton wins, she will be remarkably powerless to enact much of her agenda.  While she, too, will be at the helm of a powerful military and the entire executive branch, she will be stuck with a Republican-dominated House of Representatives–and maybe Senate, too.  She will not be able to pass meaningful legislation during her “honeymoon” period, and she may not even be able to get a Supreme Court justice confirmed.

The picture is similarly gloomy if you are a Trump supporter.  If he loses, Clinton takes charge of the government and puts in place her flacks, who are loyal to her.  A secretive and paranoid politician will run a secretive and paranoid White House, probably mostly to benefit Marc Rich.  She’ll make bad decisions about who to invade and how to fight ISIS; she will make bad deals with bad actors who get the better of her, whether in international relations or in trade.  Maybe she even gets a friendly Senate and nominates a bunch of gun-hating nanny-staters to the Supreme Court.

But even if Trump wins, how is he supposed to get anything done?  It’s unlikely he’ll have the votes to “build that wall,” or to spend what it would take to literally round up every illegal immigrant and deport them.  The Senate also has to approve treaties, so he can’t unilaterally be the savvy deal-maker he claims to be.  Also, while he can install political appointees in the various cabinet departments, he’s still stuck with the civil service, who are resistant to change.  And as for excluding Muslims?  Well, a large number of federal judges will likely thwart him at every turn.  In short, he’s more stuck with the status quo than his ebullient, “we’re gonna make it beautiful” campaign lets on.

Meanwhile, what if you would actually like to support someone else this season?  What if you were one of the large number of Bernie Sanders supporters?  Or suppose you really believe in freedom and would prefer a libertarian?  You will be told (correctly, given the constraints of our system) that your vote is at best wasted, at worst a “spoiler” that hurts the less-awful of the two mainstream candidates.


And all of this flows inexorably from the mechanisms that the Constitution puts in place–or fails to put in place.  It flows from the inherently and intentionally non-representative Senate explicitly commissioned to check and thwart the House, as well as the winner-take-all elections thereto.  It flows from the Constitution’s silence on how congressional representatives should be elected and allocated within a state, leading Congress to ultimately mandate single-member, winner-take-all districting.  It flows from the decision to allow state legislatures to determine how to draw up and allocate congressional districts, so that many legislators wind up in gerrymandered safe seats where the danger to their political ambitions comes from an aggressive primary challenge, not from another party.  It flows from the decision to stagger elections, so that there are poorly-attended “midterm” elections dominated by party loyalists.  It flows from the fact that the House and Senate make their own procedural rules.  It flows from the Constitution’s attempt to split the baby by having the President nominate Supreme Court justices and other important officials, with the “advice and consent” of the Senate.  And, perhaps most importantly, it flows from the creation of a powerful, nigh-irremovable executive who’s elected once every four years, separately from the legislature, in a nationwide winner-take-all process.

(You could also add the meta-flaw: that the Constitution is far too difficult to amend, and so all the other problems are as good as set in stone.)

These procedures, many of which were intended to create “checks and balances” between the institutions, have worked only too well.  On the one hand, Congress famously “can’t get anything done”–particularly post-1995, when Newt Gingrich had the brilliant insight that gridlock and scorched-earth resistance could actually be successful political strategies.  This is particularly the case now that many congressional districts have been gerrymandered to ensure that party loyalty and ideological rigor are a better strategy for getting elected than compromise and openness.  President Obama has chided Congress for “not doing the work of the American people,” but that rhetoric is weak, and the incentives not to cooperate are strong.

At the same time, the executive has arrogated more and more power to itself over time–partly in direct response to Congressional stonewalling.  And while liberals might cheer when President Obama takes unilateral action to allow various categories of deemed-worthy illegal immigrants to stay in the country and work legally(ish) via “deferred action,” they might be less thrilled by, say, his unilateral military invasion of Libya without Congressional authorization.

Can he do that?  Who’s to say he can’t?  Congress can’t really do anything about it–partly because any penalty they might impose on him as punishment for overstepping his bounds is probably something they’re already doing as part of planned gridlock; partly because Congress can do hardly anything on a bipartisan basis, let alone something that takes moral courage; but mostly because the Republican Party hopes to win the presidency again itself one day, and they want it to contain as much power as possible, given that Congress is dead.

Finally, the judiciary, which until recently had remained functional and was seen as semi-apolitical, seems in imminent danger of withering away in a Senatorial pissing contest, perhaps regardless of who wins the presidency.  And, indeed, sooner or later someone will realize that the Senate could easily hold hostage, not just the Supreme Court, but the lower courts as well, and for that matter key Cabinet posts.

Moreover, Article III, which creates the judicial branch, is intentionally vague as to exactly what powers the third branch has over the other two.  The Founders were suspicious of national courts and federal judges, and so, to avoid triggering their suspicions, Madison carefully drafted an article that tells you next to nothing–not how many justices there should be, nor how many or what kinds of lower courts, nor, crucially, what the scope of the courts’ authority actually is.  It’s a subject of tremendous debate, and there’s a whole legal doctrine devoted to courts’ assiduous avoidance of telling the other branches what to do.

Yet that is not to say that the courts have no power–federal judges are appointed for life (one of the few things Article III says clearly) and have tremendous authority over many of the issues that matter to people’s lives, like policing, abortion, and free speech–not to mention more quotidian issues like trying to get your money back after a fraud.


So we have a useless legislature where the incentives are strong to block nearly everything and a judiciary that wields a lot of power over people’s lives, but has no clear mandate to rein in the other branches and in any event is currently a target on the political battlefield.  That leaves an executive that has real power (albeit thwarted in its usefulness), and the current state of politics: once every four years we work ourselves up into a spectacular lather about a single election, which will determine who gets to be the god-king-and-emperor-of-the-cannons-whose-legislation-shall-not-pass-but-who-gets-to-nominate-judges.  This is extraordinarily fraught, as the consequences are perceived to be staggeringly serious.  Lose, and Your Voice Will Not Be Heard for at least four years in the executive branch, and perhaps for decades to come in the judiciary.  No wonder we become susceptible to paranoia and apocalyptic thinking.  The country is constantly on the edge of handing over the entirety of what little governing power remains to a closet Stalinist and possible Muslim Brotherhood dupe, or else to a racist, sexist megalomaniac with poor impulse control and a fetish for Russian oligarchs.  IT’S ALL VERY HIGH-STAKES, YOU GUYS.  (Text here to donate to my campaign.)


It doesn’t have to be this way.  Other countries don’t do this to themselves.  Advanced democracies mostly use a parliamentary system, in which the executive arises out of the legislature.  That legislature is often elected through multi-member districting and/or proportional representation, so that more than two parties can emerge.  Consequently, voters have more options, and even the major parties must sometimes form coalitions with their ideological neighbors to get things done.
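(For the curious, here is a minimal sketch of how proportional seat allocation can work in practice, using the common D’Hondt method; the party names and vote totals are invented purely for illustration.)

```python
# Minimal sketch of D'Hondt proportional seat allocation.
# Party names and vote counts are invented for illustration only.

def dhondt(votes, seats):
    """Allocate `seats` among parties in proportion to their `votes`."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # Each round, the next seat goes to the party with the highest
        # quotient: votes / (seats already won + 1).
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

if __name__ == "__main__":
    votes = {"Center-Left": 340_000, "Center-Right": 310_000,
             "Greens": 120_000, "Libertarians": 90_000, "Left": 80_000}
    print(dhondt(votes, seats=10))
    # Smaller parties win seats roughly in proportion to their vote share,
    # rather than everything going to whichever big party finishes first.
```

The point is simply that under a rule like this, a party with ten or fifteen percent of the vote gets roughly ten or fifteen percent of the seats, instead of nothing.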

But, crucially, because all the institutions are in the hands of the same party (or coalition), there is no gridlock.  One party has power and passes and enacts legislation.  If the ruling party is managing things really badly, the other parties (and/or a faction within the ruling party) might well gang up on the existing government and throw them out, or force a new election.  (And nobody has to wait four years for it to happen!)  On the other hand, a corollary of this is that when leadership is working, it need not be subject to the artificial term limits that create such obvious turmoil in our own system every four to eight years.  So, for example, Margaret Thatcher was Prime Minister for 11 years, and Tony Blair for ten.  In Australia, John Howard was PM for 11 years.


A parliamentary system thus neatly addresses several of the things Americans hate about their own political process: it abolishes gridlock; it makes the head of government more accountable and a good deal less important as an individual personality; and, most crucially, it allows for more and better representation of a diverse array of viewpoints.

This last part is really something I would urge Americans to think about.  We have always been a sprawlingly diverse country and are getting more so.  Whomever you’ve voted for by the end of tomorrow, on Wednesday morning you’re going to wake up next to people you disagree with in profound ways.  Despite mouthing words about tolerance, equality, and pluralism, however, the knee-jerk American response to encountering a disagreed-with viewpoint is, typically, that it must be squashed.  And in a winner-take-all, two-party system, this makes sense: it’s squash or be squashed out there.  There’s no room for the humanity of the people you disagree with.

But in a multi-party parliamentary system, where everyone from Black Lives Matter to gun nuts to greens to People With Strong Opinions About Abortion to our dear friends the libertarians could get some amount of direct representation in their government, you wouldn’t have to choose between the Rock Party and the Hard Place Party.  And you also wouldn’t have to get worked up into such a lather about defeating your enemies.  If your friend was an unrepentant BernieBro, or your aunt felt she couldn’t vote for anyone but a sincere Christian dominionist… there’s room for that.  Those people would not be “wasting their votes” or “helping Clinton/Trump get elected.”  They’d just be… voting.

And by doing so, they’d actually be involved in a great patriotic project.  They’d be preventing complete domination by, and antagonistic gridlock that plays into the hands of, two parties nobody likes very much, but that we nonetheless feel we have to get all sweaty about every few years.

In the end, I think Madison was quite right that partisanship is inherent in human nature.  Where I think he and the others went wrong was in failing to use that inherent tendency to factionalize as the check on power, as a parliamentary system does.  Instead, they tried to give multiple institutions the power to check on each other and hoped against hope that, for vague reasons, partisanship would be kept at bay.  That didn’t work: we got parties anyway, and the checks and balances just gave them tools to abuse each other with.  And in the process, the people got sucked into the (zero-sum) game, until you could hardly turn on the TV or log onto your favorite social medium without somebody declaring that 47 percent of Americans are morons and evildoers who must be crushed into submission.


The Founding Fathers fucked us up.  But, contra Larkin, I don’t believe that it’s inevitable that “Man hands on misery to man./It deepens like a coastal shelf.”  We could break out of it if we so chose.

Candidly, it’s time to rethink the Constitution.  We venerate it, and rightly so–it was a fine first draft of democracy.  But two hundred years would take a toll on anyone: it’s done its work, and it should be laid to rest.

This blog is dead as a doornail.*  But if I can end it on a last radical note: let’s have a convention.  Let’s start over again.  This isn’t working; everyone hates it; and the fear that something worse would come out of a constitutional convention seems to me to be overblown and without adequate historical or theoretical justification–a spook, created by those who don’t really like the people, except when they are The People, graven in marble on a dead monument.

Let’s start over.

Let’s start over.

—————————–

*I might have been inclined, myself, to regard a coffin-nail as the deadest piece of ironmongery in the trade. But the wisdom of our ancestors is in the simile; and my unhallowed hands shall not disturb it, or the Country’s done for.


a brief Thanksgiving meditation

One night in the week following the attacks on Paris, Los Angeles had a small windstorm.  It wasn’t an emergency, by any means, but at one point the winds got fast and violent enough to shake the thin, loose windows of our aged building back and forth so that they rattled hard in their frames.  The sound woke me up, and as has happened dozens of times since I returned from Iraq, I jolted awake, nerves electric, brain already conjuring an explanation for the sound — an explosion, maybe the building is coming down, maybe the windows are about to spray inward, showering us with glass.

Nothing like that is happening, of course.  This is Los Angeles in 2015, not Iraq in 2008.

But still, any loud sound when I’m sleeping — the clang of the machines being unloaded at a nearby construction site, the sudden whoop of a police siren, a motorcycle’s ripping throttle — can do that to me.  It’s just something I learned one morning on FOB Warhorse, after a long night shift, trying to get some sleep in a noisy, cavernous tent.  Just after I’d gotten into the precious deep sleep, a long, metallic whine slowly penetrated my consciousness, growing louder and louder — it sounded exactly the way it sounds when a plane goes down in a cartoon:

nnnnnnnyyyyyeeeeeeeeeeeeeeeeeeeeeeeeoooooooow! BOOM!

I flipped off my cot and onto the floor, knowing all at once that lower was better.  I looked over to see my bunkmate on the floor, too, both of us laughing and scrambling for Kevlar helmets and body armor.  There were a couple more, and then silence.

Rocket and mortar attacks in Iraq were not really a substantial source of casualties, I think — they were harassing at best, a way for the enemy to fuck with us.  Very shortly after we arrived in country, someone found a dud rocket lodged at the base of my office’s cinderblock wall. EOD was called; life went on.

There were enough attacks that the details run together now — was the one that hit the “movie theater” (a small warehouse with a little screen and a video projector) the same as the one that destroyed the housing unit of someone I knew?  (No one was in it at the time.)  Which one was the attack where I actually hid in one of those roadside concrete shelters?  Then there are other memories — while in transit once, I got to stay for several days at Balad Air Base/LSA Anaconda, which was comparatively fancy and had not only a proper movie theater but a swimming pool as well.  It was also nicknamed “Mortaritaville” by the people stationed there, and once while I was at the pool the mortar attack alarm went off, and we all jumped up out of the pool and ran into the changing rooms.  As we stood there in our “ARMY” shorts, excited and shivering, I looked around at the large glass windows all around the tops of the walls, and I envisioned them blowing inward, shrapnel and glass flying toward us.

But for some reason it’s the attack that woke me up out of a dead sleep — and that loud Snoopy’s-going-down-with-the-Sopwith-Camel sound — that stayed with me.  For some reason it’s that experience I live over and over again: first the loud sound, and then the certainty of impending disaster.

Rarely, now, do I actually suspect I’m under attack.  Last week was an exception — maybe brought on by thinking about Paris (and, yes, Lebanon, and Kenya, and Mali, and now let’s add Tunisia to the list).  Sometimes my brain imagines an earthquake, the whole side of our building falling away.  Sometimes it gets very large-scale indeed, as I lie in bed in the dark wondering if it’s possible for the Earth to just fall suddenly out of its orbit and roll into the sun like a marble into a drain.  I wonder if I would have time to comfort my son before we all die.


My time in Iraq wasn’t that bad, and no one should feel sorry for me. As I’ve written before,

I’ve never, as we say in this line of work, engaged the enemies of the United States of America in close combat. Had some rockets and mortars lobbed at me, but they never got closer than a hundred yards away.

And as I’ve also written before,

I wasn’t one of the soldiers kicking in doors . . . .

I didn’t do the killing myself. I never put anyone in jail myself.

[Iraq] didn’t give me PTSD; I don’t have flashbacks and I’m not depressed.

And that’s true.  I found my time in Iraq deeply annoying and frustrating, and I spent much of my time there feeling incredibly angry at the pointless war and ambivalent about my own part in it.  (See previous link for more on that score.)  But on the other hand, things were not that bad for me.  I was pretty much exclusively a “fobbit” — quite by design, I never left the FOB except to travel to another FOB.  I was not out there looking for roadside bombs, and the worst thing I faced on a day-to-day basis was shitty food, an uncomfortable environment, loneliness and the anxiety that comes with being kind of bad at your job.  (I got better over time.)  There was a gym.  There was internet access.  I even had time to record an album of weird electronic music while I was there.  It was almost certainly the best war experience any soldier in history has ever had, in terms of amenities.   Comparatively, I was fine.

But I do still carry this one little thing with me, this inability to recover gracefully from being woken up by a loud sound.  It’s a bit of a nuisance — I live in the densest and noisiest part of a dense and noisy city.  But it’s only one small scar.

It’s the smallest amount of damage one could possibly have from war.  But I suspect no one leaves without some damage.  And if there are thousands and thousands of well-off fobbits like me, with some tiny bit of damage, some little bit of weakness we didn’t have before, you can only imagine the cumulative damage of the door-kickers and war-fighters, the EOD guys who went bomb-hunting in their big armored trucks, the low-level electronic surveillance folks who went out into the towns to hunt the bad guys on foot, the transpo contractors who drove supplies along some of the world’s most dangerous roads, the helicopter pilots, the translators.  And that’s to say nothing of millions of civilians, for whom life under Saddam had perhaps not been great, but upon whom we unleashed a living hell, as they were repeatedly victimized by all the competing armies that arose in the power vacuum after the invasion, battered by our attacks on what we hoped were terrorists, and destroyed by the ruination of their local economy.

I carry the smallest possible scar from war.  But I’m on the ledger, and that ledger is long — miles long.  And we should read it, the whole thing, before we listen to rich politicians talk about how tough they’d be if they got a chance to sit behind the war machine and pull the trigger.


This Thanksgiving, I’m thankful that I got to come home.  I’m thankful that almost everyone I knew got to come home, too.  But mostly I’m thankful that I live in a country that, 14 years after 9/11, seems genuinely wiser, more cautious about war, and less eager to be fooled by banner-waving salesmen.

Happy Thanksgiving to you and yours.  And for fun, here’s the OTHER, probably obligatory rumination on war and Thanksgiving.  “You wanna end war and stuff, you gotta sing loud!”


brief follow-up on the previous post

One thing I didn’t mention in my post about calls to lift the gun carry ban on military bases is the little-discussed link between guns and suicide. This is important, because the military has, since the start of our recent wars, experienced a sharp increase in servicemember suicides.

Whatever you think of the evidence linking gun control to crime prevention, the link between gun control and suicide prevention seems pretty solid to me. For example, the Washington, D.C. handgun ban and the Australian National Firearms Agreement were associated with significant drops in the suicide rate, and the Brady Act was associated with a drop in suicide among people over 55.

I’m never quite sure what to do with this information. Suicides by gun far outnumber murders, yet suicide is routinely left out of the gun control debate. Maybe it should be! There’s a perfectly legitimate argument that suicide is the individual’s prerogative. As a general matter, we should avoid telling people what to do with their bodies. And life is hard, and not everyone is equipped to meet the challenge. I see no reason to insist that people live in a prolonged state of existential misery to satisfy someone else’s sense that suicide is “wrong,” in some mystical sense.

On the other hand, it also seems obvious that at least some people who commit suicide might otherwise get past their life-grief and go on to have a later life that is, on balance, worth living. And it seems like this might particularly be the case for soldiers, who are often young, who may be suffering a variety of service-related (or not) mental illnesses that are strong predictors of suicide, and who live, at least temporarily, in a culture that values toughing it out over seeking help or admitting weakness (often assumed to be the same thing).

How that should factor into any discussion of a plan to make guns more freely available on military bases, I don’t know. But it’s something that, unsurprisingly, I haven’t seen mentioned by any of the congressional representatives pushing this idea.


Congress eyes interesting experiment re gun violence

Following a series of mass shootings at military facilities, some members of Congress have proposed allowing soldiers to carry weapons on base:

Congressional leaders said Friday they will direct the Pentagon to allow troops to carry guns on base for personal protection following a deadly shooting rampage in Tennessee that killed four Marines and seriously wounded a sailor at a recruiting center . . . .

Gun proponents have been calling for the Defense Department to lift its current policy, which allows only security and law enforcement to carry loaded guns on military facilities outside of war zones, since Army Maj. Nidal Malik Hasan killed 13 people and wounded more than 30 in a shooting spree at Fort Hood, Texas, in 2009.

I don’t know how much effect this would have on premeditated mass shootings, which are relatively rare and don’t seem to be prevented by open carry laws off-base. But it’s an interesting proposal for another reason: it would provide an opportunity to test theories of gun violence in what is likely to be as close to a controlled environment as possible.

As noted above, currently servicemembers are generally not allowed to carry weapons on base, except under certain limited circumstances. Guns typically must be registered and, if the servicemember lives in the barracks, they must be stored in the unit armory, not in one’s room. Servicemembers who live in family housing are usually allowed to keep weapons in the home, but, for example, at Camp Pendleton

[a]ll weapons and ammunition [must] be stored in approved containers. Weapons containers must be capable of being locked. All weapons will be fitted with a trigger lock.

Moreover, such regulations have, presumably, a somewhat higher compliance rate than their civilian analogs (e.g., the regulations challenged in D.C. v. Heller). Soldiers are trained to prize the safe handling of firearms; they are more invested in following, and more informed about, applicable regulations than civilians typically are; and military housing is, at least in principle, subject to occasional inspection.

In short, military servicemembers on base live under a pretty strong and reasonably effective gun control regime.

They also live in a fairly closed environment. Although servicemembers can bring friends and acquaintances onto military bases with them, and a number of civilians work on every base, most bases are closed to the general public. Perhaps as a result, and because the people who do have access to military bases are heavily invested in and strongly identify with the military as a community, crime on military bases is quite low. (It also surely can’t hurt that by definition everyone on a military base is meaningfully employed or is the dependent of someone who is meaningfully employed.)

Removing the gun controls, then, would provide a nice opportunity for an observational study to determine whether an increased gun presence in a stable environment would make the base community safer or less safe. And this might provide some insight into whether guns in the civilian community make people safer or not — a hotly contested question.
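(If you wanted to get concrete about it, the basic design would look something like a difference-in-differences comparison: on-base crime rates before and after the policy change, measured against comparable communities whose rules didn’t change. Here is a rough sketch of that setup; the data, column names, and effect sizes below are invented placeholders, not real figures.)

```python
# Rough sketch of a difference-in-differences comparison: on-base crime rates
# before and after a policy change, versus a comparison community whose rules
# did not change. All numbers below are simulated placeholders, not real data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = 48  # 24 months before and 24 months after the hypothetical change

def simulate(group, treated, effect):
    post = (np.arange(months) >= months // 2).astype(int)
    rate = 2.0 + 0.3 * treated + effect * treated * post + rng.normal(0, 0.2, months)
    return pd.DataFrame({"group": group, "treated": treated,
                         "post": post, "crime_rate": rate})

df = pd.concat([simulate("military_base", treated=1, effect=0.4),
                simulate("comparison_town", treated=0, effect=0.0)])

# The coefficient on treated:post estimates the change in crime attributable
# to the policy, under the (strong) assumption of parallel pre-existing trends.
model = smf.ols("crime_rate ~ treated * post", data=df).fit()
print(model.summary().tables[1])
```

A real study would, of course, have to wrestle with the confounders discussed below (weapons training, deployment-related trauma, the demographics of who lives on base) rather than assuming them away.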

I can think of a few possible confounding factors that might either muddy the data or make it hard to extrapolate the results to civilian society. For one thing, soldiers and Marines (and to a lesser extent airmen and sailors) are already trained in and comfortable with the use and carry of weapons. Many have, of course, carried extensively overseas; during deployment, a soldier’s weapon is nearly always on his body. Despite the intense pressure of the combat environment, there are few non-mission-related shootings. This suggests that military discipline is pretty effective in creating people who use guns professionally but not out of passion. (Alternatively, it could also be that the military is skewed toward people less likely to commit violent crimes to begin with — for example, a Heritage Foundation study found that military recruits as a population are wealthier and better educated than the populace as a whole.)

The flip side of that is that exposure to intense combat experiences seems to be linked to an elevated risk of violent criminal behavior after one returns to the U.S., presumably due to untreated psychological trauma. How that would affect the study is unclear. Would it artificially elevate levels of gun crime? Or, now that the wars have wound down and those suffering trauma have begun to rotate out of the military, will there be a concomitant drop in violent crime unrelated to the change in on-base gun regulations? I don’t know the answer to that, although I would think a carefully-designed study could take it into account.

Finally, of course, military bases are full of, well, military-aged males — i.e., the demographic that commits the overwhelming mass of violent crime. Fighting is not uncommon on military bases, and drinking is a heartily-embraced pastime. Mostly soldiers go home and sleep it off, but if young, single men in the barracks had access to weapons during their off-hours, there’s the potential for drunken brawling to become something more. (This makes the stateside military base quite a different environment from the bases in Iraq and Afghanistan, where soldiers are constantly armed but there is little unsupervised downtime and alcohol is hard to come by.) That demographic skew would make it hard to port statistics directly to the population at large, though one assumes apples-to-apples comparisons could be made.

It should also be said that this is only proposed legislation. Still, should it become law, we’d have an opportunity to closely observe what, if anything, happens to community crime levels when gun control is suddenly radically curtailed and guns become more common in shared spaces.


facial challenges and the Fourth Amendment

My friend Matt Price (a filmmaker here in L.A. — check out trailers for his latest horror-comedy) sent me this interesting Volokh Conspiracy post-mortem on Los Angeles v. Patel, a case involving an L.A. ordinance requiring hotel operators to keep records about their guests and make those records available to the LAPD. Nicholas Quinn Rosenkranz suggests that the Court missed an opportunity to articulate a simple standard for determining when “facial” challenges to a law are appropriate, as opposed to “as-applied” challenges:

The idea here is that one can determine whether a facial or as-applied challenge is appropriate by determining which government actor is bound by the relevant clause and thus who allegedly violated the Constitution. The First Amendment begins “Congress shall make no law …” and so its subject is obviously Congress. A First Amendment claim is inherently a claim that Congress exceeded its power and violated the Constitution by making a law, on the day that it made a law. For this reason, it makes perfect sense that the Court is much more amenable to “facial challenges” in the First Amendment context. A First Amendment claim cannot be “factbound,” to use Scalia’s formulation, because the alleged constitutional violation, the making of a certain law, is completed by Congress before any enforcement facts arise.

But the first clause of the Fourth Amendment is entirely different. It does not say “Congress shall make no law…,” like the First Amendment. It does not, by its terms, forbid legislative action. Rather, it forbids unreasonable searches and seizures — which are paradigmatically executive actions. Here, enforcement facts are relevant to the constitutional analysis; indeed, here, the enforcement facts, the facts of the search or seizure, are the constitutional violation. This is why Alito’s parenthetical for Sibron is so apt: “[t]he constitutional validity of a warrantless search is pre-eminently the sort of question which can only be decided in the concrete factual context of the individual case” (emphasis added). In this context, it is the execution of a search (by the executive), not the making of a law (by the legislature), that allegedly violates the Constitution. This is why, in the parenthetical for the next citation, Alito chooses to quote the penultimate sentence of the Manhattan Institute brief: “A constitutional claim under the first clause of the Fourth Amendment is never a ‘facial’ challenge, because it is always and inherently a challenge to executive action”) (emphasis added).

This is an intriguing idea, but I’m not sure the reliance on “who” (i.e., the branch of government that acts) actually does the work we want it to do in all cases.

To start with, no one actually thinks that the First Amendment only applies to legislatures. If the L.A. Parks Department has an internal, written policy (not authorized by the Legislature) of never allowing socialists to demonstrate in the parks, that policy would be unconstitutional. It would probably also be subject to a facial challenge.

Similarly, it seems to me no big leap to say that a legislative body could pass a law that would have no purpose other than authorizing Fourth Amendment violations, and I see no reason why that could not be subject to a facial challenge as well. For example, a legislature could pass a law authorizing searches of houses according to normal warrant procedures when there is probable cause, and then pass a separate law authorizing warrants for house searches when there is not probable cause, as long as three officers agree that there is reasonable suspicion of a crime (a lesser standard of proof). (Let us assume that there is some mechanism to prevent use of the law in situations where it would be constitutional — perhaps the officers and the magistrate must both certify that there really is no probable cause before invoking the reasonable suspicion provision.) The latter law would serve no purpose except to evade the strictures of the Fourth Amendment, and I see no reason why it could not be subjected to a facial challenge and entirely invalidated.


Matt raised a second issue in his Facebook message to me: “It’s a loaded gun; why do we have to wait for them to pull the trigger before we take them to court?” That question does illustrate a quandary posed by my suggestion that some laws could be facially invalid under the Fourth Amendment: does anyone have standing to challenge my hypothetical law before their house is searched?

I discussed standing when I did my “Blogging Fed Courts” series a couple of years ago:

Article III gives federal courts jurisdiction over “cases” and “controversies.” The terms are not defined, but collectively “cases and controversies” have been taken to be a mere subset of “all things you might be upset about.” In other words, you can’t bring an action in federal court just because you’re pissed off about something. Even if you’re totally right about it. You have to present a “case or controversy,” involving yourself, which the court could lawfully resolve.

The ability to successfully bring suit is usually called “standing.” (It’s a noun — you “have standing” to bring the suit.)

Here are the rules for standing. You have to have been personally injured. Your injury has to be “cognizable” by the courts — there may be injuries that, though real, the court is not prepared to acknowledge. The injury has to be “fairly traceable” to the conduct of the party you want to sue. And it has to be “redressable” by the courts — that is, the remedy you’re seeking in the suit has to be something that would actually fix the problem.

So, to answer Matt’s “loaded gun” question: generally you have to have suffered an “injury” before courts can hear your case.  It turns out, though, that this is a rule more honor’d in the breach when it comes to the First Amendment.  Consistent with Rosenkranz’s notion that a First Amendment violation occurs the moment a legislature legislates, one can have standing to bring a First Amendment complaint even when one has not yet engaged in the protected speech.  The idea is that if you wish to speak, but are afraid to do so because of legal consequences (if your desire to speak is “chilled,” as the cases often say), that is enough to count as an “injury” under the First Amendment.

But the same argument doesn’t necessarily hold for our hypothetical warrant law. There is no particular lawful activity being chilled. (Being chilled in one’s desire to keep contraband doesn’t count, because, according to the Supreme Court, “any interest in possessing contraband cannot be deemed ‘legitimate.'”) There is no constitutionally protected activity being prevented, and so it would be hard to make a case for standing prior to an actual search.

As to standing, then, I think Rosenkranz’s legislative/executive distinction makes sense — there really can be a “loaded gun” lying around, and we can’t do anything about it until the executive branch picks it up and uses it.


the prayers of both could not be answered

In the runup to Independence Day, Sam Goldman argues at Crooked Timber that the Declaration of Independence loses its force if you leave out God:

[T]here is no good reason to treat “Nature’s God” as the key to the Declaration’s theology. Jefferson avoided references to a personal deity in his draft. But either Franklin or Adams added a reference to the “Creator” in the natural rights section devised by the committee of five. And Congress inserted the phrases “Supreme Judge of the World” and “Divine Providence” in the conclusion. If we read as Allen proposes, these phrases should have equal weight to “Nature’s God”.

Taken together, these statements give a picture of God that is not so easily replaced by “an alternative ground for a maximally strong commitment to the right of other people to survive and to govern themselves.” They depict God as the maker of the universe, who cares for man’s happiness, gives him the resources to pursue it, and judges the manner in which he does so. Despite Allen’s assurances that Jefferson tried to avoid religious commitments, writing in a manner compatible with deism, theism, and everything in between, I do not see how the God that emerges from the entire process of composition, could be reconciled with a mere first cause or cosmic watchmaker. On the level of intention, the Declaration presumes a personal and providential deity.

I have always interpreted Congress’s pious insertions as “Flag! Troops!“-style pandering rather than sincere expressions of a belief in a “providential deity” who directs or judges the doings of secular governments. But are they, in fact, philosophically necessary to the project of the Declaration?

Over at the Volokh Conspiracy, Randy Barnett cites a sermon given on the eve of the constitutional convention that expressed a common view at the time that the laws of government, like the laws of mechanics, chemistry, and astronomy, were immutable principles of nature, established by God:

In his sermon, [Reverend Elizur] Goodrich explained that “the principles of society are the laws, which Almighty God has established in the moral world, and made necessary to be observed by mankind; in order to promote their true happiness, in their transactions and intercourse.” These laws, Goodrich observed, “may be considered as principles, in respect of their fixedness and operation,” and by knowing them, “we discover the rules of conduct, which direct mankind to the highest perfection, and supreme happiness of their nature.” These rules of conduct, he then explained, “are as fixed and unchangeable as the laws which operate in the natural world. Human art in order to produce certain effects, must conform to the principles and laws, which the Almighty Creator has established in the natural world.”

Goodrich goes on to explain that one who attempts to skirt these scientific laws of government “attempts to make a new world; and his aim will prove absurd and his labour lost.”

The Declaration opens by stating that “the Laws of Nature and of Nature’s God entitle” the colonies to “dissolve the[ir] bonds” with England and “assume among the powers of the earth, the separate and equal station” of independent states. So Reverend Goodrich’s theory of natural laws of government is quite relevant to the Declaration’s purposes.

Of course, even if you think that the Declaration depends on this natural law theory, you don’t need God to make it real. If there are immutable laws of nature, they can, analytically, exist without any “personal and providential deity” — they can be the work of a distant, uncaring God, a malevolent God, or no God at all.

Goldman points out, however, that the history of the world does not seem to actually support a theory that there are (at least straightforward) laws of government mechanics — respect these rights, or your government will fail:

Th[e] argument is perfectly coherent, given its premise that oppression is counterproductive. The problem is that this premise is likely false. Assertions of rights are often crushed, without much risk to the oppressors. Because they didn’t produce the forecast bad consequences, a purely naturalistic interpretation of the matter would lead us to conclude that these movements had no “right” to succeed.

That conclusion . . . would not be acceptable to Allen—or to the signers of the Declaration. Again, this is why “Nature’s God” is not good enough. In addition to the source of natural order, the Declaration’s God has to care how human events turn out—and perhaps to intervene to ensure that the results are compatible with justice. Otherwise, the signers’ pledge of their lives, fortunes, and sacred honor would be no more than a gamble—and a bad one at that.

But this argument, I think, undermines itself. If God is willing to intervene to ensure the successful outcome of legitimate assertions of rights, we should see a clear pattern of that in history. Whether it’s a law of nature, or God’s desire to create good outcomes for His creatures, if it is reliable enough to be depended upon when one is pledging lives, etc., there ought to be evidence of it. If the evidentiary record shows only chaos, then one should not depend on the intervening force, whether natural or supernatural, to secure the success of one’s revolution.

Goldman then turns to the practical effect of religious belief — as a motivation-multiplier — by considering the way Lincoln, during the Civil War, explicitly invoked God’s will to re-cast the “all men” of the Declaration’s most famous passage:

Individuals are capable of believing almost anything for almost any reason—and even of acting on that basis. But that is a matter for psychology. The political question is whether groups and peoples can be moved to take risks and make sacrifices if they do not think they are justified by a higher power. I am skeptical that this is the case . . . .

This is important because the Declaration is not, as Allen claims, “a philosophical argument”. Instead, it is a call to arms. People generally don’t fight for “commitments” and “grounds”. For better or for worse, they do fight for what they believe God demands.

The Declaration’s greatest interpreter, Lincoln, seems to have recognized this. Before the Civil War, Lincoln treated the Declaration as a work of secular reasoning. In a famous letter from 1859, Lincoln compared its argument to Euclidean geometry. According to Lincoln, “[t]he principles of Jefferson are the definitions and axioms of free society.” To understand politics, all one had to do was draw valid conclusions from certain first principles.

But definitions and axioms are terms of the seminar room, not the battlefield. Although [it] might have been suitable for peacetime, Lincoln’s scholarly account of politics was manifestly inadequate to a war that revolved around the meaning and authority of the Declaration of Independence. So in his second inaugural, he offered a different account of the same principles. This time, he appealed to a “living God” to achieve the right. You do not have to be a Christian to understand what Lincoln was saying. But I do not think you can be an atheist.

I think this is Goldman’s best argument.  People don’t like to stick their necks out for uncertain change, and even though, as discussed above, it doesn’t actually make sense to depend on Providence when asserting one’s rights, still, people can be motivated by irrational appeals to a God who, this time, will definitely stretch out His mighty hand and propel your cause to victory.

But before we give too much ground here, let’s note the peculiar theology of Lincoln’s appeal in the Second Inaugural Address:

Both [the Union and the Confederacy] read the same Bible and pray to the same God, and each invokes His aid against the other. It may seem strange that any men should dare to ask a just God’s assistance in wringing their bread from the sweat of other men’s faces, but let us judge not, that we be not judged. The prayers of both could not be answered. That of neither has been answered fully. The Almighty has His own purposes. “Woe unto the world because of offenses; for it must needs be that offenses come, but woe to that man by whom the offense cometh.” If we shall suppose that American slavery is one of those offenses which, in the providence of God, must needs come, but which, having continued through His appointed time, He now wills to remove, and that He gives to both North and South this terrible war as the woe due to those by whom the offense came, shall we discern therein any departure from those divine attributes which the believers in a living God always ascribe to Him? Fondly do we hope, fervently do we pray, that this mighty scourge of war may speedily pass away. Yet, if God wills that it continue until all the wealth piled by the bondsman’s two hundred and fifty years of unrequited toil shall be sunk, and until every drop of blood drawn with the lash shall be paid by another drawn with the sword, as was said three thousand years ago, so still it must be said “the judgments of the Lord are true and righteous altogether.”

This is grim stuff. It is certainly not the “just world” theology of much of the Old Testament, which spends loads of time explaining that the moral ledger always comes out right. As this thoughtful exegesis by Michael Carasik notes, for example, the two most familiar slavery narratives — Joseph being sold into slavery by his brothers, and the enslavement of the Israelites in post-Joseph Egypt — depict enslavement as a punishment for wrongdoing: Jacob’s betrayal of Esau and Joseph’s oppressive dealing with the Egyptian people, respectively. Notably, in neither narrative is the oppressed person the actual wrongdoer — Joseph was not involved in Jacob’s swiping of Esau’s birthright, and the latter-generation Israelites had nothing to do with Joseph’s opportunistic enslavement of the Egyptians. They simply inherit the consequences of the original sin. For that reason, perhaps, Joseph and the Israelites both eventually rescue themselves from slavery, albeit with God’s help.

Lincoln’s framing of the issue is rather different. He does not suggest that African slaves are the inheritors of some sin that must be punished. They are innocent victims. It’s just that “slavery is one of those offenses which, in the providence of God, must needs come.” Nor does he suggest that God would ever have assisted the slaves in freeing themselves — and, indeed, history at the time was littered with the bodies of failed slave revolutionaries.

At best, the focus is on Pharaoh’s redemption — “the woe due to those by whom the offense came,” which is repaid in gold and blood (likely, despite Lincoln’s words, at a heavily discounted rate).  But Lincoln’s formulation requires a number of odd turns.  God allows slavery, for no good (or at least, no identifiable) reason.  Humans make it happen, sure, and so it’s just that humans repay the debt.  But that debt seems like a poor motivator to the average Northern foot soldier, who bore little individual responsibility for slavery.  If we return to Goldman’s original proposition — that people fight for what God commands, trusting that He will render them victorious — the question is: why now?  What has changed?  Certainly not the Bible, nor Christian theology — all the Christian arguments for and against slavery existed at the founding of the Republic and remained in effect in 1865.  Does God simply lure people and nations into evil and then demand the debt?  What is going on with this God, anyway?

Lincoln’s God reminds me of a class taught at the University of Chicago in the 1990s called “The Radicalism of Job and Ecclesiastes.” I never got a chance to take it, but even reading the description in the catalog, I, as a then-religious person, immediately grasped its significance . . . and was afraid:

Both Job and Ecclesiastes dispute a central doctrine of the Hebrew Bible, namely, the doctrine of retributive justice. Each book argues that a person’s fate is not a consequence of his or her religio-moral acts and thus the piety, whatever else it is, must be disinterested. In brief, the authors of Job and Ecclesiastes, each in his own way, not only “de-mythologizes,” but “de-moralizes” the world.

Lincoln was not an orthodox Christian, and he may well have felt that pious acts “must be disinterested” — that we must do what is right, utterly independent of reward. Indeed, the Second Inaugural, when read closely, does not promise victory — it promises toil and bloodshed that might last until the end of time. And it does not present any particular reason to trust in God as either a moral arbiter or an ally — God’s motives and sense of timing are utterly inscrutable to man, and hence no more comfort than the chaotic void. Lincoln’s theology is too sophisticated to promise, as the Battle Hymn of the Republic does, that God will “loose[] the fateful lightning of His terrible swift sword.” All Lincoln can offer is a resonant reflection that slavery is wrong and that those who fight to end it are in the right. Lincoln’s appeal, ultimately, is not to a “personal and providential god,” but to the internal compass, which demands action.


Anyway, those are this atheist’s thoughts. I don’t know that the Declaration would have convinced me God was on the side of the Revolution, and I’m not sure Lincoln would have convinced me (theologically, at least) that God had moved decisively to end slavery. But I grew up with religion, and my sense of justice is inextricably intertwined with faith. I believe — irrationally, and in my heart — in a kind of platonic Justice that exists even when no one is fully manifesting it. And I want to believe that elegant old metaphor about the moral arc of the universe being long but bending toward justice. On that level, I think, Lincoln could have gotten me.


In that vein, here’s Sister Odetta to sing us out.


thoughts on the threat to American democracy

The Supreme Court’s decision in Obergefell v. Hodges is being praised both for its practical outcome — making same-sex marriage, like opposite-sex marriage, a constitutionally-protected “fundamental right” — and for Justice Kennedy’s warm language celebrating and defending marriage:

No union is more profound than marriage, for it embodies the highest ideals of love, fidelity, devotion, sacrifice, and family. In forming a marital union, two people become something greater than once they were. As some of the petitioners in these cases demonstrate, marriage embodies a love that may endure even past death. It would misunderstand these men and women to say they disrespect the idea of marriage. Their plea is that they do respect it, respect it so deeply that they seek to find its fulfillment for themselves. Their hope is not to be condemned to live in loneliness, excluded from one of civilization’s oldest institutions.

The decision is also morally the right one. But there is, of course, still the question of whether the ruling makes sense as a matter of law. I think it does, based on the cases that have come before, the role of judges as interpreters of the Constitution, and the basic structure of the Constitution. Here are my thoughts.


The majority’s opinion is rooted primarily in the Due Process Clause of the Fourteenth Amendment, which reads:

No state shall . . . deprive any person of life, liberty, or property, without due process of law . . . .

Obviously, the primary function of this clause is to ensure that people subject to U.S. law get “due process of law” before the government works a deprivation on them — so, for example, the government cannot (subject to some exceptions in emergency situations) seize your car without a hearing. The police cannot take you into custody without probable cause, they cannot hold you for very long without a hearing, and the state cannot put you in prison or kill you (again, subject to certain exceptions) without a trial. There must be some formal (and fair) “process” before life, etc. can be taken from you — and the greater and more intrusive the deprivation, the more process is required.

However, as the majority opinion in Obergefell explains,

This Court has interpreted the Due Process Clause to include a “substantive” component that protects certain liberty interests against state deprivation “no matter what process is provided.” The theory is that some liberties are “so rooted in the traditions and conscience of our people as to be ranked as fundamental,” and therefore cannot be deprived without compelling justification.

Of course, some of these rights are already specifically enumerated in the Constitution: the rights to free speech, religious expression, arms-bearing, jury trial, and so on. But these enumerated rights are not the only rights that are typically understood to be fundamental, and constitutionally protected — nor, by the plain text of the Constitution, should they be:

The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.

Thus, the Court has occasionally found constitutionally protected such rights as the right to travel between states, the right to raise one’s children, the right to decline medical treatment, the right to use birth control, and the right to autonomy in one’s sexual behavior. (Somewhat more controversially, the Court has also found a constitutional right to have an abortion — a right whose outer boundaries have contracted somewhat since the Court’s initial decision — and a right to contract free from interference by government regulators — a right which has subsequently evaporated altogether.)

The right to marry is another of these unenumerated constitutional rights. In Loving v. Virginia, the Court overturned a law criminalizing interracial marriage, in part because

Marriage is one of the “basic civil rights of man,” fundamental to our very existence and survival. To deny this fundamental freedom on so unsupportable a basis as the racial classifications embodied in these statutes, classifications so directly subversive of the principle of equality at the heart of the Fourteenth Amendment, is surely to deprive all the State’s citizens of liberty without due process of law. The Fourteenth Amendment requires that the freedom of choice to marry not be restricted by invidious racial discriminations. Under our Constitution, the freedom to marry, or not marry, a person of another race resides with the individual, and cannot be infringed by the State.

That very strong statement of the right — one of the strongest in the “fundamental rights” canon — announces a couple of important principles. First, the right to marry is important enough that it cannot be infringed on an “unsupportable basis” — that is, irrational reasons won’t do. Second, it is not enough that one be able to marry someone; a core part of the right to marry is the ability to marry the person of one’s choosing.

These principles undergird the Court’s decision in Obergefell, of course. Bans on gay marriage are logically “unsupportable,” in that proponents have been unable to advance any reason for the bans that would not also apply to some sizable portion of straight marriages. And they interfere with the fundamental right of personal choice in precisely the way that anti-miscegenation laws do.

The majority also points to two other cases affirming the right to marry — Turner v. Safley, which held that the Missouri Division of Corrections could not prevent inmates from getting married, and Zablocki v. Redhail, which invalidated a Wisconsin statute requiring persons owing child support to prove to a court that they were current on their payments before they could get married. Both cases underscore how serious the state interest must be — and how well-founded the rationale — to impinge on the individual’s interest in marriage.

In Zablocki, the state argued, among other things, that

the statute provides incentive for the applicant to make support payments to his children.

But the Court found that a state interest in collecting child support — a quite profound interest — did not justify the imposition on the right to marry, because the chosen method was unlikely to create the desired result:

First, with respect to individuals who are unable to meet the statutory requirements, the statute merely prevents the applicant from getting married, without delivering any money at all into the hands of the applicant’s prior children. More importantly, regardless of the applicant’s ability or willingness to meet the statutory requirements, the State already has numerous other means for exacting compliance with support obligations, means that are at least as effective as the instant statute’s and yet do not impinge upon the right to marry. Under Wisconsin law, whether the children are from a prior marriage or were born out of wedlock, court-determined support obligations may be enforced directly via wage assignments, civil contempt proceedings, and criminal penalties.

In Turner, a state corrections regulation forbade inmates to marry without permission from the superintendent of the prison, which would usually be given only for “a pregnancy or the birth of an illegitimate child.” The Court noted that prison regulation was an area where judges are particularly unqualified to second-guess the executive branch, which has special expertise in administration of the lives and needs of prisoners. The Court therefore announced that it would evaluate prison regulations with a much laxer constitutional measure than laws outside the prison walls: the regulation need only be “reasonably related to legitimate penological interests” to survive constitutional scrutiny. Nonetheless, the Court found that the Missouri regulation, supposedly predicated on security concerns, did not meet even that minimal standard:

There are obvious, easy alternatives to the Missouri regulation that accommodate the right to marry while imposing a de minimis burden on the pursuit of security objectives. See, e. g., 28 CFR § 551.10 (1986) (marriage by inmates in federal prison generally permitted, but not if warden finds that it presents a threat to security or order of institution, or to public safety) . . . . Moreover, with respect to the security concern emphasized in petitioners’ brief — the creation of “love triangles” — petitioners have pointed to nothing in the record suggesting that the marriage regulation was viewed as preventing such entanglements. Common sense likewise suggests that there is no logical connection between the marriage restriction and the formation of love triangles: surely . . . inmate rivalries are as likely to develop without a formal marriage ceremony as with one.

(Note, by contrast, that prisoners and even ex-prisoners can have many of their other fundamental rights abridged in quite drastic ways — they lose entirely the right to bear arms, for example, and those on parole or supervised release may lose the right to travel, at least without permission. But they still freely exercise the right to marry.)

The outcome in Obergefell flows, if not inexorably, then logically enough from these precedents. If the right to marry (and, specifically, marry the partner of one’s choosing) is fundamental, and if the state must present serious and not merely flimsy or pretextual arguments in favor of abridging that right, then it follows quite sensibly that the state may not abridge the right to marry a same-sex partner without solid arguments that some state interest requires it. Serious arguments that same-sex marriage would impinge on any legitimate state interest are exactly what have been lacking in the various cases litigated over the past decade, and so the Court held that such abridgment is improper.


Of course, because the announcement of “fundamental rights” is an exercise of judicial power, perhaps informed by but not determined by democratic political processes, the Court has consistently warned that the justices must

exercise the utmost care whenever we are asked to break new ground in this field, lest the liberty protected by the Due Process Clause be subtly transformed into the policy preferences of the members of this Court.

Chief Justice Rehnquist wrote that

[o]ur Nation’s history, legal traditions, and practices . . . provide the crucial “guideposts for responsible decisionmaking,” that direct and restrain our exposition of the Due Process Clause.

The dissenters in Obergefell therefore primarily rest their arguments on the notion that the Court, by deciding the issue unilaterally, has overstepped its bounds, arrogating to itself the policymaking power more properly vested in the legislature, and doing so without regard to history or our legal traditions.

Chief Justice Roberts, for example, notes that

There is no serious dispute that, under our precedents, the Constitution protects a right to marry and requires States to apply their marriage laws equally. The real question in these cases is what constitutes “marriage,” or—more precisely—who decides what constitutes “marriage”?

Who indeed? One could answer Roberts with Chief Justice Marshall’s maxim, from Marbury v. Madison, that “[i]t is emphatically the province and duty of the judicial department to say what the law is.” Deciding the definition of legal terms is the essence of the judicial function.

Beyond that, though: to leave that authority entirely in the hands of the legislature would be to allow legislators to exclude people from legal marriage in ways that would contravene the Court’s uncontroversial precedents. It cannot be the case, for example, that the legislature can define marriage as “the legal union of a man and a woman of the same race.” But why not? If the legislature gets to “define” marriage, shouldn’t Chief Justice Roberts want to overturn Loving as an unreasonable infringement on the powers of the people’s elected representatives? There is no indication that he does. Similarly, with regard to Turner, why can’t the legislature (or prison regulators) “define” marriage as “the legal union of one free man and one free woman”? That language is uncomfortably close to the historical precedent of excluding slaves from legal marriage, but if the legislature has carte blanche to decide “what constitutes marriage,” why not?

Of course, it should be obvious that judges are also not free to define marriage just any way they want. They cannot define marriage nonsensically — as, say, a physical welding of one toaster and one office chair. They also presumably cannot define marriage in a way that would substantially encumber the practice of marriage as we know it today — for example, by creating onerous requirements for divorce.

The dissenters ask whether judges can define marriage in ways that diverge from historical tradition. But their reading of history is peculiarly narrow. According to Roberts,

As the majority acknowledges, marriage “has existed for millennia and across civilizations.” For all those millennia, across all those civilizations, “marriage” referred to only one relationship: the union of a man and a woman.

That is not remotely true. First, as Roberts knows perfectly well, many cultures have had a view of marriage much broader than “a” man and “a” woman — the ancient Hebrews, for example, and the somewhat less ancient Muslims, and the fairly recent Mormons all embraced polygamy. (And that’s just cultures in the Abrahamic tradition.) Second, lurking in the background of many of the cultures of the past is a little-remarked-on tradition of acknowledged same-sex couplehood and marriage. Sometimes these occurred via a culturally-acceptable gender fluidity; e.g.:

We’wha was a key cultural and political leader in the Zuni community in the late nineteenth century, at one point serving as an emissary from that southwestern Native American nation to Washington, D.C. He was the strongest, wisest, and most esteemed member of his community. And he was a berdache, a male who dressed in female garb. Such men were revered in Zuni circles for their supposed connection to the supernatural, the most gifted of them called lhamana, spiritual leader. We’wha was the most celebrated Zuni lhamana of the nineteenth century. He was married to a man.

But sometimes not, too:

[T]here were societies in pre-colonial Africa that permitted women to marry other women. These marriages typically helped widowed women who didn’t want to remarry a man or return to their family or their husband’s family after the husband’s death . . . . Instead, the widow could pay a bride price and perform other rituals needed to marry another woman . . . .

(There are more things in heaven and earth, Horatio….)

More importantly, the development of American law and history supports the majority’s conclusion. Here is Roberts again:

Marriage did not come about as a result of a political movement, discovery, disease, war, religious doctrine, or any other moving force of world history—and certainly not as a result of a prehistoric decision to exclude gays and lesbians. It arose in the nature of things to meet a vital need: ensuring that children are conceived by a mother and father committed to raising them in the stable conditions of a lifelong relationship . . . .

The premises supporting this concept of marriage are so fundamental that they rarely require articulation. The human race must procreate to survive. Procreation occurs through sexual relations between a man and a woman. When sexual relations result in the conception of a child, that child’s prospects are generally better if the mother and father stay together rather than going their separate ways. Therefore, for the good of children and society, sexual relations that can lead to procreation should occur only between a man and a woman committed to a lasting bond.

As anthropology, this is not correct — at least, not as an absolute statement of the inevitable requirements of child-rearing. For example, in matrilineal societies, including a number of American Indian societies, it is common for a child’s primary male caregiver to be his maternal uncle, rather than his biological father. Fathers may be somewhat ancillary to the child’s life, even if an affectionate bond remains.

But it is also not true of marriage in either historical or modern American society. Historically, it paints an overly rosy, child-oriented picture and ignores a primary motive for promoting marriage as the channel for sexual energy: the desire to ensure that family property passed to the biological offspring of a particular set of partners, who, at least in early colonial history, would have been selected for one another by their families. There is also the Pauline/Augustinian strain, strong in American religious traditions, of thinking about sex as something earthly and unfortunate, and of marriage as the “release valve” institution designated by God to keep the weak from sin until the Lord returns.

Despite the church’s best efforts, though, it has never been the case that “sexual relations that can lead to procreation . . . occur only between a man and a woman committed to a lasting bond.” In the latter part of the 1700s, “more than one girl in three was pregnant when she walked down the aisle. In parts of Britain, 50 percent of brides were great with child.” And even if many of those pregnancies resulted in marriage in earlier periods of our history, they certainly don’t today. As early as the 1970s, one third of children were conceived, and one in five were born, out of wedlock. Today, among millennials, 64% of mothers have had at least one of their children without being married.

This is to say nothing, of course, of the marriages of those who cannot have children or do not want to have children. As marriages occur later in life and fertility falls, it is now not remotely uncommon for a marriage to have nothing whatever to do with child-rearing.

Marriage is also not what it used to be as a legal mechanism. The Obergefell majority mentions the law of coverture — which prevented a married woman from holding property or entering into contracts — as an example of a practice that has fallen away. Roberts insists that this change has not altered the “core” of marriage, but he is wrong. The abandonment of coverture and the development of modern marital property law, child support law and “no-fault” divorce all ensure that marriage is no longer guaranteed, or even likely, to result in “the stable conditions of a lifelong relationship.” If either party does not like a marriage, they can leave and forge a life without their former partner, unencumbered by either legal ties or, hopefully, economic dependency.

Nor is marriage any longer required to ensure one’s children will be able to inherit. In many states, now — thanks in part to the meddling of the Supreme Court — an illegitimate child can inherit routinely from the mother, and can inherit from the father if paternity is established during the father’s lifetime.

Thus, the Supreme Court’s description of marriage in 1888

Marriage is something more than a mere contract, though founded upon the agreement of the parties. When once formed, a relation is created between the parties which they cannot change, and the rights and obligations of which depend not upon their agreement, but upon the law, statutory or common. It is an institution of society, regulated and controlled by public authority.

— though still technically true, no longer accurately describes the practical effect of our law of marriage. The overwhelming trend has been away from social control and toward individual autonomy and liberty of choice.

Perhaps because the effects of these legal changes are so obvious, Chief Justice Roberts next turns to a kind of appeal to bottom-up authority:

The majority observes that these developments “were not mere superficial changes” in marriage, but rather “worked deep transformations in its structure.” They did not, however, work any transformation in the core structure of marriage as the union between a man and a woman. If you had asked a person on the street how marriage was defined, no one would ever have said, “Marriage is the union of a man and a woman, where the woman is subject to coverture.”

Probably that’s true, but that is because they wouldn’t have had to — the law did the defining for them. If you asked a person on the street to define a “contract,” likely they could not come up with offer, acceptance, consideration, and mutuality, either. But the shape of their lives nonetheless depends on the law correctly identifying and applying the elements of a contract. Moreover, if you had asked people of the past specific questions about how marriage worked — “how easy is it to get a divorce?” “can a child born outside the confines of marriage inherit family assets?” — my suspicion is that most would have been able to give you an accurate answer.

But even if the Chief Justice were correct as to the universality and centrality of a certain purpose of marriage, and even if the law of marriage had not definitively moved away from the model he describes, that would still not provide a rational reason not to affirm gay marriage in a modern world where adoption and “blended families” are common. Procreation is important, yes, and a child’s “prospects” — however you care to define that — might well be better if he or she has two parents rather than one. (I could ask whether a child’s prospects increase linearly with the number of parents, and if so whether that is not a strong argument for plural/polyamorous marriage… but let’s let that lie.) But nothing in the social science we have supports the idea that those parents must be the child’s biological parents. To the degree that marriage is and should be about providing a loving, united home for a child to grow up in, then, it seems obvious that expanding the total possible number of marriages that could provide such a home can only redound to the benefit of children.

Given all this, the majority’s conclusion strikes me as in line with the historical trends regarding the role and purpose of marriage in our lives.


That leaves, I think, the real argument hidden behind all this law-office history and sociology: that the decision is undemocratic. As Justice Scalia writes, the Court is both unelected and unrepresentative:

[T]his Court . . . consists of only nine men and women, all of them successful lawyers who studied at Harvard or Yale Law School. Four of the nine are natives of New York City. Eight of them grew up in east- and west-coast States. Only one hails from the vast expanse in-between. Not a single South-westerner or even, to tell the truth, a genuine Westerner (California does not count). Not a single evangelical Christian (a group that composes about one-quarter of Americans), or even a Protestant of any denomination.

And that’s true. (One could ask whether the abortion litmus test for Republican appointees has skewed the Court Catholic in recent decades — all five Republican appointees currently sitting are Catholic. But it’s also possible that the religious distribution is just a statistical anomaly — historically the Court has been highly Protestant.) Courts are not representative bodies, and judges should exercise caution before plunging ahead where democracy has not acted, or remains divided.

On the other hand, there are good reasons to think this decision, even if handed down by a small group of judges, is not the end of democracy as we know it.

First, decisions by the Supreme Court are always made by this elite, unrepresentative group — a fact Justice Scalia does not bother to highlight when he is in the majority. Scalia attempts to distinguish between judges “functioning as judges, answering the legal question whether the American people had ever ratified a constitutional provision that was understood to proscribe the traditional definition of marriage” and judges answering the “policy question of same-sex marriage.” But of course, the selection of the “legal question” to be answered — and, especially, Scalia’s choice to frame it in such a way as to guarantee the outcome he wants — is, itself, a “policy” choice. The idea that judges can answer legal questions without interposing their personal preferences is a bit of a fiction.

This is why tradition requires judges — at least appellate judges, who “make” the law — to explain their reasoning. Judges are never going to just, in Chief Justice Roberts’ memorable phrase, “call balls and strikes,” and we should not expect them to leave their personal biases and opinions at the door. Rather, we should expect exactly what happens: thousands of judges around the country reading and critiquing each other’s reasoning — not to mention legal scholars, journalists, and the general public. Judicial opinions do not occur in a vacuum, are subject to review over time by other judges, and are (on some subjects) the focal point of intense public scrutiny.

Second, judges must have some freedom to, if I may put it indelicately, make some things up. There is much on which our system of democracy depends that simply appears nowhere in the text of the Constitution. I am not talking just about unenumerated fundamental rights, although that is probably the category of non-textual “constitutional” provisions people are most familiar with. Judicial review itself — the idea that judges can declare a law unconstitutional, and that the decision of the Supreme Court on matters of constitutionality (or even statutory interpretation) is final — is simply not present in the Constitution. Go read Article III — it’s short and plainly written. Nothing in it suggests that the Supreme Court is the final arbiter of constitutional limits, or that the other branches must respect the Court’s decisions. That principle — which we now accept as one of the core checks on executive and legislative power — was itself the creation of judges, based on their judgment of how the Constitution must work, what it must be saying, even though it doesn’t actually say anything of the kind explicitly.

Judges are therefore always filling in gaps and silences in the Constitution. (As another example, the Constitution grants Congress the power to “regulate commerce with” Indian tribes, but is silent about how Indian nations would function as governments if absorbed into the United States. Once that happened as a practical matter, judges developed an entire body of quasi-constitutional law to define the nature and limits of Indian tribal sovereignty.) This is nothing new, and nothing alarming.

Judges also resolve ambiguities in the Constitution. For example, the Fifth Amendment provides that

No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger[.]

Now, in that archaically written sentence, does “when in actual service in time of war or public danger” modify only “the militia,” or both “the land or naval forces” and “the militia”? In other words, can a soldier in the Army or a sailor in the Navy demand a grand jury indictment when tried during peacetime? Or is it only militia members who have the right to a grand jury during peacetime? Grammatically, the sentence is ambiguous; courts have had to resolve this question. And what does “arising in the land or naval forces” mean? Does the crime have to be connected to a soldier’s actual military service, or can he be tried without a grand jury indictment on any crime he commits during his term of service? Courts have had to answer that question, too.

As long as judges confine themselves to either resolving ambiguities or inferring necessary constitutional mechanisms and fundamental rights of the people (even rights not previously understood), there is no constitutional crisis. Judges should move cautiously and prudently, examine their biases, and base their opinions on logical arguments. Those opinions should then be analyzed and criticized, to see whether their reasoning is valid and in line with the country’s general values. Sometimes, a judicial opinion that purports to resolve an ambiguity will be found to be mistaken, or so obviously the result of intolerable bias that it must be overturned. Many judicial constitutional inventions, however, will stick, because they obviously further the cause of justice in a democracy. There is nothing disastrous to the republic in this ongoing development of constitutional principles.

Of course, it is significantly more dangerous when the Court outright defies the plain meaning of an explicit constitutional provision. In Korematsu v. United States, for example, the Court held that an executive order excluding Americans of Japanese ancestry from parts of the West Coast during World War II was constitutional as an exercise of the President and Congress’s war powers. Notably, the majority opinion does not mention the words “equal protection,” although the result of that opinion is plainly contrary to the Equal Protection Clause of the Fourteenth Amendment, as noted by one of the dissenting judges. Similarly, in Kelo v. City of New London, the Court held that the Fifth Amendment, which provides that private property may only be taken by the government for “public use,” did not bar a city from seizing people’s property and turning it over to a private developer.

Korematsu and Kelo have been heavily criticized over the years. It is probably safe to say that Korematsu is no longer good law, and its injustices would not be repeated by the Court today, although it has never officially been overturned. And many state legislatures reacted to Kelo by passing statutes preventing cities and counties from exercising eminent domain for the benefit of private parties.


But if later Courts and/or legislatures do not respond, there is still always a backstop against perceived judicial overreach. It is not, as Justice Scalia suggests in the final paragraph of his dissent, executive nullification:

The Judiciary is the “least dangerous” of the federal branches because it has “neither Force nor Will, but merely judgment; and must ultimately depend upon the aid of the executive arm” and the States, “even for the efficacy of its judgments.” With each decision of ours that takes from the People a question properly left to them—with each decision that is unabashedly based not on law, but on the “reasoned judgment” of a bare majority of this Court—we move one step closer to being reminded of our impotence.

That is a solution that only invites worse abuses, and by a far more powerful branch. (Just ask the Cherokee.) The executive branch — i.e., the branch that already has the power of armies, police, and intelligence services — should not unmoor itself from obedience to the courts, no matter how wrong-headed a particular decision might seem.

Rather, the solution that always remains available is popular sovereignty. The people are always free to amend the Constitution after a Supreme Court ruling that “gets it wrong.” Indeed, the very first amendment to the Constitution after the Bill of Rights was in response to a ruling of the Court. In Chisholm v. Georgia, the state of Georgia argued that, as a sovereign, it could not be sued by an individual. The Court held that Article III’s grant of federal jurisdiction over “controversies . . . between a State and citizens of another State” abrogated most claims Georgia might have to sovereign immunity. Congress promptly proposed an amendment to the Constitution to reverse Chisholm, the amendment was ratified by the states, and states have been immune from suit by citizens (unless they consent to be sued) ever since.

The “go amend the Constitution” argument is often thrown out by conservative judges — Scalia prominent among them — who believe, e.g., that the First Amendment largely forbids limits on campaign finance:

The principle of the First Amendment is the more the merrier; the more speech the better. False speech will be answered by true speech. That’s what we believe and maybe it’s a stupid belief, but if it is you should amend the First Amendment.

But there is no reason that argument should not also apply when the Court does something conservatives dislike. If gay marriage is bad enough that it seriously imperils our democracy — do something! Agitate for a convention of states, for example. Or, like the left, introduce an amendment in Congress. The process is cumbersome — too cumbersome, as Scalia himself frankly acknowledges. But when people hate something enough — when it is a clear policy disaster — the political will can be found to pass an amendment.

But if, as I suspect, the problem is not so severe as all that, then all the normal solutions are available: try to convince the Court to change its mind, test the limit of the holding with edge cases, engage in sustained public debate to convince jurists that the law means something else.

Or… just live with it. You don’t always get what you want — even if you are The People. A republic is like that sometimes.


the real reason we should put Harriet Tubman on the twenty

The group Women on 20s has recently gotten some pretty good press for the idea that Harriet Tubman should replace Andrew Jackson on the $20 bill. Personally I always favored John Ross for the twenty, just to really stick it in Ol’ Hickory’s eye, but the voters (well, internet voters) have spoken, and I approve of their choice. And anyway, Tubman has a distinct advantage over other historical figures whose names have been bandied about to replace Jackson: she is a legitimate badass.

So was Jackson, of course. But Jackson is best known for driving Indians off their land, for helping to annex Florida, and for fighting the British (who were supporting the Indians) in a dumb war that did little to accomplish its ostensible goals but did, again, screw the Indians. Jackson’s military adventures are one face of American courage, but not its best face — rather, the face of America the Expansionist and Belligerent.

Tubman, on the other hand, represents a different kind of physical courage. It’s well-known, of course, that she put her life on the line again and again by returning to Maryland to help others escape slavery after her own daring escape. Less well-known, but at least as dramatic, is this spectacular episode from the Civil War in which she masterminded a Union raid into Southern territory to free slaves to join the fight:

It’s no exaggeration to say that the Combahee raid was unique in American history. All Union operations in slave territory, especially as the Emancipation Proclamation became well known, yielded the self-liberated by the hundreds. But the Combahee raid was planned and executed primarily as a liberation raid, to find and free those who were unable or unwilling to take the enormous risks to reach Union lines on their own. That’s how Tubman conceived of it. That, too, is unique – because for the first and only time in the Civil War, or for that matter any American conflict before this century, a woman (and a civilian at that) played a decisive role in planning and carrying out a military operation….

Tubman did not speak Gullah, a language common among coastal slaves. As Tubman herself says of a crucial moment in the raid: “They wasn’t my people … because I didn’t know any more about them than [a white officer] did.” And these were slaves who worked mostly in the fields, men and women who trusted “house” slaves as little as they trusted whites, even white Yankees.

In other words, the amazing thing about Tubman’s role during the raid was not that she was in her element, but that she was so far outside it.

Yet it’s clear that it was Tubman who visited the camps of liberated slaves along the coast and recruited the 10 scouts named in Union records, 9 of whom had escaped from nearby plantations. Lieutenant George Garrison, posted to one of the Northern-raised black regiments, said, “She has made it a business to see all contrabands escaping from the rebels, and is able to get more intelligence than anybody else….”

The Second South Carolina was not made up of veterans. The men had far more in common with Tubman than with their own officers. That’s why she went with them on the raid. Yet Tubman wasn’t a passenger. The intelligence she gathered, the soldiers she recruited, indicate that she actually planned the raid with Hunter and Montgomery: three landings on the right, one on the left….

As the troops finished their demolition work, the fleeing slaves started to reach the boats, many more slaves than there was space available. “When they got to the shore,” Tubman recalled later, “they’d get in the rowboat, and they’d start for the gunboat; but the others would run and hold on so they couldn’t leave the shore. They wasn’t coming and they wouldn’t let any body else come.”

That’s when a white officer told Tubman to sing to “your people.” Even decades later, when she would regale white audiences with the Combahee story, she said she resented that – a surprisingly modern sensitivity. But she did sing. And it worked. “Then they throwed up their hands and began to rejoice and shout, glory! And the rowboats would push off.”

It’s hard to understand how the song Tubman recalled singing – about how “Uncle Sam is rich enough to buy you all a farm” – could have persuaded those left behind to let the boats go. Did she intentionally omit the fact that she threatened to shoot anyone who tried to back out from escaping? Meanwhile, the Confederates set upon those on shore with dogs and guns; at least one young girl was killed. But hundreds escaped.


The difference I’m getting at is not about nonviolence, per se. The Combahee raid was an act of war and involved fighting. But Tubman replacing Jackson on the twenty could symbolically represent a shift in American thinking about honor and courage — away from the kind of courage it takes to take things from a weaker people, or to dominate those you define as “enemies,” and toward the kind of courage it takes to put yourself at risk so that people can be free.

I say this without personal critique. Jackson was a certain kind of violent, aggressive man, of which there will always be some among us, and that is fine. People are as they are, and Jackson had his good points: he was apparently a loyal friend and a large-hearted husband and father. (In one of those perversities that are forever wrinkling up neat historical narratives, Jackson even adopted an Indian son — after massacring most of his village.) He also took a more expansive view of suffrage and popular democracy than the prior generation of (largely well-to-do) revolutionary-era leaders had. But Jackson was a duellist, literally and otherwise, an irascible man who made enemies easily and held long grudges. Perhaps his natural tendencies toward conflict were brought to full flower by a bellicose Southern culture of honor. I don’t know. I don’t care. It’s not about a judgment of the man as an individual — something I care about less and less in these matters. It’s much more about the cultural forces that selected such a man, put him at the head of various armies and then at the head of the country, and gave him the power and authority to do some measure of evil.


We love stories of physical, direct heroism. And I think they do more than just scratch the itch we have for vicarious adventure. They provide models for thinking about, and feeling drawn to, acts of less concrete heroism. That’s a good thing. But this country has matured quite a bit in two hundred years. If Jackson — symbol of courage in the name of acquisitiveness and terrorizing your enemies — was one model of American badassery in our nation’s youth, he needn’t be the only one we ever have. Tubman’s model — courage as the taking of personal risks in pursuit of a truer, deeper, more equal liberty — could take a turn in the front for a while.

Jacksonian courage is the sort of courage that fueled investment banking and business culture for decades. That culture — intensely macho, fratty, willing to substitute bluster for facts and understanding, and determined to see the world as zero sum — set the stage for the financial crisis.

Maybe that’s not what we need so much of these days. Maybe what we need are politicians who buck their leadership, even at cost to themselves, when the really important things are on the line. Maybe what we need are public defenders and civil rights lawyers keeping the justice system honest, even if doing so more-or-less shuts them out of the legal profession’s positions of power. Maybe we need more whistleblowers. Maybe we need more citizen journalists. Maybe that’s the kind of courage we should celebrate.


born in flames

I have more thoughts on presidents, war, and voting that I hope to get to soon. But at the moment the eruption of violence in Baltimore seems more urgent.

On Sunday I posted this well-intentioned Mic article to Facebook:

On Saturday, over 2,000 protesters marched to Baltimore’s City Hall to protest the death of 25-year-old black man Freddie Gray. Gray died on April 19 after suffering a spinal injury while in Baltimore Police Department custody a week earlier. At some point between when Gray was put into a police van and shackled and the time paramedics were called over half an hour later, something nasty happened.

According to Gray’s family attorney, his spine was 80% severed at the neck. Deputy Police Commissioner Jerry Rodriguez admits that Gray gave up without the use of force.

Since then, thousands of people have taken to the streets every day to demonstrate against the fate that befell Freddie Gray and countless other people who have been killed by police in America. But on Saturday night, a small minority of Baltimore’s residents decided to irresponsibly engage in property destruction and acts of violence — and the media lost its mind….

Despite the fact that there have been peaceful protests in Baltimore every day since Gray died on April 19, some folks seem determined to frame the narrative around the actions of a disgruntled minority.

The article makes some very good, worthwhile points about the way the violence is interpreted by the media, including all the usual notes about the difference between white riots (hijinks) and black riots (END TIMES!!!) and the fact that complex images are often misinterpreted. This, for example:

[image: “riot drunk chair throw”]

went through the following permutation, as explained by journalist Brandon Soderberg:

here is a photo of me stopping a woman from going at protestors (she seemed very drunk) . . . saving her from herself.

that image is being sent around to suggest I was protecting her from protestors . . . .

you’ll also see images of @trustpunch and @ItsGiannaBitch trying to stop her. we were part of the protest. drunk lady was walking AT protest

another drunk woman threw a stool at me and someone else then kept yelling at protestors, walking at them hands up “Come at me”


But while the media criticism is invaluable, as of today it no longer seems tenable to say that this violence is the work of a rotten few. The rioters and window-smashers might still be a minority, but at this point they are a powerful, driving force in what’s going on in Baltimore. This is not, for example, the brief, mild violence that interrupted demonstrations in Los Angeles after the Zimmerman verdict a couple of years ago. As yesterday’s photos show, what’s happening in Baltimore right now involves sustained, violent confrontation with police:

[images: riot1 through riot4]

A slightly shell-shocked Shep Smith suggested that it looked like Palestinians fighting with the IDF. Which seems about right:

[image: riot5]

Of course, clashing with police is one thing; even white kids from the leafy suburbs did that, once upon a time:

[image: whiteriot]

Harder to take, for a squeamish white liberal at least, are the images of looting, which conjure the worst stereotypes imaginable and seem impossible to square with political protest:

[images: riot6, riot6.5, riot7]


In one sense — a strictly logical sense — it of course does not matter whether protesters are doing bad things. If they are correct about police violence and ongoing, systemic oppression, they are correct about that, independent of their own actions this week. But because humans are human, and because they have a hard time separating the message from the messenger, it is worth exploring where this violence — including the desire to commit property crimes against convenience stores and other businesses — really comes from.

This thoughtful post by blogger Radical Faggot attempts to locate all these acts in a logical strategy of disruption:

I’m overwhelmed by the pervasive slandering of protesters in Baltimore this weekend for not remaining peaceful. The bad-apple rhetoric would have us believe that most Baltimore protesters are demonstrating the right way—as is their constitutional right—and only a few are disrupting the peace, giving the movement a bad name….

Non-violence is a type of political performance designed to raise awareness and win over sympathy of those with privilege. When those on the outside of struggle—the white, the wealthy, the straight, the able-bodied, the masculine—have demonstrated repeatedly that they do not care, are not invested, are not going to step in the line of fire to defend the oppressed, this is a futile political strategy….

The political goals of rioters in Baltimore are not unclear—just as they were not unclear when poor, Black people rioted in Ferguson last fall. When the free market, real estate, the elected government, the legal system have all shown you they are not going to protect you—in fact, that they are the sources of the greatest violence you face—then political action becomes about stopping the machine that is trying to kill you, even if only for a moment, getting the boot off your neck, even if it only allows you a second of air. This is exactly what blocking off streets, disrupting white consumerism, and destroying state property are designed to do….

[W]hile I don’t believe that every protester involved in attacking police cars and corporate storefronts had the same philosophy, did what they did for the same reasons, it cannot be discounted that when there is a larger national outcry in defense of plate-glass windows and car doors than for Black young people, a point is being made….

A fine point. I also don’t think every person who smashed a convenience store window today (or even every person who threw a rock or set fire to a police vehicle) was motivated by political consciousness. But Baltimore and communities like it are sending us a very clear message, intended or not: they are in a permanent state of civic disaster:

Over the past four years, more than 100 people [in Baltimore] have won court judgments or settlements related to allegations of brutality and civil rights violations. Victims include a 15-year-old boy riding a dirt bike, a 26-year-old pregnant accountant who had witnessed a beating, a 50-year-old woman selling church raffle tickets, a 65-year-old church deacon rolling a cigarette and an 87-year-old grandmother aiding her wounded grandson ….

And in almost every case, prosecutors or judges dismissed the charges against the victims—if charges were filed at all. In an incident that drew headlines recently, charges against a South Baltimore man were dropped after a video showed an officer repeatedly punching him—a beating that led the police commissioner to say he was “shocked.”

And while the Department of Justice declined to pursue civil rights charges against Darren Wilson in the Mike Brown shooting, the Department’s investigation of the Ferguson city government and police department revealed a scheme of policing-as-revenue-collection that would make the Sheriff of Nottingham’s hair stand on end:

Ferguson’s law enforcement practices are shaped by the City’s focus on revenue rather than by public safety needs. This emphasis on revenue has compromised the institutional character of Ferguson’s police department, contributing to a pattern of unconstitutional policing, and has also shaped its municipal court, leading to procedures that raise due process concerns and inflict unnecessary harm on members of the Ferguson community. Further, Ferguson’s police and municipal court practices both reflect and exacerbate existing racial bias, including racial stereotypes. Ferguson’s own data establish clear racial disparities that adversely impact African Americans. The evidence shows that discriminatory intent is part of the reason for these disparities. Over time, Ferguson’s police and municipal court practices have sown deep mistrust between parts of the community and the police department, undermining law enforcement legitimacy among African Americans in particular.

The City budgets for sizeable increases in municipal fines and fees each year, exhorts police and court staff to deliver those revenue increases, and closely monitors whether those increases are achieved. City officials routinely urge Chief Jackson to generate more revenue through enforcement. In March 2010, for instance, the City Finance Director wrote to Chief Jackson that “unless ticket writing ramps up significantly before the end of the year, it will be hard to significantly raise collections next year. . . . Given that we are looking at a substantial sales tax shortfall, it’s not an insignificant issue.” Similarly, in March 2013, the Finance Director wrote to the City Manager: “Court fees are anticipated to rise about 7.5%. I did ask the Chief if he thought the PD could deliver 10% increase. He indicated they could try.” The importance of focusing on revenue generation is communicated to FPD officers. Ferguson police officers from all ranks told us that revenue generation is stressed heavily within the police department, and that the message comes from City leadership. The evidence we reviewed supports this perception.

The City’s emphasis on revenue generation has a profound effect on FPD’s approach to law enforcement. Patrol assignments and schedules are geared toward aggressive enforcement of Ferguson’s municipal code, with insufficient thought given to whether enforcement strategies promote public safety or unnecessarily undermine community trust and cooperation. Officer evaluations and promotions depend to an inordinate degree on “productivity,” meaning the number of citations issued. Partly as a consequence of City and FPD priorities, many officers appear to see some residents, especially those who live in Ferguson’s predominantly African-American neighborhoods, less as constituents to be protected than as potential offenders and sources of revenue.

It’s also worth pointing out, perhaps not quite as a side note, that the businesses that were looted are convenience stores and check cashing stores, not Whole Foods and Bank of America. That, in itself, is part of the message.

Because even to the extent that it is not intended as a political message, what’s happening in Baltimore, like what happened in Ferguson, is a message. It is telling us what happens when you put your fellow human beings in, not merely difficult situations, but situations that are devoid of hope. It is not mere poverty that we are talking about here, but the disastrous consequences of centuries of personal experience teaching people that the civil society is not for them, does not protect them, and indeed is actively out to weaken, confine, rob, and ultimately destroy them.


Back when Ferguson exploded, I bookmarked this nice bit of writing from a few years ago, by Tumblr user ladycyon. It’s about, of all things, “Dog Whisperer” Cesar Millan’s theories of dog psychology. The author thinks Millan reads dogs all wrong, based on old research that did not take into account the impact of environments on complex mammal brains. This section, in particular, struck a chord with me:

The majority of Millan’s theories stem from research done on wolves “in the wild.” The problem with this is that for the majority of the last hundred years, up until 1975 (the year wolves gained endangered species protection from the government) it’s been difficult if not nearly impossible to find a wild wolf pack due to extensive efforts to eradicate the species. In an article featured by the Canadian Journal of Zoology, David Mech writes, “Most research on the social dynamics of wolf packs, however, has been conducted on wolves in captivity. These captive packs were usually composed of an assortment of wolves from various sources placed together and allowed to breed at will,” (Mech, 2). This meshing of random unrelated individuals created a very different social dynamic than those found in wolves in the wild; specifically concerning the occurrence of fights for dominance.

Adult wolves placed in a precarious social situation, will fight with each other, for control of food and resources, and – supposedly – rank in the pack, the strongest, most ferocious animals coming out on top. This is where the concept of an “alpha” wolf stemmed from . . . . The problem with this is the fact that wolves in the wild do not form packs in this manner. Mech writes: “Rather than viewing a wolf pack as a group of animals organized with a “top dog” that fought its way to the top, or a male-female pair of such aggressive wolves, science has come to understand that most wolf packs are merely family groups formed exactly the same way as human families are formed . . . .” [T]hese family groups do not compete for dominance. The parents become the leaders of these groups, the pups following the parents naturally and learning from them. In other words, there are rarely, if ever, fights for dominance amongst wild wolves inhabiting the same pack. To base a dog training theory on this faulty concept of wolf behavior is bad science, yielding inaccurate and ineffective results.

Increasingly I think this is almost word-for-word true of humans as well. Put humans in precarious social environments, and they tend to behave differently — more violently, more aggressively, more fearfully, more selfishly. They “choose” short-term, risky survival strategies over more “rational” long-term planning.

We tend to think of acts like rioting and especially looting as the making of rational choices, but my suspicion is that the behavior changes tend to be sub-rational, a whole different pattern of responses that acts as a kind of emergency override of our more normal responses. Like wolves, we humans have one set of (cooperative, pro-social) behaviors that emerges in environments where those behaviors are likely to be rewarded. But we have another set that emerges in environments that are chaotic, tipped against us, or otherwise sufficiently unrewarding of good citizenship.

For the environment to foster pro-social behavior, then, I think it has to convince the individual that there is enough fairness and enough predictability in his environment that his pro-social actions — generosity and forgiveness, self-sacrifice, obedience to authority and moral codes, forbearance from violence, etc. — will have a meaningful effect. He has to believe that pro-social behavior on his part will generally invite the same from others, allowing for occasional mistakes. When that belief is lacking, a person (or any intelligent, social mammal!) feels what ladycyon calls “precarious.” And that feeling of precariousness, I think, might explain a lot of anti-social or seemingly irrational behavior.


I think Jay Smooth is getting at this when he talks about rioting in Ferguson and people “reaching their limit”:


But also, Ta-Nehisi Coates is approaching the same thing when he says that “having a boot on your neck, while deeply tragic, is not an ennobling experience.”


Baltimore is teaching us, if we can hear it, that contrary to the mainstream liberal narrative, the periodic eruptions of violence that arise in the protest environment are not the work of “bad apples,” of evildoers sneaking into what would otherwise be a festival of Zen calm punctuated occasionally by the joyous singing of spirituals. (Though if we think the first civil rights movement consisted solely of such, we misremember history.) Rather, violence and anti-social behavior are the natural reaction of a significant minority of people on the ground in these communities to their lived experiences.


There are several possible explanations for what is happening in Baltimore. One is the racist explanation — that black people are somehow more prone to violence and theft than other people. (Or the slightly more sophisticated cousin of this idea, which is that black people are such rubes that they’ve been misled by “race hustlers” into a resentment all out of proportion to their own experiences.) Apart from being the racist (in the most literal sense) explanation, it is also deeply contradicted by the long history of white political rioting and destruction of property, whether for good or for evil.

Then there’s Radical Faggot’s explanation — that this violence and destruction is strategic. That might be partly true, but I think there’s something else mixed in there.

And then there’s my explanation — that if you put humans into a hopeless, desperate situation, if you teach them over a long period of time through brutal experience that they are outside the polity and the protection of the law, then they will react accordingly. Not as a matter of strategy, or even as a matter of rational choice or moral decisionmaking, but purely as a matter of an organism changing survival strategies in a near-apocalyptic environment.


That is not to say that the explosive rage of the “precarious” is not often channeled by and merged with political consciousness. Here is Marvin Gaye talking about the moment when his own anger and desperation came welling up in the 1960s:

I remember I was listening to a tune of mine playing on the radio, “Pretty Little Baby,” when the announcer interrupted with news about the Watts riot. My stomach got real tight and my heart started beating like crazy. I wanted to throw the radio down and burn all the bullshit songs I’d been singing and get out there and kick ass with the rest of the brothers. I knew they were going about it wrong, I knew they weren’t thinking, but I understood anger that builds up over years — shit, over centuries — and I felt myself exploding. Why didn’t our music have anything to do with this? Wasn’t music supposed to express feelings . . . ? I wondered to myself, With the world exploding around me, how am I supposed to keep singing love songs?

Gaye was never, to my knowledge, involved with rioting in the ’60s. He was a songwriter, and he was able to pour his heartache and rage into brilliant, beautiful political music. But arguably he died of the rage anyway, when his father shot him after the two men came to blows over a trivial dispute between Gaye’s parents. This is not to say, obviously, that every domestic murder is the result of systemic racism and the alienation of black people from the society in which they live. But it is to suggest that systemic racism and alienation will raise, dramatically, the number of domestic murders in a community.

Often, when black activists complain about police violence, the retort is “What about violence within the black community!?” What, indeed, about that? Where does it come from? Why does it persist, despite universal condemnation within the black community? (As a side note — even the Crips and the Bloods are calling each other brother and speaking out for peace in Baltimore today.) The two kinds of violence — police violence and intra-community violence — are elements of the same problem. When we truly accept black communities and black people into our body politic, I suspect we will see less violence of both kinds.


Anyway, I’ve now spent a thousand words to say what Langston Hughes said better, more than sixty years ago, in a few dozen. So I’ll give him the last word.

What happens to a dream deferred?

Does it dry up
like a raisin in the sun?
Or fester like a sore—
And then run?
Does it stink like rotten meat?
Or crust and sugar over—
like a syrupy sweet?

Maybe it just sags
like a heavy load.

Or does it explode?


insert tired Douglas Adams lizard joke here

Ah — the season for browbeating progressives into submission has arrived! Here’s an essay by Allen Clifton in the old familiar style:

Let me list a few numbers for everyone:
78
80
80
83
Those are the ages that Supreme Court Justices Stephen Breyer, Antonin Scalia, Anthony Kennedy and Ruth Bader Ginsburg will be when the next president is sworn in, respectively. The next president we elect (assuming he or she serves two terms) could very well be the individual who selects four Supreme Court Justices.

Now, in a world where we’ve all seen how powerful the Supreme Court can be concerning the laws that impact all of us, who on the left wants a Republican such as Jeb Bush, Ted Cruz or Scott Walker potentially selecting four Supreme Court Justices . . . ?

Liberals might not like hearing this, but it’s going to be Hillary Clinton or a Republican in 2016. It really breaks down to these two options:

Either get on board with Hillary Clinton, even if she’s not everything you’ve dreamed of. – or –

Whine and cry because Elizabeth Warren isn’t going to run, become apathetic, then let Republicans win the White House in 2016; likely replace four Supreme Court Justices over the following 8 years; start a war with Iran; ruin the planet; destroy our economy again; and undo all the good that’s been done these last 6 years.

Yes, it’s really that simple.

Well, I’m sure that chiding, superior, I’m-the-adult-here tone is really gonna get ’em to the polls, bro.


All right. I will try to be nice. There are, I think, three reasons not to be sucked into the Clinton vortex just yet. I will address them in descending order of how likely they are to convince other progressives.

First, the nomination is not yet hers. At this point in the 2008 cycle, Obama had not yet declared his candidacy, and although he had given a well-liked speech, he was not really a familiar national presence. (As John McCain pointed out at every turn, he had been a senator for, like, all of five minutes.) There are plenty of interesting Democrats out there, including Jim Webb and Martin O’Malley (who are already in Iowa!) and even Bernie Sanders, who has been hinting. Clifton argues that Sanders is just too OLLLLLDD, get out of the way, oldie, even though a President Sanders would be 83 at the end of a second term, which is… roughly the age Justice Ginsburg is now. (Justice Ginsburg, who is widely supported among liberals in her decision not to step her old ass down from the bench.)

Clifton also argues that Sanders is unelectable because he is a self-described “socialist”:

Even if you get past his age, which many wouldn’t, he’s also a self-described socialist. If you really think this country is going to elect a self-described socialist to the White House, you really don’t know much about politics.

I don’t know, man. 40% or so of the electorate thinks that any Democrat is basically Stalin, and Republicans will inevitably paint any Democratic candidate as a state socialist, because that’s the playbook. There hasn’t recently been a charismatic populist in a presidential race who was willing to own the “socialist” label, instead of scurrying away from it like a fearful ninny. Americans may be suspicious of socialism (the name, not the practice), but they’re even more suspicious of people who are cowardly. Voters can smell fear, and they admire straightforward conviction. I don’t know how much this term would hurt Sanders if he owned it and explained what he meant by it, early and often, in populist terms.

But even if Sanders is unelectable, there may be other Democrats who aren’t, and who tick at least some progressive boxes better than Clinton does. I don’t know why we have to assume, without discussion, that she gets the nomination.


The second reason I don’t think I have to roll over for Clinton is that we don’t know who the Republican nominee will be. I am not a single-issue voter, but to the degree that I have a single issue that dominates all others, it would be my generally anti-war position. I am not a pacifist, but I am an anti-interventionist, I favor dramatically limiting executive authority to wage undeclared wars, and in general I’m a follower of Smedley Butler. Hillary Clinton is, to say the least, a hawk. She voted for the authorization of force for the Iraq War, did not vote for an amendment that would have increased opportunities for a diplomatic solution, and never really apologized for any of it. (I mean, for God’s sake, Montresor — Andrew fucking Sullivan apologized.) She also has nice things to say about Henry Kissinger, has consistently supported militarization of the drug war, and has provided public and backroom support for creepy and dubious foreign regimes, like the one that ascended to power in Honduras in 2009.

Meanwhile, the Republican field so far includes someone who at least claims to be much less warlike — Rand Paul. Paul is an unlikely candidate in today’s Republican Party precisely because he’s taken some dovish stances (Lindsey Graham is coming after him for it). As I wrote last week, Paul has already shown a willingness to capitulate to military-industrial interests, which makes his anti-war cred open to suspicion. But suppose he somehow seizes the nomination, and further suppose that he makes a less interventionist, less violent foreign policy an issue in his campaign. What then? I would have the opportunity to vote for a major party candidate whose first instinct is not to assert military dominance over every corner of the world. Would I have to consider that? Yes, I think I would.

Of course, Paul comes with his own zany baggage, such as wanting to eliminate the Department of Energy and privatize everything that’s not bolted to the floor. President Paul would be very bad on, e.g., the social safety net and abortion. Weirdly, for a supposed “libertarian,” he is also not great on immigration. It should also be said that, apart from Supreme Court justices, presidents appoint the people who run executive agencies, and that matters, too; for example, federal engagement with Indian tribes has surged under President Obama because it was an executive priority. On the other hand, Paul would be good, probably much better than Clinton, on things like drawing down the drug war and restraining police violence and warrantless surveillance. And to the extent that many of his domestic policies are terrible, President Paul would be constrained to some degree by Senate Democrats — and the same thing is true of his Supreme Court choices.

But presidents have wide latitude to wage war (and the warlike ones always seem to take even more latitude than they actually have), and the human suffering of people outside our borders should matter to progressives, just as the human suffering inside our borders matters. Al-Jazeera America has a good op-ed this week reminding us of the scale of misery the United States created with our misguided wars in Iraq and Afghanistan:

The report estimates that at least 1.3 million people have been killed in Iraq, Afghanistan and Pakistan from direct and indirect consequences of the U.S. “war on terrorism.” One million people perished in Iraq alone, a shocking 5 percent of the country’s population. The staggering civilian toll and the hostility it has engendered erodes the myth that the sprawling “war on terrorism” made the U.S. safer and upheld human rights, all at an acceptable cost.

As the authors point out, the report offers a conservative estimate. The death toll could exceed 2 million. Those killed in Yemen, Somalia and elsewhere from U.S. drone strikes were not included in the tally. Besides, the body count does not account for the wounded, the grieving and the dispossessed. There are 3 million internally displaced Iraqi refugees and nearly 2.5 million Afghan refugees living in Pakistan.

I think there’s a lot to be said for not repeating that kind of thing in the coming decade. Fortunately, another debacle on that level seems unlikely to happen in the near future. But I am unconvinced that Hillary Clinton would not, say, get into armed conflict with Iran — or even just engage in the hinky shit we always get into when playing world policeman.

There is, of course, another big elephant in the room, one that will affect human happiness or misery around the world for decades to come, and that’s climate change. Paul is a libertarian-ish Republican politician, so he’s probably never going to win the Ed Begley Lifetime Achievement Award. But his position on climate change is apparently evolving — he recently voted for an amendment stating that climate change is real and that human activity contributes to it, which is more than you can say for Ted Cruz or Marco Rubio. And he told Bill Maher that he’s “not against regulation,” citing the Clean Air and Clean Water Acts, which practically makes him green for a Republican. Progressives are right to be skeptical of Paul’s change of heart, but at least he’s moving, and in the right direction. (So far, however, his suggested solutions seem pretty lackluster — deregulating natural gas, for example…?)

In all likelihood, I’d vote for Clinton over Paul because of her much stronger take on climate change (though O’Malley and Sanders are better still) — that is to say, I’d be willing to risk the possibility that she’d get us into war, be lethargic about dismantling the drug war, and do nothing at all about the surveillance/security state, in order to have a better shot at preventing serious planetwide ecological disaster.

But that’s a pretty grim lesser-of-two-evils calculation: voting to take two steps back from the worst of climate change while continuing to take a step or two forward in state violence is not a cheering thought. I take the anti-war and anti-violence mandate very seriously, and that’s a commitment I don’t think I share with Clinton, and might share with Paul. It remains to be seen whether he is at all serious about a non-aggressive foreign policy, whether he is actually a complete moron or just prone to putting his foot in his mouth, and whether he can win the nomination.

(Also, to go back to the Democratic field for a moment: Jim Webb was right on Iraq at the time, which is a rare quality, and seems otherwise a pretty unobjectionable liberal. Soooo… why Clinton again?)


The third factor weighing against a vote for Clinton is the possibility of a third-party vote. I’ve argued before that a third-party vote is not necessarily a wasted vote: if you have minority viewpoints, sometimes the only way you get traction in a coalition party (and both our major parties are coalition parties) is to be willing to walk out and deny the party managers your support. Tea Partiers grasped this, which is why they were able to steer the Republican Party, however haphazardly, toward their policy preferences. They lost some elections and won some, and not all of their ideas found welcome in the mainstream, but indisputably they dragged the party in a certain direction. Progressives should at least consider this basic principle of political life, even while acknowledging that it could result in losses for progressive ideals in the short term.

The great defining myth (in the social and psychological sense) for progressives about third-party candidates is that Nader Cost Gore The Election In 2000. I’m not entirely convinced, for reasons that others have articulated here and here, but let’s assume that it’s true. So what? That is in fact the point of a protest vote — to hurt the mainstream wing of the party. The fact that something terrible happened in this case, way out of proportion to what should have happened, because God or Mighty Thor reached down and gave us a 9/11, does not mean that denying the mainstream wing of our party the opportunity to rule will always and forever result in such calamitous developments.

Consider, for example, Theodore Roosevelt spoiling the 1912 race for the Republicans. Woodrow Wilson won handily, and Wilson was more progressive than Taft, the Republican nominee. Additionally, Roosevelt’s Progressive Party, by being in the race, was able to push ideas like “giving women the right to vote, the abolition of child labor, minimum wages, social security, public health standards, wildlife conservation, workman’s compensation, insurance against sickness and unemployment, lobbying reform, campaign finance reform and election reform” to the front. And by laying the groundwork for a strong, independent progressive wing of the Republican party, Roosevelt and the Progressives arguably made a bipartisan New Deal possible.

Of course it helped that basically everybody in the race that year (including, to a lesser degree, Taft) was some flavor of progressive — except Eugene V. Debs, who went Full Socialist and got 6% of the vote (see, Sanders??). When everybody is pretty close, splitting the vote arguably matters less. Still, 1912 shows, at the very least, that when the moment is right (I don’t know that it is yet, but I do think change is a-comin’), a spoiler candidate is not a bad thing and can even prepare the party and the country for some very good things.

And sometimes third-party candidates do not swing the election but do meaningfully shape the debate. Consider Ross Perot, who did not swing the country for Clinton but did have a lasting influence (not for the better, in my opinion, but an influence) on the nation’s debt debate, putting the fear of God into voters about the national deficit and making it a viable issue for the Republicans.

And then there was George Wallace, who came close to throwing the 1968 election to the House of Representatives and foreshadowed the Republican Party’s turn to being the party of class resentments, racism, hippie-punching, and anti-intellectualism. (Not to say Nixon didn’t hold his own in those departments.)

Still other times (most times, in fact) third-party candidates have had little or no effect on anything, and are remembered, if at all, as embarrassments or non-entities.

My best guess is that every election is different, probably even sui generis. I’m not willing to let the fact that a third-party candidate (may have) brought us a heap of disaster one time poison me on third-party candidacies in the future. Other people’s mileage may vary, and I’m comfortable with that. But I don’t think it’s unreasonable to say that there could easily be situations, now or in the future, where voting for a third-party candidate moves a certain agenda forward, changes the nature of a political party, and does not bring about an Iraqpocalypse (even if it does come with a certain price tag in other respects). I don’t know if that will be this year (I doubt it), but I go through the thought experiment anyway, because I think it’s important, for the health of the Democratic Party and of progressivism, that progressive voters be able to say it is possible to walk away, and to let things burn a bit, in order to reach a greater good.


So those are my three reasons for not being Ready For Hillary. One is what I hope for — primary challengers to either sharpen Clinton’s left side or replace her altogether. One is the Faustian bargain I’m willing, for now, to at least contemplate. And one is the Thing We Are Afraid Of, which I think we should not fear quite so much.
