Stumbling toward wisdom

If you’re like me, you learned a patriotic song called “My Country, ‘Tis of Thee.” And then later you probably learned that the tune is taken from the British anthem “God Save the Queen” (or “God Save the King,” depending on the presence or absence of a Y chromosome in the reigning monarch at any given time). And for some reason thinking about these two songs has helped me clarify some of my thoughts about religion that I’ve been struggling to put into words.

In addition to sharing a tune, the songs are very similar in emotional terms. Both arouse and celebrate a sense of national pride, with a very Anglo-American kind of self-congratulatory zeal. Both seek to engage the listener with something intangible, precious and transcendent.

Here’s what struck me as interesting. “My Country, ‘Tis of Thee” is directed at the nation as an abstract entity, a “sweet land of liberty,” while “God Save the Queen” identifies the nation with an actual person, the reigning monarch. And although this seems like a large difference – between a largely metaphorical construct on one hand and, on the other, an actual 88-year-old woman with a fondness for Corgis – the fact that Queen Elizabeth II actually exists is not really that important. The song is not trying to convince the British to love their country because of the personal attributes of any particular monarch. The monarch serves as a focal point for a person’s patriotic (or unpatriotic) feelings, but the song could just as easily be addressed to “Britannia” or some other fictional construct. Indeed, for purposes of the song the Queen is essentially a fictional construct. If the British someday stopped having a monarch, they would not stop having patriotic feelings. They would just have to find different rhetorical devices with which to express them.

Both the “sweet land of liberty” and “the Queen” are simply ways of talking about aspects of a national character that can’t be fully described in concrete terms. And obviously people listening to either of these songs will have different feelings about these abstract concepts, but the difference is not a function of which construct is used. Two British people listening to the exact same rendition of “God Save the Queen” might have diametrically opposed reactions, even though they are using the same focal point to process their patriotic (or unpatriotic) feelings. Conversely, some Americans listening to “My Country ‘Tis of Thee” might have feelings very similar to those of their British counterparts, even though the focal point of that song is a fictional construct rather than an actual person. In each case, the identity and attitude of the listener are vastly more important than which rhetorical device they happen to be using.

A similar dynamic seems to apply to religion. I’m not sure that it actually matters that much whether you accept the premise that God exists. And I say that as a fairly committed atheist. But I find that whenever I am thinking about the really important intangible questions – things like justice, compassion, equality, and forgiveness – my thought process could easily be described in religious terminology. Roughly speaking, I am trying to imagine how a loving, omniscient intelligence would deal with a particular problem. Or to put it another way, “What would Jesus do?” I view this thought experiment as a hypothetical exercise, but I do not think that really changes the result. If we are going to ponder abstract questions at all, we have to project ourselves into some perspective that transcends our individual experience to some degree. The precise terminology that we use to describe this contemplation is less important than we seem to think.

John Rawls famously employed the image of a “veil of ignorance,” from which a person could make theoretically unbiased choices about how to structure society. Rawls did not really believe that such a thing was possible, but he used it as a way of articulating ideas about fairness and equality. If, centuries from now, a group of people got confused and came to believe that Rawls’ veil of ignorance really existed in the distant past, like the Golden Fleece, would that make any difference to its value as an intellectual tool?

We spend a great deal of time talking about what focal points people are using: Christian, Muslim, secular, and so on. The implication is that the choice of a particular religious or philosophical focal point makes an important difference in the result, but I don’t see any strong evidence for that view. People working within an ostensibly Christian or Muslim worldview have produced staggeringly different approaches to ethical and metaphysical questions, and there does not seem to be any consistent correlation between the religious framework and the result. Strict, authoritarian personalities tend to adopt strict, authoritarian philosophies, whether they find them in the New Testament or the Koran or a purely secular book. Tolerant, liberal personalities tend to find tolerant, liberal philosophies in the same books.

And none of these people are being dishonest. The texts support a wide range of different conclusions, depending on what one chooses to emphasize. Robert Wright wrote an excellent book called The Evolution of God exploring why this is the case. The short version is this: Any religious tradition that survives will go through different periods of war and peace, of scarcity and plenty. Sometimes members must be rallied to defend the group, sometimes the group needs to make peace. These changing conditions end up being reflected in the group’s oral and written traditions, meaning that later generations will have plenty of material from which to call for either belligerence or conciliation, orthodoxy or tolerance. Just within the Gospel of Matthew you can choose to quote the sentence “I did not come to bring peace, but a sword” or to meditate on the words “Blessed are the peacemakers, for they shall be called sons of God.” Seek and ye shall find. Or, to quote a member of my personal pantheon, “Do I contradict myself? Very well then I contradict myself. I am large, I contain multitudes.”

This does not make religious texts meaningless or unworthy of consideration. These books contain real wisdom about how to live, acquired and refined over centuries by often brave and brilliant people. Dismissing all of that as “superstition” or “fairy tales” is profoundly short-sighted. Whether or not you consider a text to be literally true, you may find guidance and inspiration there. Shakespeare was convinced that people’s personalities were guided by the “humours” in their bodies. That error does not mean that Shakespeare’s thoughts on the human condition should be disregarded.

I started thinking about all of this because of a series of conversations with my older brother Steven. He believes in God, and I do not. And yet we have both been surprised by how little difference that makes to our views on ethical and philosophical questions. Our approaches to these questions are strikingly, almost eerily similar. If the choice between a religious and secular worldview were determinative, we should have much more divergent opinions. What seems to be happening is that we are each picking out a very similar tune, using different instruments. Certain chords resonate no matter how they are produced.

Scott Adams wrote an excellent piece along these lines, describing religion as a form of “user interface” with reality:

Today when I hear people debate the existence of God, it feels exactly like debating whether the software they are using is hosted on Amazon’s servers or Rackspace. From a practical perspective, it probably doesn’t matter to the user one way or the other. All that matters is that the user interface does what you want and expect.

This pretty much sums up my current attitude toward religion. If someone has a philosophy of compassion, justice and honesty that is derived from a holy book, I will gratefully accept whatever wisdom I can glean from it. The substance is much more important than the source.


Moral imagination

I highly recommend reading this article, which urges Americans to think about how our military actions are seen by people in other countries. But the point it’s making is not confined to foreign policy, and it’s summed up in these words: “The first step in moral reasoning, after all, is the imaginative act of placing oneself in another’s shoes.” We usually call this quality “empathy,” but that can lead to confusion. For various reasons, “empathy” strikes many people as a touchy-feely, therapeutic term, like “self-esteem.” They instinctively feel that a call to “empathy” is just a smoke screen for bleeding-heart sentimentality and rationalization. (This suspicion was voiced in characteristically despicable fashion by Karl Rove, when he accused liberals of lining up to “offer therapy and understanding” to the 9/11 attackers.)

A related mistake that people make about the role of empathy in morality is in thinking that empathy requires us to excuse or ignore immoral behaviors by the people we are seeking to understand. This is exacerbated by well-meaning but vapid sentiments like “To understand all is to forgive all.” C.S. Lewis wrote a devastatingly brilliant response to this misconception in an essay on forgiveness, arguing that it was a mistake to imagine that “forgiving your enemies means making out that they are really not such bad fellows after all, when it is quite plain that they are.” Reflecting on the injunction to “love thy neighbor as thyself,” Lewis asked “how exactly do I love myself? Do I think well of myself, think myself a nice chap? Well, I am afraid I sometimes do (and those are, no doubt, my worst moments) but that is not why I love myself. In fact it is the other way round: my self-love makes me think myself nice, but thinking myself nice is not why I love myself. . . . In my most clear-sighted moments not only do I not think myself a nice man, but I know that I am a very nasty one. I can look at some of the things I have done with horror and loathing. So apparently I am allowed to loathe and hate some of the things my enemies do.”

It is entirely possible, and sometimes essential, to empathize fully with another person’s motives while disagreeing with their actions. But the act of moral imagination is still a vital exercise. It is far too easy — and often downright enjoyable — to condemn others with different views without any real effort to understand their position. It’s very satisfying to dismiss another person’s beliefs as thinly veiled envy or greed or prejudice. But that sort of condemnation does not really deserve to be called moral reasoning. Rejection of the unfamiliar is practically the default setting of the human mind. If we’re seriously trying to assess moral claims, we should be stretching our intellectual capacities a little further.

And the essence of that moral reasoning — the essence of any moral reasoning worthy of the name — is a genuine effort to understand another person’s point of view. Not just in a perfunctory way, where we mumble “I get where you’re coming from” before launching into our predetermined argument, but in a real way that may impose some psychological cost. We should actually try to ascertain the other person’s coordinates on the wider moral map, not just their opposition to our own perspective.

That may require translating some things that strike us as obviously bad or morally neutral into terms that relate to our own values. For example, when I was thinking about religious objections to birth control, I tried to put myself in the position of people with sincere moral opposition to birth control. This was difficult, because I think birth control is an unalloyed good both practically and morally. So I tried to construct a thought experiment in which I was being asked to pay for something I really did have strong moral objections to, such as “reparative” therapy for gay people. The exercise did not change my ultimate conclusion: I still think it’s appropriate to require birth control coverage. But I think I understand the costs of that requirement, in terms of restrictions on individual conscience, a little better. And it also made me more aware of the connection between the birth control arguments and other issues of individual conscience with different ideological valences (such as the justified outrage over this Texas school’s attempt to bar a Native American boy who wore his hair long for cultural and religious reasons).

In addition to cultivating empathy, these acts of moral imagination create a healthy sense of humility. When you are standing in another person’s shoes, you can get a much clearer view of yourself. And the interaction of empathy and humility can produce a useful feedback loop, making us more understanding of others’ positions and more critical of our own. (This is even more important in light of the mounting evidence that our brains are heavily biased in the opposite direction). Marcus Aurelius had this advice, which remains highly useful: “Whenever you are about to find fault with someone, ask yourself the following question: What fault of mine most nearly resembles the one I am about to criticize?”

Mothering and fathering

If you’re part of a family that doesn’t include female parents — i.e. you’re a single dad, or in my case a double dad — you acquire certain sensitivities about language. (I assume the same is true in families without male parents, although the experience is probably different because cultural assumptions about gender are different). The word “mother” is soaked with emotional connotations. The word conjures up warmth and comfort, and the phrase “motherless child” is an almost universal metonymy for inconsolable sorrow.

One of my worst experiences as a parent was applying for my son’s Social Security number. We were advised to do that, just to wrap an extra layer of governmental recognition around our newly naturalized American citizen, and the final step was to go in person to a Social Security office. For legal reasons related to Texas being a horrible place, I was our son’s sole legal parent until we moved to Oregon. The woman at the Social Security office – who seemed to have answered a casting call for “hostile civil servant” in a 1980s sitcom – flatly refused to believe that I was the sole parent. She did not care what the adoption decree said; she glared at me and said “No one gets born without a mother.” That was undeniably true, but it’s equally true that no one gets conceived without a father. And I looked around at the waiting room, full of women and children with no other male in evidence, and doubted that these women would face a similar lecture on biology. My unescorted male presence was inherently suspect. I eventually gave up and went to a different Social Security office, where the person on duty was capable of reading and understanding legal documents.

It’s enough to give one a complex, but it also leads to insight. One of my stock answers, when asked how we celebrated Mothers’ Day, was to remind people that “mother” was a verb as well as a noun, and that men were capable of mothering. It’s kind of a defensive posture to strike, and I’ve mellowed over the years,* but it did make me aware of the curious discrepancy between “mother” and “father” as verbs. As a verb, “mother” is just as evocative as the noun, calling to mind acts of caring, nurturing, healing, making it better.

The verb “father” seems to refer exclusively to the biological fact of paternity and often in a negative way, as in “He fathered ten children out of wedlock.” Even when imagining the most stereotypical father-child interaction – playing catch with your son in a park on the Fourth of July – it sounds unnatural to say “He fathered well that day.” As a hopelessly amateur linguist and anthropologist, I’d guess that this reflects some fairly deep cultural assumptions about gender and parenting. Being a mother is seen as a job, a role, a responsibility. Being a father is seen as a status, a biological fact.

As a citizen in good standing of the 21st century, my first impulse is to stop using either word as a verb and substitute “parenting.” But that just doesn’t feel right. It has a stilted, bureaucratic feel, like the laborious use of “his or her” or “chairperson.” And it obscures something real about different modes of being a parent. If you are a parent – whatever your gender identity or marital status – you will have to play both of these roles at different times: the nurturer and the disciplinarian, the comforter and the lawgiver, the advocate and the judge. Different people will be better or worse at different aspects of these roles, and in families with more than one parent, people may gravitate to roles they are better at fulfilling. None of this correlates on a consistent basis with X or Y chromosomes.

In this context, with full awareness that the male and female terms are being used metaphorically, it would be nice to see the verb “to father” acquire some emotional connotations. If “mothering” refers primarily to nurturing children and meeting their emotional needs, perhaps “fathering” is concerned with aspects of parenting that are equally loving but more geared toward preparing children for the outside world. Guiding children toward self-discipline, teaching them what they need to know, helping them navigate other people’s needs and expectations are all aspects of this mode of parenting. To get really patriarchal about it, you could start with Proverbs: “Train up a child in the way he should go, and when he is old, he will not depart from it.” The exclusively masculine pronoun is annoying, but the underlying idea has value. Anyone of any gender who is raising a child will be called upon to do both “mothering” and “fathering,” and we should all try to do it well.


* In the course of writing this post, I noticed with some mild satisfaction that I’ve become less prickly and defensive about this sort of thing over time. The first time Chris and I went to the pediatrician, the form had spaces for “Father” and “Mother,” and Chris had no problem writing his name in under “Mother.” I was so paranoid about being accurate that I crossed out the letter “M” so the form said “Father” and “other.”

Accusing a politician of “plagiarism” is just dumb

Monica Wehby, the right-wing neurosurgeon running for the U.S. Senate in Oregon, is being accused of “plagiarizing” her health care plan from a survey conducted by Karl Rove’s Crossroads USA group. Wehby should be embarrassed by this, but not because she committed “plagiarism.” She should be embarrassed because the plan consists of vague and trivial conservative buzz words. It would be a terrible health care plan even if I were convinced that Wehby had written the whole thing out with her own No. 2 pencil during an all-nighter. It’s especially embarrassing because Wehby’s whole campaign is premised on the idea that her experience as a surgeon makes her better able to set health care policy. That was never really true, and this latest episode just underscores the point.

But the problem is not plagiarism, and the whole practice of accusing politicians of plagiarism is deeply silly. Plagiarism is objectionable in certain settings for two primary reasons. First, in a typical academic setting plagiarism means that a student has not done work that was assigned, which defeats the purpose of academic assignments. Second, in a professional setting, plagiarism is akin to stealing another person’s work product without compensation or credit. Neither rationale applies with much force in the political realm.

Although we hold politicians responsible for all the plans and policies issued in their names, no one seriously thinks that a Senate candidate is sitting at home composing all this material by themselves. Candidates rely on the work of advisers, staffers, party organizations and other sources, and that’s completely appropriate. Copying health care talking points from an ideologically compatible group is no more objectionable than adopting language drafted by one of your advisers. The important thing is that the candidate is accountable for the substantive positions taken, not that the candidate be especially good at writing bullet points.

Our current vice president was once pilloried for copying speeches from a British politician, and our current president was attacked by his main rival (who is probably our next president) for lifting some lines from a supporter’s stump speech. Both of these episodes struck me as absurd at the time. As a former speechwriter I’d estimate that the average politician composes about 10 percent of the words they say in public. Political rhetoric is a constant process of borrowing ideas, images and turns of phrase to advance policy goals.

Nor does political plagiarism raise any real concerns about “stealing” other people’s work product. The entire apparatus of think tanks and advocacy organizations exists to feed ideas into the political sphere. They want to have their ideas “stolen” and spread by candidates and elected officials. They are parking their ideas on the side of the road and leaving the keys inside. At worst, some activists might be annoyed at not receiving proper credit for coming up with an idea, but that strikes me as impolite rather than immoral. And certainly, if a politician is cribbing ideas from a white supremacist newsletter or a Communist front, that’s worth mentioning. But the problem lies in the provenance of the ideas, not the lack of originality.

Conscientious objections

I’ve waited a while to write about the Hobby Lobby case, because I think the underpinnings of the decision are much more complicated than most critics seem to think. For the record, I think the case was wrongly decided, but it draws from an intellectual tradition in American law that I think is noble and vitally important. And liberals, as a rule, should embrace that tradition.

I start with this sentence, which I consider to be the most eloquent and lucid summary of our Constitution ever written:

If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.

That was written by Justice Jackson in West Virginia Board of Education v. Barnette, holding that a school district could not force Jehovah’s Witnesses to recite the Pledge of Allegiance in violation of their religious beliefs. The first time I read that case, in law school, I was floored by that sentence. It resonated with me at a very deep level, and it still does. It called to mind one of my favorite poems, in which Lord Byron celebrates the “eternal spirit of the chainless mind.”

The image of the “fixed star” captures the amalgam of values that unites and animates the different strands of the First Amendment. And although freedom of religion is part of the equation, the principle is broader. No government official has the right to tell anyone what to believe, period. (I also love the phrase “high or petty.” In the course of dealing with civil rights and civil liberties cases against school boards and local governments in Texas, I found that low-level officials were “petty” in every sense of the word, anxious to impose their values on anyone under their jurisdiction.)

I am convinced that the spirit of this passage was what saved Roe v. Wade from being overturned in 1992. At the time, it seemed inevitable that the Reagan appointees would succeed in rolling back abortion rights entirely. According to some later reports, Justice Kennedy was initially willing to vote with Rehnquist and Scalia but changed his mind. (Sadly Justice Kennedy has become much less deferential to a woman’s right to choose in the decades since 1992). He joined a plurality opinion preserving Roe, with a passage that echoes Justice Jackson:

At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life. Beliefs about these matters could not define the attributes of personhood were they formed under compulsion of the State.

Justice Scalia has mocked this as the “sweet mystery of life” passage, but it expresses an essential truth about the relationship between governments and individuals under our Constitution. Justice Kennedy returned to this idea in Lawrence v. Texas, in which he wrote an opinion holding that Texas could not criminalize same-sex relationships: “The State cannot demean their existence or control their destiny by making their private sexual conduct a crime.”

Hobby Lobby still strikes me as an ill-advised decision. As I’ve written before, it’s analytically wrong to view the employer as “paying for” birth control, since the health insurance is part of an employee’s compensation. It’s an even further stretch to say that the owners of a corporation that’s paying for insurance are being forced to approve of an employee’s birth control choices. In addition, the decisions made by employers impose very real costs on their employees, taking the case further out of the realm of individual conscience. To use the example mentioned earlier, I would support allowing Quakers to refuse military service, but I wouldn’t let them fire or demote employees for serving.

Hobby Lobby is not a good decision, but it draws on principles that are worth protecting. Properly applied, religious or philosophical exemptions can serve important liberal values, even when those exemptions are extended to socially conservative views. I am not saying that individual scruples should win every close call, or even most of them. My only certainty is that such claims deserve more consideration than they receive from defenders of the liberal tradition.


The political spectrum

Although the spectacle of prominent politicians vilifying the President’s tan suit is weird, the conservative movement’s antipathy to neutral colors has a long history. Nixon, of course, loved to refer to his opponents as “pink,” but in truth conservatives have never had much patience for any pale shades. In Invisible Bridge, an account of Ronald Reagan’s 1976 campaign for the Republican presidential nomination, Rick Perlstein recounts a memorable bit of color-coded rhetoric. Reagan – who, let’s be honest, could rock a tan suit – sought to rally Republicans demoralized by Watergate and the bumbling of Gerald Ford with this rousing line:

Is it a third party that we need, or is it a new and revitalized second party, raising a banner of no pale pastels but bold colors which could make it unmistakably clear where we stand on all the issues troubling the people?

Governor Tom Kean, who gave the keynote address at the 1988 Republican National Convention, believed that the Democrats’ fondness for lighter colors should disqualify them from high office:

You see this flag behind me of red, white and blue? To us that symbolizes the “Land of the Free” and “Home of the Brave.” Well, you know [the Democrats'] media consultants in Atlanta said they didn’t think those colors looked too good on television. So they changed that red to pink, and they took that blue and they changed it to azure, and that white became eggshell. Well, I don’t know about you, but I believe Americans, Democrat and Republican alike, have no use for pastel patriotism.

And of course, no one who lived through the soul-crushing triviality of the 2000 presidential campaign can forget the endless attention paid to Al Gore’s decision to wear more “earth tones.” As the economy slowed, and climate change worsened, and extremists made plans to launch terrorist attacks on American soil, we all for some reason decided to talk about whether the Vice President of the United States should wear taupe.

And of course, in the aftermath of 9/11, bold colors have become a pervasive symbol of patriotic fervor.


It’s probably too reductive, but one is very tempted to draw connections to certain aspects of the conservative psyche. Some (admittedly controversial) brain research has suggested that people with conservative political leanings are less “tolerant of ambiguity” because of neurological patterns. This kind of rigid worldview would tend to gravitate toward unambiguous colors. When you see everything in black and white, beige is the enemy.

Studying ick-ology

I’ve written before about the role of disgust in moral judgments, particularly among traditionalists preoccupied with “corruption” of modern attitudes toward sex. I’ve had very little direct experience with these feelings: for example, I don’t want to participate in heterosexual sex, but the thought of it doesn’t creep me out. Through the magic of television, I recently got a glimpse into what the feeling must be like.

A friend introduced me to a very good Swedish science fiction show called Real Humans, about a society in which highly realistic robots are employed as servants and companions. Like all good science fiction, the show examines social and philosophical questions through the lens of its premise, and it set up a deliberately provocative scene involving a divorced woman who was intimately “involved” with her robot companion.

The actors playing robots are very convincingly made up to fall squarely in the uncanny valley – human but not quite. And during the scene, watching the woman snuggling with the robot, I experienced the full-on “ick” factor I’ve heard described so often. I even thought, without a trace of irony, “There’s a kid in the room, they shouldn’t be doing that.” I was not reflecting on the philosophical issues; I was having an immediate, visceral reaction to something that felt deeply unnatural.

I was aware of an obvious parallel with the reaction that some straight people have to same-sex relationships. Of course the situations are different: same-sex couples are composed of humans with free will and all that. But for purposes of understanding the feeling at a basic level, it felt comparable. And I came away feeling that we make two opposite mistakes in assessing this kind of instinctive aversion.

The first mistake, made by social conservatives, is to conclude that the feeling of disgust has moral weight, as an indication of the natural order of the universe. That does not really follow, particularly since social conservatives are at war with virtually every other “natural” sexual impulse. If instinct does not sanctify masturbation, instinct should not disqualify homosexuality.

But the socially liberal make an equal and opposite mistake, regarding the visceral reaction itself as a moral failing. The feeling itself seems largely involuntary, no more a moral choice than a dislike for cilantro. Yes, if indulged, it can lead people to adopt bigoted attitudes, but that is a separate issue. A person who supports equality despite being squeamish about same-sex relationships is arguably doing a better job of being open-minded than someone who has no negative reactions at all.

More generally, I think it’s a mistake to regard these immediate, unreflective reactions as an indication of someone’s “real” character. This stems from a largely inaccurate theory of mind, in which there’s an authentic “self” buried under layers of social convention. The human brain appears to be much more modular, and no part can really claim to be the one true self. Our overall character is the sum of many different impulses and judgments, and it’s a mistake to focus exclusively on one link in the chain of personality.

This was expressed beautifully by the novelist Helene Wecker in The Golem and The Jinni (a wonderful book which is well worth your time). In the book, the golem finds herself without a master and is taken in by a wise and compassionate rabbi. She is able to read people’s thoughts — including those of the rabbi — and finds them troubling. The rabbi responds: “A man might desire something for a moment, while a larger part of him rejects it. You’ll need to learn to judge people by their actions, not their thoughts.”

It’s foolish and counter-productive to make people feel guilty about thoughts they can’t help. We should be appealing to the “larger part,” the part that values equality and seeks justice, even for those whose lives seem so different.


Why the “libertarian moment” will never come

The first opinion column I ever wrote (circa 1986) was about how the political future was destined to belong to a “libertarian” philosophy, loosely defined as laissez-faire economics combined with tolerant social views. In retrospect, that prediction was way off base, produced by a kind of political solipsism common to teenagers. When you are surrounded by young, healthy, relatively affluent pot smokers, libertarian views are not exactly thin on the ground. Among the electorate at large, only about 5-10 percent of the population at any given time holds anything resembling a consistent libertarian philosophy, and even those few tend to break down along conventional party lines in practice. The Koch brothers, for example, are in favor of marriage equality — an ideological conviction that drives exactly zero percent of their prodigious political spending.

But the miracle of selection bias has once again brought breathless declarations of an impending “libertarian moment” that will finally drive the government out of our boardrooms and our bedrooms. Those claims have been pretty thoroughly demolished elsewhere, but what interests me lately is the idea that libertarianism may just be structurally incapable of sustaining a viable political movement of any size. I don’t think it’s an accident that the two major political parties embrace significant government intervention in different spheres.

Political movements require a lot of intellectual and emotional investment from people. They need to have goals, aspirations, milestones, heroes and villains. The libertarian philosophy is poorly equipped to satisfy those needs. As one goes down a list of economic and social problems, asking “What should our government do?” it becomes monotonous when the answer is always “Nothing.” Even when “nothing” is the correct response – for example, when one is asked what to do about an unwinnable conflict in a foreign land – it can be difficult to sustain. People tend to be biased toward active solutions, even when inaction is the wiser course.* When Harry Truman campaigned against Congress, he called it the “Do-Nothing Congress,” even though it had actually passed a record amount of legislation. But Truman was politically savvy enough to know that “Do-Nothing” was an effective epithet.

It’s not difficult to motivate political coalitions to fight government intrusion in some particular area (particularly when one’s donors are bearing the brunt of the intrusion). It’s not even that difficult to build coalitions around redirecting governmental action (e.g., away from fighting poverty and toward fighting terrorism). But it’s impossible to sustain a political coalition around the principle of doing nothing at all.

* This is not to suggest that I think libertarianism is right on the merits. I think capitalism is currently regulated too little, not too much. I’m disturbed by the private arsenals being built by gun fanatics. And even on social issues where I agree with libertarians, I sometimes find their thought process strange and off-putting. I frequently hear libertarian acquaintances who favor marriage equality, not so that loving couples can have legal protection, but because they think the government should get out of the business of recognizing marriages altogether. Thanks, I guess?




Manic Pixie Fed Chair

This article about Fed Chair Janet Yellen in Politico set something of a record: it managed to enrage me twice before I’d even finished the opening sentence. Here is that sentence, which would not have been out of place in Us Weekly:

A diminutive woman with a pixie haircut is deciding the future of the world’s biggest economy, and we don’t know what she’s really thinking.

Will we ever reach a point where someone can write a profile of a powerful woman that does not talk about her hairstyle? Alan Greenspan chaired the Fed for almost two decades, and I don’t recall ever reading one word about his hair. Even if Janet Yellen’s hair were somehow relevant to her job (spoiler: it’s not), you’d think a writer in 2014 would stop and ask “Am I falling into a sexist cliche here? Maybe I could focus on some more relevant detail to give my story a hook.”

Actually, I was annoyed even before I got to the manic pixie dream hair because of the word “diminutive.” Can we please get over the journalistic trope where we pretend to be surprised that a physically small person (often, but not always, a woman) is occupying a position of power? It’s not like you become Fed chair by completing a series of grueling physical challenges. You’re thinking of American Ninja Warrior. In this newfangled era, with all our modern conveniences, all sorts of people exercise political and economic power. It’s not a revelation that some of those people are short.

Alas, the article does not get much better. The words “diminutive” and “pixie” turn out to be a signal that the writer is going to paint Yellen’s (entirely mainstream) policy views as some sort of flighty, quirky set of attitudes. The writer acknowledges that Yellen is a “regular person” with a “warm smile” (everything seems to circle back to physical appearance). But the article makes her policies seem much more idiosyncratic and mysterious than they actually are. The fact is, all Fed chairs are guarded in their public pronouncements. Their choice of words can literally cause markets to crash. This does not make Janet Yellen a “mystery woman.” As far as I can tell, Yellen has been fairly clear and consistent about her approach: a focus on reducing unemployment in the near term and somewhat stricter regulations on banking. It’s the approach of a disciplined, capable professional, regardless of what her hair looks like.

(Thanks to Brian Zabcik for inspiring the title of this post)