
The expansive executive compensation literature has two camps: one camp believes markets generally work; the other believes they don’t. I am in the former camp, but I believe that markets, and the individuals who comprise them, make mistakes, and that those with power can sometimes use that power to serve their own selfish ends. The only difference between my views and those of, say, Lucian Bebchuk, is how pervasive we think those mistakes and abuses are. Prof. Bebchuk thinks managers are systematically overpaid and game the compensation-setting process so that it consistently turns out in their favor. He cites, for instance, the fact that managers earn “secret” profits on trades in company shares that do not show up in disclosures about pay, and he believes this is consistent only with a managerial power theory of CEO compensation.

In a paper just posted to SSRN, I examine Bebchuk’s claim empirically by looking at what happens to CEO pay when firms liberalize opportunities for insiders to trade their shares. If markets work reasonably well, the explicit pay of these insiders should fall, since their implicit pay is rising. This is what I find. Markets work. Not always. Not perfectly. But they work.
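To make the substitution logic concrete, here is a minimal simulation sketch in Python. It is not the paper’s empirical design; the numbers, and the assumption that boards hold total compensation fixed, are invented purely to illustrate the prediction that explicit pay falls where liberalized trading raises implicit pay.

```python
# Hypothetical sketch: if boards treat expected insider-trading profits as
# part of total pay, liberalizing trading (raising implicit pay) should
# reduce explicit pay. All numbers are invented for illustration.
import random

random.seed(0)

def simulate_firm(liberalized: bool) -> dict:
    total_pay = random.gauss(10.0, 1.0)  # board's target total compensation ($M)
    # Expected trading profits are higher where insiders may trade more freely.
    implicit = random.gauss(2.0 if liberalized else 0.5, 0.3)
    explicit = total_pay - implicit      # board adjusts salary/bonus/options down
    return {"liberalized": liberalized, "explicit": explicit}

firms = [simulate_firm(liberalized=(i % 2 == 0)) for i in range(1000)]

def mean_explicit(flag: bool) -> float:
    vals = [f["explicit"] for f in firms if f["liberalized"] == flag]
    return sum(vals) / len(vals)

print(f"liberalized firms: {mean_explicit(True):.2f}")   # lower explicit pay
print(f"restricted firms:  {mean_explicit(False):.2f}")
```

On these invented numbers, average explicit pay at the liberalized firms comes out roughly $1.5 million lower, which is the qualitative pattern the paper tests for in actual compensation data.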

And, if I’m right, this evidence makes a strong case for the laissez-faire view of insider trading most closely associated with the work of Henry Manne. If boards bargain with insiders about the profits they earn from informed trading, it is hard to see who is harmed by this conduct.

Ask any conservative what the problem with America is today, and the answer you will get is government spending. But ask that same conservative, or any conservative for that matter, what to do about it, and you will inevitably get a shrug of the shoulders. Politicians, including conservatives, simply cannot be trusted when they get control of the purse strings. The problem is a familiar one in law and elsewhere — it is called the pre-commitment problem. Political leaders can promise to cut spending, but can’t resist reneging on the promise once in power; governments can promise not to bail out banks, but know that they must when the manure hits the fan. (For instance, Fannie Mae bonds explicitly disclaimed any government guarantee, but the bailout of Fannie Mae continues to cost us tens of billions of dollars.)

There are solutions. The most famous is Odysseus lashing himself to the mast of his ship to resist the temptation to steer toward the Sirens’ song. What Odysseus did was raise the cost of any future action, thereby making it less likely. So how can we raise the cost of congressional profligacy?

We cannot just hold ourselves to the commitment to vote the bums out of office. This just moves the pre-commitment problem back one step and puts it squarely in our lap. Sitting here today, I might want to reduce the size of future government, but when the choice is to cut my benefits, it may be harder to vote that way. In addition, we might all collectively lament the growth in government (rising from less than 10 percent of GDP 100 years ago to over 40 percent today), but we might also all individually value the pork our representatives bring home to our district.

So what about a constitutional amendment setting a limit on the size of government? Our founders tried to do this, but instead of setting a dollar or percentage limit, they used enumerated powers. They thought telling the government what it could do (and what it implicitly could not do) would constrain the Leviathan. This worked for a time, but it was an imperfect pre-commitment device, and it has been eroded by two centuries of court rulings cutting the other way. Instead of telling the government what it can and cannot do, what about telling the government how big it can be?

I propose a new amendment to the Constitution:

“Spending by the federal government shall not exceed 25 percent of the Gross Domestic Product in that year, except in cases where Congress has declared war against another sovereign nation and such additional spending is essential to the defense of the Homeland.”

According to constitutional scholar Tom Ginsburg, this sort of constitutional provision would be unique in the world, which is odd since the growth of government is a universal problem. (Switzerland has a balanced-budget provision and a limit on tax rates that comes close.) But this wouldn’t be the first time that America has blazed a trail for solving an age-old problem.

Have you ever been tempted to buy a beggar a cup of coffee or a sandwich instead of giving money? If so, you have, like a young Anakin Skywalker, taken your first step toward the dark side of altruism. Don’t get me wrong, I’ve been there too. The reason I offered food instead of (money for) vodka is that I wanted to “help” the beggar. From my lofty perch (that is, sober, housed, and employed), I wanted to impose my values on him. Like a father choosing broccoli instead of ice cream for his kids, I thought I knew better what was good for the beggar — what he really wanted, if only his thought processes were rational.

At some level, this is sensible. If I am paying, either directly in the form of the handout or indirectly in the form of the obvious externalities from the beggar (crime, stink, and so on), then it makes sense for me to try to reduce these costs.

But the dark side of caring is the perversity of this control. Once we start thinking this way, the creep toward totalitarian nannyism is hard to resist. Once I am paying for your health insurance, I suddenly care a lot whether you get an abortion, have that gender-switching surgery you’ve always wanted, or, most ominously, eat that Big Mac for lunch instead of the salmon salad. In today’s New America, I suddenly really care how much junk food the people making less than $88,000 a year eat — I pay for every Dorito that crosses their lips. And, for the record, I hate this about me and about New America. (Evidence that this is our future comes from the UK, where 75% of respondents in a recent survey supported greater government control over individuals’ food choices.)

The problems with altruism are well documented. The IMF for years tried to control the internal policies of countries that it bailed out or loaned money to. These attempts were failures, both because the experts don’t always know what they think they know and because the meddling inevitably involves backlash, power grabs, corruption, and so on. (The IMF has abandoned these policies.) This instinct was also a source of the eugenics movement. Once we think of people as cost centers instead of autonomous individuals, the cost-benefit calculations can lead to some disturbing results. German posters from the eugenics era provide a nice example.

The battles ahead for New America are likely to be just as dirty. The battle over abortion in the Stupak incident is just a preview of what is to come as every interest group wanting to feed at the trough, remake America, press for rights it holds dear, and so on, heads for Washington to convince our dear leaders why the rest of the country should or should not pay for its pet project. Whatever the negative impact of my acting as a control freak toward my neighbor the beggar, it will be dwarfed by the impact of the nation acting as a control freak. At the individual level, the control we try to exercise might actually be a good thing. But multiply it by 300 million, centralize it in Washington, and unleash the forces of public choice on it, and watch the beginning of the end of our freedom.


Paul M. Bator Award

Todd Henderson — 20 March 2010

I loathe the Oscars, Golden Globes, and other award shows. Is there anything worse than a bunch of self-important blowhards congratulating themselves and blathering about how they are what makes the world a place worth living in? Well, perhaps a bunch of conservative students and law professors doing the same thing might be worse. So I found myself at the Federalist Society Student Symposium this year as the recipient of the 2010 Paul M. Bator Award. As you can see in the video of my remarks here, I wasn’t exactly sure what I was doing there, and I took a different lesson from the award than others did.

The Texas Board of Education recently decided to add F.A. Hayek to the high school economics curriculum. The New York Times reports:

In economics, the revisions add Milton Friedman and Friedrich von Hayek, two champions of free-market economic theory, among the usual list of economists to be studied, like Adam Smith, Karl Marx and John Maynard Keynes.

To the Times, this is evidence of the Board’s desire to put a “conservative stamp on . . . economics textbooks.” As usual, the Times gets it wrong.

Hayek is the most courageous and important critic of social planning, and if we are going to expose high school students to the poison of Marx, we must give them the antidote of Hayek. Hayek recognized the fallacy of central planning and its inevitable failure decades before anyone else. His book “The Road to Serfdom” should be required reading for any literate American. His ideas about the decentralization of knowledge, the important role heterogeneous preferences play in destabilizing attempts at social planning, and the link between progressivism and totalitarianism are some of the most important contributions to human knowledge of the past 100 years.

Economist, and my friend, Justin Wolfers disagrees. On the ever-interesting Freakonomics blog, Wolfers examines citations to Hayek in economics journals, and concludes the data “suggests that Hayek just doesn’t belong with Smith, Marx, Keynes, or Friedman.”

Others are coming to Hayek’s defense. See comments by William Easterly here.

I offered my own defense of sorts in a 2005 paper for the inaugural issue of the New York University Journal of Law & Liberty. I look at citations to Hayek and other famous “economists” in law journals and by judges. Hayek is the ninth most cited economist, behind only Mill, Smith, Coase, Becker, Stigler, Arrow, Marx, and Friedman. Hayek has been quite influential on law, and, like Mill, Smith, and Friedman, he is accessible to high school students wrestling with big-picture ideas about economics and society.

I do agree with Wolfers’s skepticism about school boards generally and some of the specific decisions of the Texas Board. I also agree that Hayek would be skeptical about attempts to impose knowledge from above. But, since these decisions must be made, it is nice to see some balance being brought to economics education.

Of course, much of this shouldn’t matter. Education starts at home, and I can say that no matter what the high school curriculum is at the University of Chicago Laboratory Schools (which my kids will attend), they will learn about Hayek in the Henderson House.

When I was a student at the University of Chicago Law School, our president lectured there. I didn’t take any classes from him — he taught stuff I wasn’t interested in — but I had friends who did; all raved. The other day, I opened up my copy of the Law School directory for reasons of nostalgia. There the president is on page 34, under “Lecturers in Law,” between Judson Miner and Stephen Poskanzer. Although I knew President Obama’s biography by heart at this point, one fact in it surprised me: “Before joining Developing Communities Project, he worked as a financial journalist . . . .” Really? A financial journalist? Am I the only one who had no idea about this? As someone who teaches business law, I would love to see the stories the president wrote when he covered finance. If anyone out there has copies, send them my way.

Let me state at the outset some of my prior beliefs. First, I believe in the marketplace of ideas and think that more speech is generally better than less speech. I believe the Founders shared this belief and enshrined it in the “no law” component of the First Amendment. I believe this is especially true for speech about politics. Why else would we allow the Nazis to march in Skokie? Other countries don’t let Nazis march because they (rightfully) view their ideas as repugnant. But we let them march. We do so because we are more confident in our citizens’ ability to know right from wrong, to look beyond rhetoric for substance, and to weigh competing claims of truth. If we didn’t trust the people to make decisions based on all available information, if we didn’t trust the people to filter speech according to its source and content, if we didn’t trust the people to know what is good for them, we wouldn’t let the Nazis march. But we let them march.


Global warming critics have taken two primary approaches. First, they deny the facts, pointing to the incentives for scientists to fudge the data to win prestige and research dollars (see, for example, the East Anglia emails), to the inherent limitations of human-built global climate models that purport to predict the temperature 100 years from now, and to humans’ Chicken-Little tendencies.

Second, they criticize the decision to spend money on global warming now. This could be because other problems are more pressing, because raising energy prices may cause an economic downturn that produces significant human suffering, or because money spent today may turn out to be wasted — either because our predictions about warming are incorrect or because future technologies will be able to solve any future problem much more cheaply. (This last criticism is a question of inter-generational bargaining — imagine someone coming back to us from 100 years in the future and being asked whether we should spend the money now or instead use it to create wealth that future generations could use to solve the problem. The answer is not at all obvious.)
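To see why the answer is not obvious, consider a back-of-the-envelope compounding calculation, sketched in Python below. The growth rate, horizon, and cost figure are all invented for illustration.

```python
# Hypothetical spend-now vs. invest-and-remediate-later comparison.
def future_value(amount: float, annual_growth: float, years: int) -> float:
    """Wealth available in `years` years if `amount` is invested today."""
    return amount * (1 + annual_growth) ** years

spend_now = 1.0e12  # $1 trillion of abatement spending today (invented figure)
growth = 0.02       # assumed 2% real annual return on invested wealth
horizon = 100       # years until the worst costs of warming arrive

alternative = future_value(spend_now, growth, horizon)
print(f"${alternative / 1e12:.1f} trillion available in {horizon} years")
# ~$7.2 trillion
```

On these assumptions, the choice facing our visitor from the future is between $1 trillion of abatement spent a century early and roughly $7 trillion of wealth her generation could devote to remediation or new technology. Which is better turns entirely on contestable assumptions about growth rates and climate damages.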

While there is much to be said, both pro and con, about these criticisms, I want to ask a different question:

What would we do if we knew observed increases in global temperatures were the result of natural causes, say solar or volcanic activity?

If we knew for certain that humans were not to blame, we would focus on reducing the expected costs of any systematic change in global climate instead of trying to stop it. So what would we do? Would we try to develop technologies to counteract the changes, or would we focus more on ameliorating their costs? If the former, what would those technologies be and how much would they cost? Would we be more or less likely to engage in global cooperation and wealth sharing if we were not pointing fingers at each other about who was to blame?

These questions and others in the same vein are worth asking for two reasons. First, it may be that the costs of these strategies would be lower than the costs of trying to stop the problem in the first place. I’m not an expert, or even qualified to make a guess, but it seems like a question worth asking. Second, and more importantly, the questions about remediation and prevention raised above are the same questions we would ask if we accept that there is insufficient political will to stop climate change. This seems like a fair description of today’s reality. Things may change — China and India may be convinced to focus more on the weather in 100 years than on pulling hundreds of millions of their citizens out of poverty, and the US consumer may agree to be a lot poorer to leave a cooler planet to our great-grandchildren — but I doubt it. And until they do, we need to at least think about what we would do if we weren’t to blame.

Looking for something to blame for the Greek debt crisis, some observers are pointing their fingers at credit derivatives. An article in yesterday’s New York Times makes the case that credit default swaps (CDS), and specifically their sale by Goldman Sachs, are at least partly to blame for Greece’s problems.

As I explain in this paper, credit derivatives are merely a financial tool that allows those exposed to credit risk — say, a default by the Greek government or by General Electric — to share that risk with others. This lowers the cost of borrowing and helps spread risk. In addition, third parties with no exposure to the particular credit risk can bet on whether the Greeks will default. These secondary-market transactions are no different from an individual buying stock in General Electric as a bet that it will rise. Importantly, these bets provide a liquid market for credit risk, which lowers the cost of hedging for those with primary exposure and gives the market better information about whether Greece or General Electric is a good credit risk. Those who might lend to the country or company, those conducting other business with it, and those who might face the risk of default in other ways can use this information to better plan their activities. For instance, those who disbelieve a country or company’s claim of financial soundness, say because of funny accounting (think: Enron or, dare I say, America), can use credit derivatives to short its debt, something that was impossible before credit derivatives were invented. This makes debt prices more accurate and holds borrowers, be they sovereigns or corporations, more fully to account.
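For readers unfamiliar with the mechanics, here is a stylized sketch, in Python and with purely hypothetical terms, of the cash flows in a single-name credit default swap. Real contracts are considerably more complicated (standardized coupons, premium accrual on default, recovery auctions), but the sketch shows the basic trade: the protection buyer pays a running premium, and the seller pays out the loss if the borrower defaults.

```python
# Stylized single-name CDS cash flows; terms and numbers are hypothetical.
from typing import Optional

def cds_net_to_buyer(notional: float, spread_bps: float, years: int,
                     default_year: Optional[int], recovery_rate: float) -> float:
    """Undiscounted net cash flow to the protection BUYER over the contract."""
    premium_per_year = notional * spread_bps / 10_000
    net = 0.0
    for year in range(1, years + 1):
        if default_year is not None and year == default_year:
            net += notional * (1 - recovery_rate)  # payout of the loss given default
            break
        net -= premium_per_year                    # annual premium paid to the seller
    return net

# Protection on $10M of (hypothetical) sovereign debt at a 300 bps spread:
print(cds_net_to_buyer(10e6, 300, 5, default_year=None, recovery_rate=0.4))  # -1,500,000: no default
print(cds_net_to_buyer(10e6, 300, 5, default_year=3, recovery_rate=0.4))     #  5,400,000: default in year 3
```

A hedger holding the underlying bonds uses the payout to offset the loss on the bonds; a speculator with no bonds simply profits or loses on the bet, which is what keeps the market liquid and the price informative.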

Of course, there is the potential for abuse. Another New York Times article from a few weeks ago highlights this potential in the CDS market. (Note the parallel between the conflicts of interest across departments at Goldman Sachs and those in the investment-analyst scandals of a few years ago.) But abuse is possible in all markets, and everyone should be in favor of vigorous enforcement against those who try to manipulate markets or trade on undisclosed conflicts of interest. The existence of potential abuse, however, is no more an indictment of credit derivatives generally than it is of the stock market or any other useful tool of society that can sometimes be abused.

It is the ultimate irony that politicians are blaming their problems on a tool that helps reveal their tricks and mistakes. This is akin to a burglar blaming an alarm system for being caught. Sure, the burglar might have been better off without it, but the homeowner and everyone else are glad it was installed.

ROTC on campus

Todd Henderson — 5 March 2010

I’m delighted to see the news that Stanford is considering reinstating ROTC on campus. I served for four years in ROTC at Princeton, and it was one of the highlights of my college years. (President Clinton’s budget cuts — the so-called peace dividend — and an untimely shoulder surgery kept me from serving after graduation.) I am opposed to the government’s discrimination against homosexuals serving in the military, but I have always believed that one way to remedy this is to have people like me serving in the officer corps. I hope more schools, like Yale, where I would have gone had it offered ROTC on campus, will follow Princeton’s and, perhaps, Stanford’s lead.

Every morning on my 1-mile drive to work, I pass two signs expressing outrage about torture — one a hand-made yard sign, the other an ominous black banner hanging from a church window: “torture is wrong.” (Yes, capitalization by e.e. cummings, it seems.) Is it? I’m not sure.

The optimal amount of torture is certainly not zero. Only a zealot would claim otherwise. The simple law-school hypothetical of the ticking time bomb shows the absurdity of the claim: if a nuclear bomb were known to be about to go off in Chicago, and if waterboarding were known to be an effective method of extracting information, and if there were no other way of getting the information (three big “ifs,” I admit, but don’t fight the hypothetical), everyone making the decision at the time would torture. Everyone would prefer to scare someone (without even seriously hurting him) in order to save millions. So we might quibble with the “ifs,” but that is not a rejection of torture as a matter of first principles; it is an objection based on details that are quite contestable. For instance, several high-level intelligence figures here and in other countries claim demonstrable success using these techniques. The debate is described here.

Although not zero, the optimal amount of torture may be small, even extremely small. Torture is thus like killing. Everyone agrees the optimal amount of killing other humans is not zero. I can kill an intruder who enters my house and threatens me; the police can kill under limited circumstances; the state (or at least some states) can kill heinous criminals; and the federal government can kill pretty much at will, sometimes massively and indiscriminately (e.g., the firebombing of Dresden), to protect us from perceived threats. While individuals may differ about the wisdom, efficacy, or legality of some of these, only the most idiosyncratic of us would consistently reject any killing of any kind, especially when the connection with human flourishing is established.

To this point about large-scale violence being perpetrated in our name, I find it odd that there are no signs on my way to work reading “Predator attacks are wrong” or “cluster bombs are wrong,” or decrying any of the other, far more lethal things we do. One might say that torture is worse than killing because the latter is often done in “the heat of battle” or when there are no alternative choices because of an imminent threat. There are two problems with this line of reasoning. First, there are millions and millions of graves filled with the bodies of those killed in cold deliberation and where the threats were hardly obvious. Was the murder of tens of thousands of German civilians necessary to impede the Reich’s war production efforts? Perhaps. But perhaps not. Isn’t it odd to say that we can’t slap a terrorist in the hope of intimidating him into confessing but we can kill that same person (plus his entire family) if we use a Predator drone to attack his house in Waziristan? Ah, you might argue, if he is free, he is a bigger threat than when in custody. This is the second problem with the argument. Lawyers call this fallacy the act-omission distinction. Is the act of setting a bomb (when free in Waziristan) really more threatening than the omission of refusing to tell us where a bomb is set (when held at Bagram Air Base)?

A counterargument to this analogy between killing and torture is that the greater does not always include the lesser — the fact that we may kill does not mean we may do things short of killing. Take the example of animals. Killing animals is socially acceptable — about 9 billion were slaughtered last year in the US for our enjoyment. But torturing animals is generally not socially acceptable. Professional football player Michael Vick served nearly two years in federal prison for his involvement in dog fighting. Why is this? Why doesn’t the right to kill include the right to torture?

To get at this, it is important to remember that we do torture animals. We not only kill them “humanely,” whatever that means; we also subject them to treatment that is akin to torture. A college friend of mine experiments on pigs in the hope of developing more efficacious heart-surgery techniques — the pigs are made sick, subjected to numerous painful surgeries, and then killed. Another friend does research on the brains of chimps — again, these chimps are not happy about it. Why is this form of torture OK (I understand that to some it is not), and yet the use of dogs for entertainment (put aside dog racing, if you can) is not? It must have something to do with the end results. My friends are trying to save human lives, and they are acting as compassionately as they can in dealing with the animals on which they experiment. Michael Vick, on the other hand, wanted a cheap thrill and acted like an animal himself. In other words, we punish not simply because of the impact on the animal, but because of a cost-benefit analysis of that impact and a view about what the conduct tells us about the perpetrator. If your goal is important and the costs are minimized, torture is OK.

This distinction lets us draw a sensible line in the current debate about torture. What rogue soldiers did at Abu Ghraib looks like it was not part of a plan, not well designed, not managed, not intended to resolve an imminent threat, and more revealing about the nature of the people doing the torturing. In short, the perpetrators look more like Michael Vick than like my medical-researcher friends. The waterboarding of high-level al Qaeda operatives, in contrast, looks more like the research. Sure, it makes us uncomfortable to think about our government doing this, but doesn’t it also make us uncomfortable to think about the pigs and the chimps, not to mention the children incinerated in Dresden? Our top intelligence officials believe “coercive interrogation methods” gave us “deeper understanding of the al Qaeda organization that was attacking this country.” Why don’t we believe them?

Perhaps we do. My hope is that the compromise we’ve reached is to publicly condemn torture but to privately signal that in extreme cases we will forgive those who do it. This is probably the best legal regime for handling something like this. The ban on torture is good PR, but more importantly, it puts the onus on potential torturers to make sure that when they do torture, it is in the extreme, ticking-time-bomb case. In other words, legal uncertainty about torture means we will have less of it: if we believe the optimal amount of torture is less than the amount that occurred before the ban, the ban should be expected to lower it. But the use of pardons after torture is revealed should provide sufficient protection for those who are certain, under the circumstances, that torture is the right decision. Of course, for this to work well — to generate the efficient level of torture, if you will — the public needs to have the debate in a sensible way. All the hand-wringing and political posturing is not helping, since it likely gives our intelligence officials less comfort that a pardon would be forthcoming, even in extreme cases. Better that we face the cold, hard realities of the world with a pragmatic view than that we simply condemn something because it gives us the willies. Killing animals or children gives me the willies too, but sometimes, tragically, it must be done.

My wife makes me subscribe to the New York Times, and occasionally it is worth it. Take this recent essay by Roger Cohen. It is difficult to get past the faux-intellectual babble — “As it is, everyone’s shrieking their lonesome anger, burrowing deeper into stress, gazing at their own images” — but if you can resist laughing or immolating yourself to escape Cohen’s drivel, you’ll get to a tremendous claim. Cohen writes:

Americans don’t want a European nanny state — fine! But, as a lawyer friend, Manuel Wally, put it to me, “When it comes to health it makes sense to involve government, which is accountable to the people, rather than corporations, which are accountable to shareholders.”

Some thoughts about Mr. Cohen’s claim about corporations:

1. Health care is indeed essential to our wellbeing, but so are food, water, entertainment, productive work, transportation, and on and on. I assume Mr. Cohen eats food from supermarkets, dines at restaurants, drinks bottled water, flies across the ocean, and drives a car. But these are goods and services provided by corporations! Unaccountable corporations! It is a short leap from his absurd claim to nationalization of the means of production, and an inevitable step from there to totalitarianism, murder, starvation, and chaos.

2. Shareholders are people. Who exactly does Mr. Cohen think owns our corporations? We do. The “people” who allegedly can hold our government to account are exactly the same people who can hold our corporations to account. If corporations are not accountable to us, the fault, dear Brutus, lies not in the stars but in ourselves.

3. Neither the government nor corporations are perfectly accountable or perfectly aligned with the people’s interests. The question is which is more capable and accountable under specific circumstances. Markets for labor, capital, and products and services provide discipline for firms; elections provide discipline for governments. Sometimes we can rely on the former — no one believes we would have a better Internet search engine, grocery store, or computer if it were provided by Uncle Sam instead of Google, Whole Foods, or Apple. Sometimes we must rely on the latter — no one believes we could reduce acid rain without government involvement in one way or another. There is a reasonable debate to be had about how well-functioning the market for delivering health care is and how we might improve it, but that debate is not advanced at all by absurd claims like Mr. Cohen’s. We law professors tinker at the edges to try to make governments and corporations more accountable, but to flatly assert that one always dominates the other is just sophistry.

4. Does Mr. Cohen really think the government is always accountable to the people? Let me guess that Mr. Cohen was opposed to the invasion of Iraq. Who exactly does he think did this? Hint: it wasn’t a corporation.

5. If Mr. Cohen’s problem with shareholders is the profit motive, he might want to do some research on the provision of health care by non-profit hospitals, clinics, and insurance companies. In many states, the big, terrible insurance companies are non-profit corporations. Are these unaccountable too? If so, why? Just because the individuals behind them chose to organize as a “corporation”? If that is the reason, perhaps Mr. Cohen would be surprised to learn that the City of Chicago is also a corporation.

6. Although I could go on and on and on about Mr. Cohen’s simplistic and downright silly analysis, I’ll just point out that Mr. Cohen himself works for an unaccountable, greedy, terrible, world-destroying corporation called the New York Times. Perhaps we should be afraid.