
According to Senators Barbara Boxer, Jeanne Shaheen, and Patty Murray, the Catholic Church is the real bully in the fight over whether religious employers must include coverage for contraception in the insurance policies they offer their employees.  In yesterday’s Wall Street Journal, the three responded to, in their words, the “aggressive and misleading campaign” against this new Obamacare mandate.  They wrote:

Those now attacking the new health-coverage requirement claim that it is an assault on religious liberty, but the opposite is true.  Religious freedom means that Catholic women who want to follow their church’s doctrine can do so, avoiding the use of contraception in any form.  But the millions of American women who choose to use contraception should not be forced to follow religious doctrine, whether Catholic or non-Catholic.

The three Senators seem to believe that as long as the government doesn’t force Catholic women to use birth control and the morning after pill, religious liberty is protected.  They also believe that in praying to the Almighty One (not that Almighty One) for permission not to pay for a medical intervention that offends their deeply and sincerely held religious beliefs, Catholic officials are trying to force women to follow their religious doctrine.

That’s ridiculous, and it shows how desperate the defenders of President Obama’s intrusion on individual conscience have become.  In a world in which religious employers were exempt from paying for a measure that violates their sacred beliefs, any woman who didn’t share those beliefs would be perfectly free to obtain birth control.  The Catholic Church, after all, doesn’t have the power to overrule Griswold v. Connecticut.

By contrast, in the world of Mr. Obama’s contraception mandate, Catholic officials who choose to follow their consciences by refusing to subsidize interventions that violate their religious beliefs may ultimately be thrown in jail.  That, Honorable Senators, is a full-frontal assault on religious liberty.

[More on the deeply misguided contraception mandate here.]

Stan Liebowitz (UT-Dallas) offers a characteristically thoughtful and provocative op-ed in the WSJ today commenting on SOPA and the Protect IP Act.  Here’s an excerpt:

You may have noticed last Wednesday’s blackout of Wikipedia or Google’s strange blindfolded-logo screen. These were attempts to kill the Protect IP Act and the Stop Online Piracy Act, proposed legislation intended to hinder piracy and counterfeiting. The laws now before Congress may not be perfect, and they can still be amended. But to do nothing and stay with the status quo is to keep our creative industries at risk by failing to enforce their property rights.

Critics of these proposed laws claim that they are unnecessary and will lead to frivolous claims, reduce innovation and stifle free speech. Those are gross exaggerations. The same critics have been making these claims about every previous attempt to rein in piracy, including the Digital Millennium Copyright Act that was called a draconian antipiracy measure at the time of its passage in 1998. As we all know, the DMCA did not kill the Internet, or even do any noticeable damage to freedom—or to pirates.

Scads of Internet pundits and bloggers have vehemently argued that piracy is really a sales-promoting activity—because it gives people a free sample that might lead to a purchase—or that any piracy problems have been due to a failure of industry to embrace the Internet. Yet these claims are little more than wishful thinking. Some reflect a hostility to commercial activities—think Occupy Wall Street, or self-interest. Others make “freedom” claims on behalf of sites that profit by helping individuals find pirate sites, makers of complementary hardware, or companies that benefit from Internet usage and collect revenues whether the material being accessed was legally obtained or not.

In my examination of peer-reviewed studies, the great majority have results that conform to common sense: Piracy harms copyright owners. I was also somewhat surprised to discover that the typical finding of such academic studies was that the entire enormous decline that has occurred is due to piracy.

Contrary to an often-repeated myth, providing consumers with convenient downloads at reasonable prices, as iTunes did, does not appear to have ameliorated piracy at all. The sales decline after iTunes exploded on the scene was about the same as the decline before iTunes existed. Apparently it really is difficult to compete with free. Is that really such a surprise?

Do check out the whole thing.

A thought experiment:

It’s late January 2016.  Newt Gingrich is President.  The House of Representatives is solidly Republican, and there’s a slight Republican majority in the Senate.  Because Republicans lack a filibuster-proof majority in the Senate, the Affordable Care Act (a.k.a. Obamacare) remains on the books.  (The reconciliation process, which allowed the law to be enacted without supermajority support in the Senate, could not be used to repeal the law.)  The Act continues to require employer-provided insurance to provide full coverage for all preventive care measures.

Secretary Rick Santorum of the Department of Health and Human Services has determined that conversion therapy for gay males will help prevent all sorts of costly health problems.  HIV and related health problems, it seems, are extremely costly to treat and are far more common among gay men than among straight men.  HHS has determined that the most modern conversion therapies can cheaply and successfully alter sexual orientation or, at a minimum, reduce homosexual impulses so that they can be managed by homosexually oriented patients who would prefer not to engage in homosexual activity.

President Gingrich and Secretary Santorum have therefore mandated that employer-provided health insurance policies cover gay conversion therapies.  Claiming to be sensitive to the concerns of gay groups, they have included a narrow exemption for employers who don’t employ or serve significant numbers of straight people.  In reality, though, none of the major gay and lesbian advocacy groups (e.g., the Human Rights Campaign, GLAAD) or publishing organizations (e.g., The Advocate, OUT Magazine) could qualify for this exemption because all employ a great many gay-affirming straight people and include outreach to heterosexuals as one of their objectives.     

Can you imagine the howls from the New York Times, the television networks, and basically every other political commentator in America?  Andrew Sullivan might just explode.  And rightly so.  Forcing gay groups to pay for a procedure that so deeply offends their core principles would be beyond the pale in a liberal society that respects personal conscience and the right of individuals to associate in groups that share their values – a right that can exist only if groups are allowed to express those values and, to the extent they aren’t hurting others, order their affairs according to them.  

So why do President Obama and HHS Secretary Kathleen Sebelius get a pass when they order Catholic schools, hospitals, and social service agencies to cover birth control, sterilization, and the morning after pill?  The ridiculous “exemption” they created shows how little they know about what churches actually do:  Christ’s apostles themselves wouldn’t have qualified because they, like any church worth its salt, served multitudes of nonbelievers.  Providing an extra year to come into compliance does nothing to alleviate the fundamental problem (Is the doctrinal conflict going to disappear next year?) and is a transparent attempt to deflect media attention until after the 2012 election.  There are lots of Catholics in Ohio and Pennsylvania, after all.

One might say that my analogy fails because the science doesn’t show that gay conversion therapy actually works, and it therefore wouldn’t reduce total health care costs.  But that’s beside the point.  Even if there were a therapy that could cheaply and effectively make gay people straight (e.g., a pill or a quick surgical procedure), it would still be inappropriate to force groups whose central objective is to affirm gay people and fight anti-gay bias to provide coverage for such a therapy.

My point is not to defend the Catholic Church’s views on birth control (with which I disagree), to defend gay conversion therapy (which I think is a harmful crock), or to question the mission of gay rights organizations.  Instead, I mean to point out that governments in liberal societies do not force individuals or voluntary associations to violate their consciences where their conscience-following does not violate the rights of others.  Yet another example of Obamacare’s heavy hand.

Poets vs. capitalists

Larry Ribstein —  17 December 2011

Eric Felten, writing in yesterday’s WSJ, observes the hypocrisy of the poets who withdrew from competition for the T.S. Eliot Poetry Prize because it was funded by a financial firm. “Hedge funds are at the very pointy end of capitalism,” sniffed one self-described “anti-capitalist in full-on form.” The anarchist vegan correctly observed that the funder’s business “does not sit with my personal politics and ethics.”

Felten notes that modern winners of a poetry prize do not “expect the florid lickspittlery once lavished on those who provided artists their livings.” He also calls out the hypocrisy of a poet who turned down a hedge-funded prize but wasn’t too shy to acknowledge the support of

the ‘Arts for Everyone’ budget of the Arts Council of England’s Lottery Department. Which is to say, she’s happy to bank the cash culled from the easy marks who pay the stupidity tax, but not the earnings of a mainstream investment firm. * * * So let’s get this straight: If the investment bankers’ money is grudgingly handed over to the taxman it’s squeaky clean. But if it is given voluntarily, the lucre is filthy. What an odd and upside-down moral equation.

But Felten shouldn’t have focused all of his ire on poets.  I have written about American filmmakers who similarly find a lot to dislike in the capitalists who support their work but have little problem with government.

Filmmakers imagine finance

Larry Ribstein —  14 November 2011

Margin Call is the best film to come out of the recent financial crisis. This is no polemic masquerading as a “documentary” (Inside Job) or good vs. evil melodrama (Money Never Sleeps). It is a serious film, with superb acting, script, direction and photography, which uses the financial crisis as the realistic backdrop for a timeless story.

And yet the film is fatally flawed. Its serious qualities make more transparent its defects, which it shares with most films about business—filmmakers’ sour view of capitalists, which colors their view of business and perennially hobbles their efforts to make credible films about the business world.

In a nutshell: Eric Dale (Stanley Tucci), a risk-management employee of a large securities firm, becomes one of many casualties of a downturn in the firm’s business. On the way out the door he hands subordinate Peter Sullivan (Zachary Quinto) a USB drive. Sullivan, a rocket scientist who chose a career in finance, learns from this information that the firm’s substantial mortgage-backed security portfolio is based on a flawed real estate pricing model and now threatens the firm’s financial soundness. Moreover, since the rest of Wall Street used the same model, the whole financial world is vulnerable (obviously an oversimplification of the causes of the financial crisis, but this is movieland). The revelation works its way up the corporate hierarchy, including executive Sam Rogers (Kevin Spacey), all the way to the top, CEO John Tuld (Jeremy Irons). Tuld and Rogers must decide whether to solve the firm’s problem by dumping the portfolio on its unsuspecting customers.

Unlike so many films about business, this one makes the business credible. The audience understands the setup. Though a few of the firm’s employees, including Dale, had an inkling this could happen, nobody acted on this information. The firm’s dilemma is also clear: selling the securities could save the firm in the short term but destroy it in the longer term because the firm will lose its customers’ trust. Hence the tension between the coldblooded Tuld and the conscience-ridden Rogers. This realism contrasts starkly with the hokey business scenarios in films like Wall Street I and II, which derived their limited dramatic power more from foreboding atmospherics than inherent logic.

Margin Call also differs from other business films in the depth of its characters and absence of obvious villains. There is no looming “corporation” that somehow is able to motivate its employees to behave like evil automatons. Here the corporation dissolves into its all-too-human employees.

Having shed the defects of the typical business film, Margin Call had a chance at greatness. Lurking in the film is an existentialist core, the story of how a crisis brings people to question the worth of what they are doing. While they were surfing the financial wave, the universe was in perfect harmony, where hard work created deserved wealth and happiness all around. But when the wave crashes, their world loses its meaning. Finance looks like a zero-sum game, a way to transfer wealth from starving dogs to fat cats, as Tuld says. Nothing is immune. Dale laments leaving his former career as an engineer, where he built a bridge that saved time and money. But his former subordinate Will Emerson (Paul Bettany) points out that maybe the drivers wanted to take the long way around. Rogers says digging holes would be better than what he does. At least he loves his dying dog and clings to it as his anchor. But then the dog dies and ends up in the hole he has dug. Where is the value?

In a better world, the film’s characters might have confronted the void and, possibly, found something to hold on to. But instead the deeper message vanishes, leaving the simplistic point that the problem lies in the financiers and their sandcastles built of money. The characters are moral monsters obsessed with how much they and others make. When they flank a cleaning lady on the elevator, we see and hear their nasty conversations through her eyes.

The characters’ search for meaning might, in this better world, have started with their jobs. But their self-rationalizations are lame. Tuld says, “It’s not wrong,” but the only reason he can offer is that “it’s all just the same thing over and over; we can’t help ourselves,” followed by a list of years of financial crashes in recent world history. Will Emerson says, “If you really wanna do this with your life, you have to believe you’re necessary.” But the only necessity he finds is that “people wanna live like this in their cars and big . . . houses they can’t even pay for.” The film judges the characters for us — the cleaning lady, Rogers left with nothing but his dead dog, his childless female subordinate Sarah Robinson (Demi Moore), who threw her life away for an empty career, Tuld’s death’s-head face.

This is what happens to so many films about business. In my study of films about business and my law review articles How Movies Created the Financial Crisis and Imagining Wall Street, I see a common theme: The artists who make films resent and distrust the capitalists who provide their money under the condition that the artists satisfy merciless markets that have no time for art. Of course the market’s judgment has to be shown to be irrational. So capitalism is often presented as a zero-sum game, where results depend on chance. Crashes happen, and people suffer. It has nothing to do with anything real.

In most business films (e.g., Oliver Stone’s Wall Street), this diminutive narrative of business shrinks the whole film: the characters are cardboard, the drama forced, the technical features marshaled to shore up the weaknesses. But since Margin Call is a serious film, its failure to fulfill its promise is more obvious. This film forces us to consider why filmmakers are so unable to reckon with the lives that so many Americans lead within large firms.

Perhaps the most prominent American filmmaker who could create a plausible narrative of big business was Billy Wilder. His films, such as The Apartment and Double Indemnity, had characters who found personal meaning even if some of their co-workers had not. But, then, Wilder was not subject to the anti-capitalist disease of modern filmmakers. He had not led his entire life in Hollywood or in movie theaters. His early years in Nazi Germany made him appreciate that free enterprise was not the worst thing in the world.

There was another story to be told in Margin Call, if only the filmmakers had been receptive to it. Finance is not basically a zero-sum game. It brings together the resources that create the worthwhile dreams that people do have. Where did the money come from to build Eric Dale’s bridge? The financiers who assembled the cash to build the construction and design firms were as responsible for the bridge as the engineers who worked for those firms. Financial engineering creates not just instruments only rocket scientists can understand, but also the institutions that encourage investors to hand over their money.

If finance, even so envisioned, is worthless, then we can more readily believe that the rest of the world is, too. But we are also receptive to an existentialist construction of a reason to live. In the end, Rogers might have found that reason in constructing a financial solution to the financial dilemma instead of caving in to Tuld’s demand for a short-term solution that sacrificed both the firm’s customers and its own reputation. Or Rogers might have rejected this solution and taken the cash, just as Fred MacMurray succumbed to murder in Double Indemnity. But at least we would have seen that finance gave him the same kind of choices that people have in other walks of life.

In the end the film can claim at least one important accomplishment. It shows that a realistic portrayal of business can be dramatic. Business does not have to be a generic prop. But it also shows that filmmakers’ anti-finance bias has real artistic costs. Filmmakers’ impoverished narrative of business can dilute the drama inherent in what so many people do with their lives.

Note:  This review was written for the Atlas Society’s Business Rights Center and was first published on their website.  My thanks to the Atlas Society for encouraging me to think and write about this film.

Welcome Baby 7B!

Thom Lambert —  31 October 2011

According to the United Nations, sometime around Halloween a newborn baby will push the world’s population above seven billion people.  Welcome to our spectacular planet, Little One!

I should warn you that not everyone will greet your arrival as enthusiastically as I.  A great many smart folks on our planet—especially highly educated people in rich countries like my own—have fallen under the spell of this fellow named Malthus, who once warned that our planet was “overpopulated.”  Although Mr. Malthus’s ideas have been proven wrong time and again, his smart and influential disciples keep insisting that your arrival spells disaster, that this lonely planet just can’t support you. 

Now my own suspicion is that modern day Malthusians, who are smart enough to know that actual events have discredited their leader’s theories, continue to parrot Mr. Malthus’s ideas because they lend support to all manner of governmental intervention into private affairs.  (These smarty-pants Malthusians, who are well-aware of their own intelligence, tend to think they can arrange things better than the “men and women on the spot” and are constantly looking for reasons to go meddling in others’ business!)  Whatever their motivation, Mr. Malthus’s disciples just won’t shut up about how our planet is overpopulated.

You should know, though, that this simply isn’t true.  The first time you hear one of Mr. Malthus’s followers decrying your very existence by insisting that our planet is overpopulated, you should ask him or her:  “Overpopulated relative to what?”  Modern Malthusians can never give a good answer to that question, though they always try.

Sometimes they say “living space.”  But that’s plain silly.  Our planet is really pretty huge.  Indeed, if all seven billion people on the planet moved to the state of Alaska, each person would have about 2,300 square feet of living space!  Now I realize lots of cities get crowded, but that’s because people choose to live in those areas—they’ve decided that the benefits of enhanced economic opportunity in a densely populated area outweigh the costs of close confines.  If they really wanted extra living space, they could easily find it in our planet’s vast uninhabited (or sparsely inhabited) regions.
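A quick back-of-the-envelope check bears that figure out.  The land-area and population numbers below are rough assumptions of mine, so treat this as an illustrative sketch rather than a precise calculation:

    # Rough sanity check of the 2,300-square-feet claim (assumed figures:
    # Alaska land area of about 570,000 square miles; 7 billion people).
    alaska_land_sq_mi = 570_000
    sq_ft_per_sq_mi = 5_280 ** 2              # 27,878,400 square feet per square mile
    world_population = 7_000_000_000

    sq_ft_per_person = alaska_land_sq_mi * sq_ft_per_sq_mi / world_population
    print(round(sq_ft_per_person))            # about 2,270 -- roughly 2,300 square feet apiece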

Sometimes modern day Malthusians say the planet is overpopulated relative to available food.  Wrong again.  In the nations of the world where institutions have evolved to allow people to profit from coming up with new ideas that enhance welfare, individuals have developed all sorts of ways to get more food from less land.  Accordingly, food production has always outpaced population growth.  Now, modern day Malthusians will probably tell you that food prices have been rising in recent years — a sign that food is getting scarcer relative to people’s demand for it.  But that’s because governments, beholden to powerful agricultural lobbyists, have been requiring that huge portions of agricultural output be diverted to fuel production even though the primary biofuel (ethanol) provides no environmental benefit.  As usual, it’s actually bad government policy, not population growth, that’s creating scarcity.

In recent days, Mr. Malthus’s disciples have insisted that the world is overpopulated relative to available resources.  Nothing new here.  Back in the 1970s, lots of smart folks contended that the earth was quickly running out of resources and that drastic measures were required to constrain continued population growth.  One of those smarty pants was Stanford University biologist Paul Ehrlich, who, along with his wife Anne and President Obama’s science czar John Holdren, asked (in all seriousness): “Why should the law not be able to prevent a person from having more than two children?”  (See Paul R. Ehrlich, Anne H. Ehrlich & John P. Holdren, Ecoscience 838 (1977).)  (Ehrlich also proclaimed, in his 1968 blockbuster The Population Bomb, that “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate.”)

In 1980, Prof. Ehrlich bet economist Julian Simon (a jolly fellow who would have welcomed your birth!) that the booming population would raise demand for resources so much that prices would skyrocket.  Mr. Simon thought otherwise and therefore allowed Prof. Ehrlich to pick five metals whose price he believed would rise over the next decade.  As it turns out, the five metals Prof. Ehrlich selected — chromium, copper, nickel, tin, and tungsten — fell in price as clever, profit-seeking humans discovered both how to extract more from the earth and how to substitute other, cheaper substances.  Mr. Simon was not at all surprised.  He recognized that the long-term price trend of most resources points downward, indicating that resources are becoming more plentiful, relative to human needs, over time.  (Modern Malthusians may point to some recent price trends showing rising prices for some resources, especially precious metals.  It’s likely, though, that those price increases are due to the fact that central banks all over the world have been creating lots and lots of money, thereby threatening inflation and causing investors to hold their wealth in the form of commodities.)

The fundamental mistake Mr. Malthus’s disciples make, Little One, is to assume that our planet is the ultimate source of resources.  That’s just not true.  Our planet does contain lots of useful “stuff,” but it’s human ingenuity — something only you and those like you can provide — that turns that stuff into “resources.”  Take oil, for instance.  For most of human history, messy crude oil was a source of annoyance for landowners.  It polluted their water and fouled their property.  But when whale oil prices started to rise in response to scarcity (or, put differently, when the world started to look “overpopulated” relative to whale oil), some clever, profit-seeking folks discovered how to turn that annoyance into kerosene and, eventually, gasoline.  Voila!  A “resource” was created!

Just as people once worried about overpopulation relative to whale oil supplies, lots of folks now worry about overpopulation relative to crude oil.  Well I’m not that worried, and you shouldn’t be either.  As oil prices rise, more and more clever profit-seekers will turn their energies toward finding new ways to obtain oil (e.g., hydraulic fracturing), new techniques for reducing oil requirements (e.g., enhanced efficiency), and new substitutes for oil (e.g., alternative fuels).  Mr. Malthus’s disciples will continue to fret about the limits to growth, but the historical record is clear on this one:  Human ingenuity — the ultimate resource — always outpaces the diminution in useful “stuff.”

And so, Little Resource, your arrival on our planet should be celebrated, not scorned!  As you and your fellow newborns flex your creative muscle, you’ll develop new sources of wealth for the world.  As you do so, birth rates will plummet, as they typically do when societies become wealthier, and the demand for a cleaner environment, demand that rises with wealth, will grow.  We therefore need not worry about “overpopulation.”

We do, though, need to ensure the survival of those institutions — property rights, free markets, the rule of law — that encourage resource-creating innovation.  I, for one, promise to do my best to defend those institutions so that you and your fellow newborns can add to our planet’s resource base.

First, Google had the audacity to include a map in results for search queries suggesting a user wanted a map.  Consumers liked it.  Then came video.  Then, they came for the beer:

Google’s first attempt at brewing has resulted in a beer that taps ingredients from all across the globe. They teamed up with Delaware craft brewery Dogfish Head to make “URKontinent,” a Belgian Dubbel style beer with flavors from five different continents.

No word yet from Google’s antitrust-wielding critics on whether integration into beer will exclude rival vertical search engines that, without access to the beer, have no chance to compete.  Yes, there are specialized beer search sites if you must know (or local beer search).  Or small breweries that, because of Google’s market share in search, cannot compete against Dogfish Head’s newest product.  But before we start the new antitrust investigation, Google has offered some new facts to clarify matters:

Similarly, the project with Dogfish Head brewery was a Googler-driven project organized by a group of craftbrewery aficionados across the company. While our Googlers had fun advising on the creation of a beer recipe, we aren’t receiving any proceeds from the sale of the beer and we have no plans to enter the beer business.

Whew.  What a relief.  But I’m sure the critics will be watching, just in case, to see if Dogfish Head jumps in the search rankings.  Donating time and energy to the creation of beer is really just a gateway to more serious exclusionary conduct, right?  And Section 5 of the FTC Act applies to incipient conduct in the beer market, clearly.  Or did the DOJ get beer-related Google activities in the clearance arrangement between the agencies?

One of my colleagues recently accepted a publication offer on a law review article, only to receive a later publication offer from a much more prestigious journal.  This sort of occurrence is not uncommon in the legal academy, where scholars submitting articles for publication do not offer to publish their work in a journal but rather solicit publication offers from journals (and generally solicit multiple offers at the same time).  One may easily accept an inferior journal’s offer before receiving another from a preferred journal. 

I’ve been in my colleague’s unfortunate position three times: once when I was trying to become a professor, once during my first semester of teaching, and once in the semester before I went up for tenure.  Each time, breaching my initial publication contract and accepting the later-received offer from the more prestigious journal would have benefited me by an amount far greater than the harm caused to the jilted journal.  Accordingly, the welfare-maximizing outcome would have been for me to breach my initial publication agreement and to pay the put-upon journal an amount equal to the damage caused by my breach.  Such a move would have been Pareto-improving:  I would have been better off, and the original publisher, the breach “victim,” would have been as well off as before I breached.  

As all first-year law students learn (or should learn!), the law of contracts is loaded with doctrines designed to encourage efficient breach and discourage inefficient performance.  Most notable among these is the rule precluding punitive damages for breach of contract:  If a breaching party were required to pay such damages, in addition to the so-called “expectancy” damages necessary to compensate the breach victim for her loss, then promisors contemplating breach might perform even though doing so would cost more than the value of the performance to the promisee.  Such performance would be wasteful.
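To make the “perform or pay” calculus concrete, here is a minimal sketch of the efficient-breach logic.  The dollar figures are purely hypothetical (they are not drawn from any actual offer); the sketch simply restates the rule that breach plus expectancy damages beats performance whenever the breacher’s gain exceeds the victim’s loss:

    # Hypothetical numbers illustrating efficient breach: breach (and pay
    # expectancy damages) whenever the breacher's gain exceeds the victim's loss.
    value_of_better_journal = 5_000.0   # assumed incremental benefit to the author
    harm_to_jilted_journal = 1_000.0    # assumed expectancy damages owed on breach

    def efficient_to_breach(benefit: float, damages: float) -> bool:
        """Breach is welfare-enhancing when the gain from breaching exceeds
        the damages needed to make the non-breaching party whole."""
        return benefit > damages

    if efficient_to_breach(value_of_better_journal, harm_to_jilted_journal):
        surplus = value_of_better_journal - harm_to_jilted_journal
        print(f"Breach, pay damages, and keep a surplus of {surplus:.0f}.")
    else:
        print("Perform: breaching would destroy more value than it creates.")

Because the breacher pays only what it takes to make the jilted party whole, both sides end up at least as well off as under performance, which is the Pareto point made above.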

So why didn’t I — a contracts professor who knows that a promisor’s contract duty is always disjunctive: “perform or pay” — breach my initial publication agreements and offer the jilted journal editors some amount of settlement (say, $1,000 for an epic staff party — an amount far less than the incremental value to me of going with the higher-ranked journal)?  Because of a silly social norm frowning upon such conduct as indicative of a flawed character.  When I was looking for a teaching job, I was informed that breaching a publication agreement is a definite no-no and might impair my job prospects.  After I became a professor, I learned that members of my faculty had threatened to vote against the tenure of professors who breached publication agreements.  To be fair, I’m not sure those faculty members would do so if the breaching professor compensated the jilted journal, effectively “buying himself out” of his contract.  But who would run that risk?

So I empathize with my colleague who now feels stuck publishing in the less prestigious journal.  And, while I recognize the difference between a legal and moral obligation, I would commend the following wise words to those law professors who would imbue law review publishing contracts with “mystic significance”:

Nowhere is the confusion between legal and moral ideas more manifest than in the law of contract.  Among other things, here again the so-called primary rights and duties are invested with a mystic significance beyond what can be assigned and explained.  The duty to keep a contract at common law means a prediction that you must pay damages if you do not keep it — and nothing else.  If you commit a tort, you are liable to pay a compensatory sum.  If you commit a contract, you are liable to pay a compensatory sum unless the promised event comes to pass, and that is all the difference.  But such a mode of looking at the matter stinks in the nostrils of those who think it advantageous to get as much ethics into the law as they can.

Oliver Wendell Holmes, Jr., The Path of the Law, 10 Harv. L. Rev. 457 (1897).  

Today’s WSJ covers Hollywood’s treatment of business.  And so, of course, they went to the Source (link added):

Hollywood has been famously left-leaning for decades, even as it teemed with shrewd business operators. Larry Ribstein, a professor of law at the University of Illinois who wrote a paper called “Wall Street and Vine” about the historically negative portrayal of business in film, concludes that the ongoing antipathy to corporate execs in films has nothing to do with politics. Rather, many creative types—notably screenwriters and directors—are expressing their own perennial resentment of bottom-line focused studio heads, who often seek to dilute a film’s message for mass-market appeal.

Orson Welles, director of the 1941 classic “Citizen Kane,” about a ruthless media mogul based on William Randolph Hearst, detested interference and famously refused to allow studio executives to visit the set. “He was feeling that artist resentment,” says Mr. Ribstein.

The Journal article has interesting background on the current “Margin Call,” which it describes as unusually fair to business, and suggests this is because the father of director J.C. Chandor worked for Merrill Lynch:

A low-budget movie with a high-powered cast, its Wall Street characters are flawed, cynical—but, for once, actually human. * * *.

Mr. Chandor says he wanted to draw a more balanced portrait of the financiers who were being demonized in the media for causing the global economic collapse. The caricatures of executives being denounced by politicians at the time bore little resemblance to Mr. Chandor’s dad, he says.

I wonder how sympathetic the film comes out. I remember another director whose father worked in the securities industry — Oliver Stone.  (My article about Wall Street discusses, among other things, all the father-son threads in the movie).

Before concluding that “there ought to be a law” to remedy an unhappy situation, one should ask whether it’s really a law that’s causing the problem in the first place.  I was reminded of that principle this afternoon when I read some remarks by Michael Pollan, doyen of the “slow food” movement, in today’s New York Times Magazine.    

Responding to the question, “How can you tell if food is genetically engineered?,” Mr. Pollan answered:

You can’t, unless you’re willing to move to Europe or Japan, where the government requires that it be labeled.  Ours doesn’t, so there’s no way to tell.  This is despite the fact that 80 to 90 percent of Americans tell pollsters they want it labeled, and Barack Obama, as a candidate, once promised to make it happen.  But the industry is afraid you won’t buy genetically modified foods if they’re labeled – and they’re probably right.

Mr. Pollan is correct on a couple of matters here.  First, lots of people do want to know if they’re eating genetically modified food.  My own view is that this is silly.  Nearly all food products are, and for generations have been, “genetically modified” via hybridization, selective breeding, etc., and the scientific consensus is that there’s no increased risk when the modification occurs in a laboratory rather than the old-fashioned way.  Nevertheless, many consumers do care about whether their food is genetically modified in the newfangled manner, and who am I (or you, or the government) to tell them that their preferences are invalid?  Second, Mr. Pollan is correct in asserting that Big Ag doesn’t want to label GM food products, lest people refuse to buy them (or reduce the price they’re willing to pay for them).  Indeed, large agribusinesses have lobbied vociferously against mandatory GM labeling rules like those imposed in Europe and Japan.

But Mr. Pollan is wrong to insinuate that regulations mandating GM labeling are necessary if consumers are to know whether food products are genetically modified.  Given that a great many consumers disfavor genetic modification (of the newfangled variety, at least), one would expect entrepreneurs to produce non-GM foods and to tout the pedigree of their products.  By engaging in “voluntary negative” labeling (e.g., “GM Free”), producers could boost demand for their products and provide consumers with useful information about GM status.  Just as mandatory labeling of “Gentile” food isn’t necessary to enable observant Jews to fulfill their preferences for kosher options, the government need not mandate GM labeling in order to protect the interests of GM-phobes.

This assumes, though, that producers of non-GM products, like producers of kosher foods, are free to label their products as such.  Unfortunately for consumers who would prefer to avoid newfangled genetic modification (the old-fashioned type is ubiquitous and unavoidable), current regulations hinder the sort of voluntary negative labeling that could accommodate heterogeneous preferences.  Under an FDA Industry Guidance ostensibly aimed at fraud prevention (and drafted with significant input from Monsanto), sellers of non-GM foods are precluded from:

  1. Using acronyms such as “GM” or “GMO” (according to FDA, saying something is “non-GM” or “non-GMO” is misleading because people don’t understand these acronyms);
  2. Utilizing the term “genetically modified” (according to FDA, saying that a non-gene-transferred organism is not genetically modified is misleading because nearly all foods have been genetically modified through cross-breeding);
  3. Referring to “organisms” or “GMOs” (according to FDA, a food label touting the absence of GMOs is misleading because it implies that foods which are not GMO-free contain “organisms” — that is, living things);
  4. Claiming to be GMO “free” (according to FDA, a claim that a product is GM “free” implies a complete absence of GM material, and it’s very difficult to ensure that there are no trace amounts of GM material in a food item); and
  5. Asserting any implication of superiority (according to FDA, any label that implies that the food product is superior because it lacks GM material misleadingly implies that non-GM is superior).

In light of this guidance from a captured regulatory agency, sellers of non-GM foods are essentially forced to label their products as though they were playing the board game “Taboo,” in which players provide clues to their partners to identify a word but, in doing so, are forbidden to say any of the words one would most naturally use in conveying clues.  Given the laundry list of terms the FDA has declared to be “taboo,” it should not be surprising that producers of non-GM products have not been able to market their products effectively.  And if you can’t market them, why produce them in the first place?

At the end of the day, then, Mr. Pollan is wrong to place the blame for consumer ignorance of GM-status on governmental inaction.  It is affirmative government regulation – not its absence – that precludes consumers from telling if their food is genetically engineered.

We classical liberals are often criticized for undermining communitarian values by emphasizing individual liberties.  In reality, though, a liberal society (in the classical sense, not the welfare-state sense) fosters community by allowing people to associate in ways they find most meaningful.  Indeed, one of the great things about a liberal, live-and-let-live city is that it can accommodate so many communities that cater to different preferences and values:  Orthodox Jews, devout Muslims, evangelical Protestants, gays and lesbians, and various ethnic groups can create their own little communities to foster shared values.  As long as nobody injures the person or property of another, folks are free to commune as they will.

Unfortunately, the sort of liberalism that fosters the spontaneous formation of community groups can be tough to maintain, especially when governments create regulatory bodies charged with “protecting” people from improvident choices.  Those regulators, under constant pressure to “do something” in order to protect their turf, often impose rules that prevent people from communing as they will, even when they’re not hurting anybody else.

I was reminded of this point yesterday when I read that the Bloomberg administration, in the name of “public health,” is cracking down on bars that allow dogs (even in outdoor areas).  How sad for New York City.  Nothing builds community better than a collection of spaces — bars, coffee shops, diners, etc. — where neighbors can go to relax, converse, and share their lives.  And nothing is more likely to keep people coming back and to get them talking to each other than to allow them to bring their dogs.  If you don’t believe me, head down to your local dog park and watch people interact.  Nobody’s a stranger at the dog park.

Of course, there are lots of people who are scared of dogs, or don’t like them, or believe that their mere presence renders a place unsanitary (even though millions of Americans have dogs in their homes — often in their beds — and seem to suffer no ill-effects).  Such dog phobes needn’t worry.  Profit-seeking entrepreneurs will cater to their preferences by creating dog-free spaces.  The rest of us, then, can head down to our canine-friendly pubs and bond with our fellow dog lovers.

As much as I hate to say it, the French are sometimes right.

Here.  The article highlights a paper stressing the role of gang colors as a commitment device that screens for higher-quality criminals.  The mechanism works, the authors contend, because gang colors are a handicap that increases the probability of detection, and thus low-quality criminals are less likely to be able to “afford” wearing them.  Here’s the WSJ description:

Like certain ostentatious displays by males in the animal kingdom, gang colors serve as a handicap, Mell argues: Yes, they make it more likely that the person wearing them will be caught. Yet they semaphore the following message: If I’m still willing to commit crimes when I have this handicap, I must be pretty good at evading the police. Incompetent criminals couldn’t get away with wearing gang colors.

Or from the paper:

In our model this brazen behavior is a solution to an enforcement problem. The central idea is that less able criminals see lower gains from continued participation in crime because they will be caught and punished more often. Lower future gains imply that reputational concerns will be less effective at enforcing honesty. Only dealing with brazen criminals will become a good way to avoid dealing with incompetent criminals, because they cannot afford to mimic the brazen behavior. The principle is similar to the selection for a handicap in evolutionary biology.

Interesting stuff.  The authors’ more general research question involves actions that appear to increase criminals’ probability of detection.  With respect to the specific example of gang colors, my initial reaction is that I’m skeptical that this mechanism is the dominant explanation, given the widespread use of colors among teenagers and others who are unlikely to be highly skilled criminals, and given the other steps gang members take to reduce the probability of detection (e.g., wearing masks to prevent identification, using “community” guns that reduce the police’s ability to attribute ownership to any individual member of the gang, etc.).  Instead, I suspect that the open display of gang membership through colors is a combination of signaling status and a commitment to bear the costs of actions taken by the rest of the gang, which weeds out the non-loyal by forcing prospective members to get some “skin in the game.”
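For readers who want to see the separating logic in miniature, here is a toy numerical sketch.  Every number in it (per-crime payoffs, catch probabilities, the outside option) is an assumption of mine, not a parameter from the paper; it only illustrates why a detection-raising “handicap” can be affordable to skilled criminals but not to unskilled ones:

    # Toy illustration of the handicap/separating logic described in the excerpt.
    # All numbers are hypothetical and chosen only to make the point visible.

    def career_value(skill, wears_colors, payoff_per_crime=10.0,
                     base_catch_prob=0.5, colors_penalty=0.2, periods=10):
        """Expected value of a criminal career that ends the first time the
        criminal is caught.  Skill lowers the per-period catch probability;
        wearing gang colors raises it."""
        catch = min(1.0, max(0.0, base_catch_prob - skill
                             + (colors_penalty if wears_colors else 0.0)))
        survive = 1.0 - catch
        value, alive = 0.0, 1.0
        for _ in range(periods):
            value += alive * survive * payoff_per_crime
            alive *= survive
        return value

    OUTSIDE_OPTION = 10.0  # assumed value of going straight instead

    for label, skill in (("low-skill", 0.1), ("high-skill", 0.4)):
        in_colors = career_value(skill, wears_colors=True)
        verdict = "stays in crime" if in_colors > OUTSIDE_OPTION else "cannot afford the handicap"
        print(f"{label}: career value while wearing colors = {in_colors:.1f} -> {verdict}")

With these assumed parameters, only the more skilled type still finds crime worthwhile while wearing the handicap, so colors end up credibly signaling competence, which is the paper’s mechanism in miniature.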