The Federal Trade Commission’s (FTC) June 23 Workshop on Conditional Pricing Practices featured a broad airing of views on loyalty discounts and bundled pricing, popular vertical business practices that recently have caused much ink to be spilled by the antitrust commentariat.  In addition to predictable academic analyses presenting alternative theoretical anticompetitive effects stories, the Workshop commendably included presentations by Benjamin Klein, who offered procompetitive efficiency explanations for loyalty programs, and by Daniel Crane, who stressed the importance of (1) treating discounts hospitably and (2) requiring proof of harmful foreclosure.  On balance, however, the Workshop provided additional fuel for enforcers eager to apply new anticompetitive effects models to bring “problematic” discounting and bundling to heel.

Before U.S. antitrust enforcement agencies launch a new crusade against novel vertical discounting and bundling contracts, however, they may wish to ponder a few salient factors not emphasized in the Workshop.

First, the United States has the most efficient marketing and distribution system in the world, and it has been growing more efficient in recent decades (this is the one part of the American economy that has been a bright spot).  Consumers have benefited from more shopping convenience and higher quality/lower priced offerings due to the advent of  “big box” superstores, Internet sales engines (and e-commerce in general), and other improvements in both on-line and “bricks and mortar” sales methods.

Second, and relatedly, the Supreme Court’s recognition of vertical contractual efficiencies in GTE-Sylvania (1977) ushered in a period of greatly reduced potential liability for vertical restraints, undoubtedly encouraging economically beneficial marketing improvements.  A new government emphasis on investigating and litigating the merits of novel vertical practices (particularly practices that emphasize discounting, which presumptively benefits consumers) could inject costly new uncertainty into the marketing side of business planning, spawn risk aversion, and deter marketing innovations that reduce costs, thereby harming welfare.  These harms would mushroom to the extent courts mistakenly “bought into” new theories and incorrectly struck down efficient practices.

Third, in applying new theories of competitive harm, the antitrust enforcers should be mindful of Ronald Coase’s admonition that “if an economist finds something—a business practice of one sort or other—that he does not understand, he looks for a monopoly explanation.  And as in this field we are very ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on a monopoly explanation, frequent.”  Competition is a discovery procedure.  Entrepreneurial businesses constantly seek improvements not just in productive efficiency, but in distribution and marketing efficiencies, in order to eclipse their rivals.  As such, entrepreneurs may experiment with new contractual forms (such as bundling and loyalty discounts) in an effort to expand their market shares and grow their firms.  Business persons may not know ex ante which particular forms will work.  They may try out alternatives, sticking with those that succeed and discarding those that fail, without necessarily being able to articulate precisely the reasons for success or failure.  Real results in the market, rather than arcane economic theorems, may be expected to drive their decision-making.   Distribution and marketing methods that are successful will be emulated by others and spread.  Seen in this light (and relatedly, in light of transaction cost economics explanations for “non-standard” contracts), widespread adoption of new vertical contractual devices most likely indicates that they are efficient (they improve distribution, and imitation is the sincerest form of flattery), not that they represent some new competitive threat.  Since an economic model almost always can be ginned up to explain why some new practice may reduce consumer welfare in theory, enforcers should instead focus on hard empirical evidence that output and quality have been reduced due to a restraint before acting.  
Unfortunately, the mere threat of costly misbegotten investigations may chill businesses’ interest in experimenting with new and potentially beneficial vertical contractual arrangements, reducing innovation and slowing welfare enhancement (consistent with point two, above).

Fourth, decision theoretic considerations should make enforcers particularly wary of pursuing conditional pricing contract cases.  Consistent with decision theory, optimal antitrust enforcement should adopt an error cost framework that seeks to minimize the sum of the costs attributable to false positives, false negatives, antitrust administrative costs, and disincentive costs imposed on third parties (the latter may also be viewed as a subset of false positives).  Given the significant potential efficiencies flowing from vertical restraints, and the lack of empirical showing that they are harmful, antitrust enforcers should exercise extreme caution in entertaining proposals to challenge new vertical arrangements, such as conditional pricing mechanisms.  In particular, they should carefully assess the cumulative weight of the high risk of false positives in this area, the significant administrative costs that attend investigations and prosecutions, and the disincentives toward efficient business arrangements (see points two and three above).  Taken together, these factors strongly suggest that the aggressive pursuit of conditional pricing practice investigations would flunk a reasonable cost-benefit calculus.
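For readers who prefer to see the error cost framework in operation, here is a toy calculation.  All of the numbers (probabilities and cost magnitudes) are invented for illustration; the point is only to show how the four cost categories aggregate into a single figure on which alternative enforcement postures can be compared.

```python
# Hypothetical error-cost comparison for an enforcement posture.
# All figures are illustrative, not estimates of real-world magnitudes.

def expected_error_cost(p_false_positive, cost_false_positive,
                        p_false_negative, cost_false_negative,
                        admin_cost, disincentive_cost):
    """Expected total cost of an enforcement posture: error costs
    weighted by their probabilities, plus administrative costs and
    disincentive costs imposed on third parties."""
    return (p_false_positive * cost_false_positive
            + p_false_negative * cost_false_negative
            + admin_cost + disincentive_cost)

# Aggressive pursuit of conditional-pricing cases: high false-positive
# risk against mostly efficient practices, plus large chilling effects.
aggressive = expected_error_cost(0.30, 100, 0.05, 40, 10, 25)

# Cautious posture requiring direct evidence of reduced output or
# quality: fewer false positives and lower administrative and
# disincentive costs, at the price of somewhat more false negatives.
cautious = expected_error_cost(0.05, 100, 0.15, 40, 4, 5)

print(aggressive)  # 67.0
print(cautious)    # 20.0
```

On these stipulated numbers the cautious posture dominates – which is the shape of the argument above: when false positives are likely and costly, restraint minimizes the sum of error, administrative, and disincentive costs.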

Fifth, a new U.S. antitrust enforcement crusade against conditional pricing could be used by foreign competition agencies to justify further attacks on efficient vertical practices.  This could add to the harm suffered by companies (including, of course, U.S.-based multinationals) which would be deterred from maintaining and creating new welfare-beneficial distribution methods.  Foreign consumers, of course, would suffer as well.

My caveats should not be read to suggest that the FTC should refrain from pursuing new economic learning on loyalty discounting and bundled pricing, or on other novel business practices.  Nor should it necessarily eschew all enforcement in the vertical restraints area – although that might not be such a bad idea, given error cost and resource constraint issues.  (Vertical restraints that are part of a cartel enforcement scheme should be treated as cartel conduct, and, as such, should be fair game, of course.)  To allocate scarce resources optimally, however, the FTC might benefit from devoting relatively greater attention to the most welfare-inimical competitive abuses – namely, anticompetitive arrangements instigated, shielded, or maintained by government authority.  (Hard core private cartel activity is best left to the Justice Department, which can deploy powerful criminal law tools against such schemes.)

U.S. antitrust law focuses primarily on private anticompetitive restraints, leaving the most serious impediments to a vibrant competitive process – government-initiated restraints – relatively free to flourish.  Thus the Federal Trade Commission (FTC) should be commended for its July 16 congressional testimony that spotlights a fast-growing and particularly pernicious species of (largely state) government restriction on competition – occupational licensing requirements.  Today such practitioners (to name just a few) as cat groomers, flower arrangers, music therapists, tree trimmers, frozen dessert retailers, eyebrow threaders, massage therapists (human and equine), and “shampoo specialists,” in addition to traditionally licensed doctors, lawyers, and accountants, are subject to professional licensure.  Indeed, since the 1950s, the coverage of such rules has risen dramatically, as the percentage of Americans requiring government authorization to do their jobs has grown from less than five percent to roughly 30 percent.

Even though some degree of licensing responds to legitimate health and safety concerns (i.e., no fly-by-night heart surgeons), much occupational regulation creates unnecessary barriers to entry into a host of jobs.  Excessive licensing confers unwarranted benefits on fortunate incumbents, while effectively barring large numbers of capable individuals from the workforce.  (For example, many individuals skilled in natural hair braiding simply cannot afford the 2,100 hours required to obtain a license in Iowa, Nebraska, and South Dakota.)  It also imposes additional economic harms, as the FTC’s testimony explains:  “[Occupational licensure] regulations may lead to higher prices, lower quality services and products, and less convenience for consumers.  In the long term, they can cause lasting damage to competition and the competitive process by rendering markets less responsive to consumer demand and by dampening incentives for innovation in products, services, and business models.”  Licensing requirements are often enacted in tandem with other occupational regulations that unjustifiably limit the scope of beneficial services particular professionals can supply – for instance, a ban on tooth cleaning by dental hygienists not acting under a dentist’s supervision that boosts dentists’ income but denies treatment to poor children who have no access to dentists.

What legal and policy tools are available to chip away at these pernicious and costly laws and regulations, which largely are the fruit of successful special interest lobbying?  The FTC’s competition advocacy program, which responds to requests from legislators and regulators to assess the economic merits of proposed laws and regulations, has focused on unwarranted regulatory restrictions in such licensed professions as real estate brokers, electricians, accountants, lawyers, dentists, dental hygienists, nurses, eye doctors, opticians, and veterinarians.  Retrospective reviews of FTC advocacy efforts suggest it may have helped achieve some notable reforms (for example, 74% of requestors, regulators, and bill sponsors surveyed responded that FTC advocacy initiatives influenced outcomes).  Nevertheless, advocacy’s reach and effectiveness inherently are limited by FTC resource constraints, by the need to obtain “invitations” to submit comments, and by the incentive and ability of licensing scheme beneficiaries to oppose regulatory and legislative reforms.

Former FTC Chairman Kovacic and James Cooper (currently at George Mason University’s Law and Economics Center) have suggested that federal and state antitrust experts could be authorized to have ex ante input into regulatory policy making.  As the authors recognize, however, several factors sharply limit the effectiveness of such an initiative.  In particular, “the political feasibility of this approach at the legislative level is slight,” federal mandates requiring ex ante reviews would raise serious federalism concerns, and resource constraints would loom large.

Antitrust law challenges to anticompetitive licensing schemes likewise offer little solace.  They are limited by the antitrust “state action” doctrine, which shields conduct undertaken pursuant to “clearly articulated” state legislative language that displaces competition – a category that generally will cover anticompetitive licensing requirements.  Even a Supreme Court decision next term (in North Carolina Dental v. FTC) that state regulatory boards dominated by self-interested market participants must be actively supervised to enjoy state action immunity would have relatively little bite.  It would not limit states from issuing simple statutory commands that create unwarranted occupational barriers, nor would it prevent states from implementing “adequate” supervisory schemes that are designed to approve anticompetitive state board rules.

What then is to be done?

Constitutional challenges to unjustifiable licensing strictures may offer the best long-term solution to curbing this regulatory epidemic.  As Clark Neily points out in Terms of Engagement, there is a venerable constitutional tradition of protecting the liberty interest to earn a living, reflected in well-reasoned late 19th and early 20th century “Lochner-era” Supreme Court opinions.  Even if Lochner is not rehabilitated, however, there are a few recent jurisprudential “straws in the wind” that support efforts to rein in “irrational” occupational licensure barriers.  Perhaps acting under divine inspiration, the Fifth Circuit in St. Joseph Abbey (2013) ruled that Louisiana statutes requiring all casket manufacturers to be licensed funeral directors – laws that prevented monks from earning a living by making simple wooden caskets – served no purpose other than to protect the funeral industry, and, as such, violated the 14th Amendment’s Equal Protection and Due Process Clauses.  In particular, the Fifth Circuit held that protectionism, standing alone, is not a legitimate state interest sufficient to establish a “rational basis” for a state statute, and that absent other legitimate state interests, the law must fall.  Since the Sixth and Ninth Circuits also have held that intrastate protectionism standing alone is not a legitimate purpose under rational basis review, while the Tenth Circuit has held to the contrary, the time may soon be ripe for the Supreme Court to review this issue and, hopefully, delegitimize pure economic protectionism.  Such a development would place added pressure on defenders of protectionist occupational licensing schemes.  Other possible avenues for constitutional challenges to protectionist licensing regimes (perhaps, for example, under the Dormant Commerce Clause) also merit exploration, of course.
The Institute for Justice already is performing yeoman’s work in litigating numerous cases involving unjustified licensing and other encroachments on economic liberty; perhaps its example can prove an inspiration for pro bono efforts by others.

Eliminating anticompetitive occupational licensing rules – and, more generally, vindicating economic liberties that too long have been neglected – is obviously a long-term project, and far-reaching reform will not happen in the near term.  Nevertheless, while we the currently living may in the long run be dead (pace Keynes), our posterity will be alive, and we owe it to them to pursue the vindication of economic liberties under the Constitution.

The International Center for Law & Economics (ICLE) and TechFreedom filed two joint comments with the FCC today, explaining why the FCC has no sound legal basis for micromanaging the Internet and why “net neutrality” regulation would actually prove counter-productive for consumers.

The Policy Comments are available here, and the Legal Comments are here. See our previous post, Net Neutrality Regulation Is Bad for Consumers and Probably Illegal, for a distillation of many of the key points made in the comments.

New regulation is unnecessary. “An open Internet and the idea that companies can make special deals for faster access are not mutually exclusive,” said Geoffrey Manne, Executive Director of ICLE. “If the Internet really is ‘open,’ shouldn’t all companies be free to experiment with new technologies, business models and partnerships?”

“The media frenzy around this issue assumes that no one, apart from broadband companies, could possibly question the need for more regulation,” said Berin Szoka, President of TechFreedom. “In fact, increased regulation of the Internet will incite endless litigation, which will slow both investment and innovation, thus harming consumers and edge providers.”

Title II would be a disaster. The FCC has proposed re-interpreting the Communications Act to classify broadband ISPs under Title II as common carriers. But reinterpretation might unintentionally ensnare edge providers, weighing them down with onerous regulations. “So-called reclassification risks catching other Internet services in the crossfire,” explained Szoka. “The FCC can’t easily forbear from Title II’s most onerous rules because the agency has set a high bar for justifying forbearance. Rationalizing a changed approach would be legally and politically difficult. The FCC would have to simultaneously find the broadband market competitive enough to forbear, yet fragile enough to require net neutrality rules. It would take years to sort out this mess — essentially hitting the pause button on better broadband.”

Section 706 is not a viable option. In 2010, the FCC claimed Section 706 as an independent grant of authority to regulate any form of “communications” not directly barred by the Act, provided only that the Commission assert that regulation would somehow promote broadband. “This is an absurd interpretation,” said Szoka. “This could allow the FCC to essentially invent a new Communications Act as it goes, regulating not just broadband, but edge companies like Google and Facebook, too, and not just neutrality but copyright, cybersecurity and more. The courts will eventually strike down this theory.”

A better approach. “The best policy would be to maintain the ‘Hands off the Net’ approach that has otherwise prevailed for 20 years,” said Manne. “That means a general presumption that innovative business models and other forms of ‘prioritization’ are legal. Innovation could thrive, and regulators could still keep a watchful eye, intervening only where there is clear evidence of actual harm, not just abstract fears.” “If the FCC thinks it can justify regulating the Internet, it should ask Congress to grant such authority through legislation,” added Szoka. “A new communications act is long overdue anyway. The FCC could also convene a multistakeholder process to produce a code enforceable by the Federal Trade Commission,” he continued, noting that the White House has endorsed such processes for setting Internet policy in general.

Manne concluded: “The FCC should focus on doing what Section 706 actually commands: clearing barriers to broadband deployment. Unleashing more investment and competition, not writing more regulation, is the best way to keep the Internet open, innovative and free.”

For some of our other work on net neutrality, see:

“Understanding Net(flix) Neutrality,” an op-ed by Geoffrey Manne in the Detroit News on Netflix’s strategy to confuse interconnection costs with neutrality issues.

“The Feds Lost on Net Neutrality, But Won Control of the Internet,” an op-ed by Berin Szoka and Geoffrey Manne in Wired.com.

“That startup investors’ letter on net neutrality is a revealing look at what the debate is really about,” a post by Geoffrey Manne in Truth on the Market.

“Bipartisan Consensus: Rewrite of ’96 Telecom Act is Long Overdue,” a post on TF’s blog highlighting the key points from TechFreedom and ICLE’s joint comments on updating the Communications Act.

The Net Neutrality Comments are available here:

ICLE/TF Net Neutrality Policy Comments

TF/ICLE Net Neutrality Legal Comments

With Berin Szoka.

TechFreedom and the International Center for Law & Economics will shortly file two joint comments with the FCC, explaining why the FCC has no sound legal basis for micromanaging the Internet—now called “net neutrality regulation”—and why such regulation would be counter-productive as a policy matter. The following summarizes some of the key points from both sets of comments.

No one’s against an open Internet. The notion that anyone can put up a virtual shingle—and that the good ideas will rise to the top—is a bedrock principle with broad support; it has made the Internet essential to modern life. Key to Internet openness is the freedom to innovate. An open Internet and the idea that companies can make special deals for faster access are not mutually exclusive. If the Internet really is “open,” shouldn’t all companies be free to experiment with new technologies, business models and partnerships? Shouldn’t the FCC allow companies to experiment in building the unknown—and unknowable—Internet of the future?

The best approach would be to maintain the “Hands off the Net” approach that has otherwise prevailed for 20 years. That means a general presumption that innovative business models and other forms of “prioritization” are legal. Innovation could thrive, and regulators could still keep a watchful eye, intervening only where there is clear evidence of actual harm, not just abstract fears. And they should start with existing legal tools—like antitrust and consumer protection laws—before imposing prior restraints on innovation.

But net neutrality regulation hurts more than it helps. Counterintuitively, a blanket rule that ISPs treat data equally could actually harm consumers. Consider the innovative business models ISPs are introducing. T-Mobile’s unRadio lets users listen to all the on-demand music and radio they want without taking a hit against their monthly data plan. Yet so-called consumer advocates insist that’s a bad thing because it favors some content providers over others. In fact, “prioritizing” one service when there is congestion frees up data for subscribers to consume even more content—from whatever source. You know regulation may be out of control when a company is demonized for offering its users a freebie.

Treating each bit of data neutrally ignores the reality of how the Internet is designed, and how consumers use it.  Net neutrality proponents insist that all Internet content must be available to consumers neutrally, whether those consumers (or content providers) want it or not. They also argue against usage-based pricing. Together, these restrictions force all users to bear the costs of access for other users’ requests, regardless of who actually consumes the content, as the FCC itself has recognized:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks.
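The cross-subsidy the FCC describes is simple arithmetic.  A stylized sketch (with invented usage figures, and assuming network cost scales with data consumed) makes it concrete:

```python
# Stylized cross-subsidy under flat-rate pricing.  All numbers are
# invented; network cost is assumed proportional to data consumed.
usage_gb = {"light_user": 10, "heavy_user": 290}   # monthly usage in GB
cost_per_gb = 0.10                                 # network cost per GB

total_cost = sum(usage_gb.values()) * cost_per_gb  # 30.0

# Flat pricing: every subscriber pays an equal share of total cost.
flat_bill = total_cost / len(usage_gb)             # 15.0 each

# Usage-based pricing: each subscriber pays for what they consume.
usage_bill = {user: gb * cost_per_gb for user, gb in usage_gb.items()}

# Under the flat rate, the light user overpays by exactly the amount
# the heavy user underpays -- the subsidy the FCC describes.
subsidy = flat_bill - usage_bill["light_user"]
print(subsidy)  # 14.0
```

Banning usage-based pricing forces the light user’s bill from $1 up to $15, which is precisely the “lighter end users subsidize heavier end users” point in the quoted passage.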

The rules that net neutrality advocates want would hurt startups as well as consumers. Imagine a new entrant, clamoring for market share. Without the budget for a major advertising blitz, the archetypical “next Netflix” might never get the exposure it needs to thrive. But for a relatively small fee, the startup could sign up to participate in a sponsored data program, with its content featured and its customers’ data usage exempted from their data plans. This common business strategy could mean the difference between success and failure for a startup. Yet it would be prohibited by net neutrality rules banning paid prioritization.

The FCC lacks sound legal authority. The FCC is essentially proposing to do what can only properly be done by Congress: invent a new legal regime for broadband. Each of the options the FCC proposes to justify this—Section 706 of the Telecommunications Act and common carrier classification—is deeply problematic.

First, Section 706 isn’t sustainable. Until 2010, the FCC understood Section 706 as a directive to use its other grants of authority to promote broadband deployment. But in its zeal to regulate net neutrality, the FCC reversed itself in 2010, claiming Section 706 as an independent grant of authority. This would allow the FCC to regulate any form of “communications” in any way not directly barred by the Act — not just broadband but “edge” companies like Google and Facebook. This might mean going beyond neutrality to regulate copyright, cybersecurity and more. The FCC need only assert that regulation would somehow promote broadband.

If Section 706 is a grant of authority, it’s almost certainly a power to deregulate. But even if its power is as broad as the FCC claims, the FCC still hasn’t made the case that, on balance, its proposed regulations would actually do what it asserts: promote broadband. The FCC has stubbornly refused to conduct serious economic analysis on the net effects of its neutrality rules.

And Title II would be a disaster. The FCC has asked whether Title II of the Act, which governs “common carriers” like the old monopoly telephone system, is a workable option. It isn’t.

In the first place, regulations that impose design limitations meant for single-function networks simply aren’t appropriate for the constantly evolving Internet. Moreover, if the FCC re-interprets the Communications Act to classify broadband ISPs as common carriers, it risks catching other Internet services in the cross-fire, inadvertently making them common carriers, too. Surely net neutrality proponents can appreciate the harmful effects of treating Skype as a common carrier.

Forbearance can’t clean up the Title II mess. In theory the FCC could “forbear” from Title II’s most onerous rules, promising not to apply them when it determines there’s enough competition in a market to make the rules unnecessary. But the agency has set a high bar for justifying forbearance.

Most recently, in 2012, the Commission refused to grant Qwest forbearance even in the highly competitive telephony market, disregarding competition from wireless providers, and concluding that a cable-telco “duopoly” is inadequate to protect consumers. It’s unclear how the FCC could justify reaching the opposite conclusion about the broadband market—simultaneously finding it competitive enough to forbear, yet fragile enough to require net neutrality rules. Such contradictions would be difficult to explain, even if the FCC generally gets discretion on changing its approach.

But there is another path forward. If the FCC can really make the case for regulation, it should go to Congress, armed with the kind of independent economic and technical expert studies Commissioner Pai has urged, and ask for new authority. A new Communications Act is long overdue anyway. In the meantime, the FCC could convene the kind of multistakeholder process generally endorsed by the White House to produce a code enforceable by the Federal Trade Commission. A consensus is possible — just not inside the FCC, where the policy questions can’t be separated from the intractable legal questions.

Meanwhile, the FCC should focus on doing what Section 706 actually demands: clearing barriers to broadband deployment and competition. The 2010 National Broadband Plan laid out an ambitious pro-deployment agenda. It’s just too bad the FCC was so obsessed with net neutrality that it didn’t focus on the plan. Unleashing more investment and competition, not writing more regulation, is the best way to keep the Internet open, innovative and free.

[Cross-posted at TechFreedom.]

There were several letters in today’s Wall Street Journal commenting on my recent op-ed with my son Joe on second best arguments for various forms of crony capitalism.

Overall, these letters are critical of our position, but I do not disagree with them.  Our original article was at best a weak defense, with terms like “may be” and “a second-best world is messy” and “there may be better ways.”  The letters are basically amplifying these caveats, and I do not disagree that the second-best alternatives Joe and I proposed were flawed.  Clearly, as the letter writers indicate, a first-best world with no inefficient regulation would be better.  Would that we could get there.

But it does seem odd to single out the Export-Import Bank as the main target in the effort to reduce crony capitalism.  The Bank makes money in an accounting sense, but it loses money in terms of the risk-adjusted cost of capital, a cost estimated at $2 billion.  The ethanol program and the import-restriction policies of the Commerce Department and the International Trade Commission are much more costly, and would make better targets for deregulation.

There is another political point.  Those of us in favor of free markets are fond of pointing out that being pro-market is not the same as being pro-business, and may point to opposition to the Ex-Im Bank as an example.  But while being pro-market is not the same as being  pro-business, it is also true that business is one of the major forces generally advocating freer markets and decreased regulation.  There is a cost to antagonizing a major ally in the fight against inefficient rules.


Today is the last day for public comment on the Federal Communications Commission’s latest net neutrality proposal.  Here are two excellent op-eds on the matter, one by former FCC Commissioner Robert McDowell and the other by Tom Hazlett and TOTM’s own Josh Wright.  Hopefully, the Commission will take to heart the pithy observation of one of my law school friends, Commissioner Ajit Pai:  “The Internet was free and open before the FCC adopted net neutrality rules. It remains free and open today. Net neutrality has always been a solution in search of a problem.”

The Times seems to specialize in stories that use lots of economics but still miss the important points.  Two examples from today: stories about Uber and about the dispute between Amazon and Hachette.

UBER:  The article describes Uber’s use of price changes to measure the elasticity of demand, and more or less gets it right.  But it goes on to discuss the competition between Uber and Lyft, on one side, and taxi companies, on the other.  What is not mentioned is that taxis are greatly handicapped in this fight because of their own sins.  They have lobbied for price fixing and supply limitation, thus creating the very market opening that Uber is entering.  It is quite plausible that if the taxi market were a free-entry, free-price market, there would be no demand for firms such as Uber.  It will be interesting to see how Uber does in cities such as Washington, D.C., with relatively free entry into the taxi market, compared with New York City, with its highly restrictive rules.
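The elasticity estimation the article describes amounts to the standard arc (midpoint) elasticity calculation.  A minimal sketch, using hypothetical fare and ride numbers:

```python
# Arc (midpoint) elasticity of demand from a price experiment.
# The fare and ride figures are hypothetical.

def arc_elasticity(p0, q0, p1, q1):
    """Midpoint formula: percent change in quantity divided by
    percent change in price, each measured against the midpoint."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Suppose surge pricing raises fares from $10 to $15 and requested
# rides fall from 1000 to 800 in the affected area.
e = arc_elasticity(10, 1000, 15, 800)
print(round(e, 2))  # -0.56
```

An elasticity between 0 and -1 means demand is inelastic over this range, so the price increase raises revenue – the kind of inference Uber can draw by running the experiment repeatedly.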

The article also misses another point.  It discusses an agreement recently signed by Uber that limits “surge” pricing in times of disaster.  But what is not mentioned is the effect of this restriction in reducing supply and increasing demand during the very times when transportation services are most needed.  While we economists have won some public relations battles, we have not weaned the public away from its hatred of “price gouging.”


AMAZON: The story about the Amazon-Hachette dispute is interesting.  But again, some of the key economics is missing. 

Traditional publishers serve two purposes: They organize the physical publishing of books, and they certify quality.  Neither of these functions is needed any more in a world of ebooks.  For ebooks, there is no need for physical publishing, and reader comments are a good substitute for quality certification, at least for fiction.  Amazon provides other services to help inform consumers about books that might be of interest.

Moreover, authors should have a natural affinity with ebook publishers.  For physical books, there is a conflict between authors and publishers.  Authors are paid a royalty based on dollar volume, so they want a price that maximizes revenue.  All of the author’s costs are fixed costs.  Publishers have the marginal cost of actually printing and distributing the book, so their goal is to maximize profit, revenue minus cost.  When costs are positive, the profit maximizing price (MR=MC) is greater than the revenue maximizing price (MR=0), so authors traditionally think that publishers have overpriced their books.  This conflict does not exist for ebooks (marginal cost is zero) so Amazon and authors both want the revenue maximizing price.  As a result, I predict that in the long term Amazon will win because it will have a comparative advantage in dealing with authors. 
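The pricing conflict can be checked with a simple linear-demand example (the demand curve and cost figures are, of course, hypothetical):

```python
# Revenue-maximizing vs profit-maximizing price under linear demand
# Q = a - b*P.  All parameter values are illustrative.

def optimal_prices(a, b, mc):
    """For Q = a - b*P: revenue R = P*(a - b*P) is maximized where
    MR = 0, i.e. P = a/(2b).  Profit is maximized where MR = MC,
    i.e. P = (a + b*mc)/(2b) = a/(2b) + mc/2."""
    p_revenue = a / (2 * b)
    p_profit = (a + b * mc) / (2 * b)
    return p_revenue, p_profit

# Physical book: positive marginal printing/distribution cost, so the
# publisher's preferred price exceeds the author's preferred price.
p_rev, p_prof = optimal_prices(a=100, b=2, mc=10)
print(p_rev, p_prof)  # 25.0 30.0

# Ebook: marginal cost of roughly zero, so the two prices coincide.
print(optimal_prices(a=100, b=2, mc=0))  # (25.0, 25.0)
```

With positive marginal cost the profit-maximizing price exceeds the revenue-maximizing price; at zero marginal cost the two coincide – exactly the alignment of interests between Amazon and authors described above.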

Last Monday, a group of nineteen scholars of antitrust law and economics, including yours truly, urged the U.S. Court of Appeals for the Eleventh Circuit to reverse the Federal Trade Commission’s recent McWane ruling.

McWane, the largest seller of domestically produced iron pipe fittings (DIPF), would sell its products only to distributors that “fully supported” its fittings by carrying them exclusively.  There were two exceptions: where McWane products were not readily available, and where the distributor purchased a McWane rival’s pipe along with its fittings.  A majority of the FTC ruled that McWane’s policy constituted illegal exclusive dealing.

Commissioner Josh Wright agreed that the policy amounted to exclusive dealing, but he concluded that complaint counsel had failed to prove that the exclusive dealing constituted unreasonably exclusionary conduct in violation of Sherman Act Section 2.  Commissioner Wright emphasized that complaint counsel had produced no direct evidence of anticompetitive harm (i.e., an actual increase in prices or decrease in output), even though McWane’s conduct had already run its course.  Indeed, the direct evidence suggested an absence of anticompetitive effect, as McWane’s chief rival, Star, grew in market share at exactly the same rate during and after the time of McWane’s exclusive dealing.

Instead of focusing on direct evidence of competitive effect, complaint counsel pointed to a theoretical anticompetitive harm: that McWane’s exclusive dealing may have usurped so many sales from Star that Star could not achieve minimum efficient scale.  The only evidence as to what constitutes minimum efficient scale in the industry, though, was Star’s self-serving statement that it would have had lower average costs had it operated at a scale sufficient to warrant ownership of its own foundry.  As Commissioner Wright observed, evidence in the record showed that other pipe fitting producers had successfully entered the market and grown market share substantially without owning their own foundry.  Thus, actual market experience seemed to undermine Star’s self-serving testimony.

Commissioner Wright also observed that complaint counsel produced no evidence showing what percentage of McWane’s sales of DIPF might have gone to other sellers absent McWane’s exclusive dealing policy.  Only those “contestable” sales – not all of McWane’s sales to distributors subject to the full support policy – should be deemed foreclosed by McWane’s exclusive dealing.  Complaint counsel also failed to quantify sales made to McWane’s rivals under the generous exceptions to its policy.  These deficiencies prevented complaint counsel from adequately establishing the degree of market foreclosure caused by McWane’s policy – the first (but not last!) step in establishing the alleged anticompetitive harm.

In our amicus brief, we antitrust scholars take Commissioner Wright’s side on these matters.  We also observe that the Commission failed to account for an important procompetitive benefit of McWane’s policy:  it prevented rival DIPF sellers from “cherry-picking” the most popular, highest margin fittings and selling only those at prices that could be lower than McWane’s because the cherry-pickers didn’t bear the costs of producing the full line of fittings.  Such cherry-picking is a form of free-riding because every producer’s fittings are more highly valued if a full line is available.  McWane’s policy prevented the sort of free-riding that would have made its production of a full line uneconomical.

In short, the FTC’s decision made it far too easy to successfully challenge exclusive dealing arrangements, which are usually procompetitive, and calls into question all sorts of procompetitive full-line forcing arrangements.  Hopefully, the Eleventh Circuit will correct the Commission’s mistake.

Other professors signing the brief include:

  • Tom Arthur, Emory Law
  • Roger Blair, Florida Business
  • Don Boudreaux, George Mason Economics (and Café Hayek)
  • Henry Butler, George Mason Law
  • Dan Crane, Michigan Law (and occasional TOTM contributor)
  • Richard Epstein, NYU and Chicago Law
  • Ken Elzinga, Virginia Economics
  • Damien Geradin, George Mason Law
  • Gus Hurwitz, Nebraska Law (and TOTM)
  • Keith Hylton, Boston University Law
  • Geoff Manne, International Center for Law and Economics (and TOTM)
  • Fred McChesney, Miami Law
  • Tom Morgan, George Washington Law
  • Barak Orbach, Arizona Law
  • Bill Page, Florida Law
  • Paul Rubin, Emory Economics (and TOTM)
  • Mike Sykuta, Missouri Economics (and TOTM)
  • Todd Zywicki, George Mason Law (and Volokh Conspiracy)

The brief’s “Summary of Argument” follows the jump.

As another Israeli-Muslim armed conflict begins, it is instructive to consider the lethality of previous conflicts.  The best estimate is that about 35,000 Muslims have been killed in all of the Israel-Muslim conflicts since 1948.  During that same period, about 10,000,000 Muslims have been killed by other Muslims.  The Arab-Israeli conflict overall is the 49th deadliest conflict since 1950.  Of the total 85,000,000 deaths in that period, the Israeli-Arab conflict is responsible for 0.06 percent of all fatalities.  It is hard to understand why this conflict generates so much attention and news.  It is also hard to understand why so many blame Israel for killing Muslims when many, many more Muslims have killed each other.  (All data are from Daniel Pipes’s website.)

Debates among modern antitrust experts focus primarily on the appropriate indicia of anticompetitive behavior, the particular methodologies that should be applied in assessing such conduct, and the best combination and calibration of antitrust sanctions (fines, jail terms, injunctive relief, cease and desist orders).  Given a broad consensus that antitrust rules should promote consumer welfare (albeit some disagreement about the meaning of the term), discussions tend (not surprisingly) to emphasize the welfare effects of particular practices (and, relatedly, appropriate analytic techniques and procedural rules).  Less attention tends to be paid, however, to whether the overall structure of enforcement policy enhances welfare.

Assuming that one views modern antitrust enforcement as an exercise in consumer welfare maximization, what does that tell us about optimal antitrust enforcement policy design?  In order to maximize welfare, enforcers must have an understanding of – and seek to maximize the difference between – the aggregate costs and benefits that are likely to flow from their policies.  It therefore follows that cost-benefit analysis should be applied to antitrust enforcement design.  Specifically, antitrust enforcers first should ensure that the rules they promulgate create net welfare benefits.  Next, they should (to the extent possible) seek to calibrate those rules so as to maximize net welfare.  (Significantly, Federal Trade Commissioner Josh Wright also has highlighted the merits of utilizing cost-benefit analysis in the work of the FTC.)

Importantly, while antitrust analysis is different in nature from agency regulation, cost-benefit analysis also has been the centerpiece of Executive Branch regulatory review since the Reagan Administration, winning bipartisan acceptance.  (Cass Sunstein has termed it “part of the informal constitution of the U.S. regulatory state.”)  Indeed, an examination of general Executive Branch guidance on cost-benefit regulatory assessments, and, in particular, on the evaluation of old policies, is quite instructive.  As stated by the Obama Administration in the context of Office of Management and Budget regulatory review, pursuant to Executive Order 13563, retrospective analysis allows an agency to identify “rules that may be outmoded, ineffective, insufficient, or excessively burdensome, and to modify, streamline, expand, or repeal them in accordance with what has been learned.”  Although Justice Department and FTC antitrust policy formulation is not covered by this Executive Order, its principled focus on assessments of preexisting as well as proposed regulations should remind federal antitrust enforcers that scrutinizing the actual effects of past enforcement initiatives is key to improving antitrust enforcement policy.  (Commendably, FTC Chairwoman Edith Ramirez and former FTC Chairman William Kovacic have emphasized the value of retrospective reviews.)

What should underlie cost-benefit analysis of antitrust enforcement policy?  The best approach is an error cost (decision theoretic) framework, which tends toward welfare maximization by seeking to minimize the sum of the costs attributable to false positives, false negatives, antitrust administrative costs, and disincentive costs imposed on third parties (the latter may also be viewed as a subset of false positives).  Josh Wright has provided an excellent treatment of this topic in touting the merits of evidence-based antitrust enforcement.  As Wright points out, such an approach places a premium on hard evidence of actual anticompetitive harm and empirical analysis, rather than mere theorizing about anticompetitive harm (which too often may lead to a misidentification of novel yet efficient business practices).
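The error cost framework amounts to choosing the enforcement rule that minimizes expected total cost.  A minimal sketch, with probabilities and cost figures that are hypothetical and chosen purely for illustration:

```python
def expected_error_cost(p_fp, c_fp, p_fn, c_fn, admin, disincentive):
    """Expected total cost of an enforcement rule: expected false-positive
    cost, expected false-negative cost, administrative cost, and third-party
    disincentive cost (the last may be viewed as a subset of false positives)."""
    return p_fp * c_fp + p_fn * c_fn + admin + disincentive

# A stricter rule catches more anticompetitive conduct (fewer false
# negatives) but condemns more efficient conduct (more false positives)
# and costs more to administer.  Welfare maximization favors whichever
# rule yields the lower total:
lenient = expected_error_cost(0.125, 80, 0.25, 40, admin=10, disincentive=5)
strict = expected_error_cost(0.5, 80, 0.125, 40, admin=20, disincentive=15)
print(lenient)  # 35.0
print(strict)   # 80.0
```

Under these (made-up) numbers the more lenient rule minimizes total expected cost, illustrating why the framework can counsel against intervention even when some anticompetitive conduct would escape condemnation.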

How should antitrust enforcers implement an error cost framework in establishing enforcement policy protocols?  Below I suggest eight principles that, I submit, would align antitrust enforcement policy much more closely with an error cost-based, cost-benefit approach.  These suggestions are preliminary, tentative thoughts, put forth solely to stimulate future debate.  I fully recognize that convincing public officials to implement a cost-benefit framework for antitrust enforcement (which inherently limits bureaucratic discretion) will be exceedingly difficult, to say the least.  Generating support for such an approach is a long-term project.  It must proceed in light of the political economy of antitrust and, more specifically, the institutional structure of antitrust enforcement (which Dan Crane has addressed in impressive fashion), topics that merit separate exploration.

First, antitrust enforcers should seek to identify and expound simple rules they will follow in both case selection and evaluation of business conduct, in order to rein in administrative costs.

Second, borrowing from Frank Easterbrook, they should place a greater emphasis on avoiding false positives than false negatives, particularly in the area of unilateral conduct (since false positives may send cautionary signals to third-party businesses that the market cannot easily correct).

Third, they should pursue cases based on hard, empirically based indications of likely anticompetitive harm, rather than on theoretical constructs that are difficult to verify.

Fourth, they should avoid behavioral remedies in merger cases (and, indeed, other cases) to the greatest extent possible, given inherent problems of monitoring and administration posed by such requirements.  (See the trenchant critique of merger behavioral remedies by John Kwoka and Diana Moss.)

Fifth, they should emphasize giving full consideration to efficiencies (including dynamic efficiencies), given their importance to innovation and economic welfare gains.

Sixth, they should announce their positions in public pronouncements and guidelines that are as simple and straightforward as possible.  Agency guidance should be “tweaked” in light of compelling new empirical evidence, but “pendulum swing” changes should be minimized to avoid costly uncertainty.

Seventh, in non per se matters, they should pledge to bring cases only when (1) they have substantial evidence for the facts on which they rely and (2) reasoning from those facts makes their prediction of harm to future competition more plausible than the defendant’s alternative account of the future.  (Doug Ginsburg and Josh Wright recommend that such a standard be applied to judicial review of antitrust enforcement actions.)

Eighth, in the area of cartel conduct, they should adjust leniency and other enforcement policies based on the latest empirical findings and economic theory, seeking to pursue optimal detection and deterrence in light of “real world” evidence (see, for example, Greg Werden, Scott Hammond, and Belinda Barnett).

Admittedly, these suggestions bear little resemblance to recent federal antitrust enforcement initiatives.  Indeed, Obama Administration antitrust enforcers appear to me to have been moving farther away from an approach rooted in cost-benefit analysis.  The 2010 Horizontal Merger Guidelines, although more sophisticated than prior versions, give relatively short shrift to efficiencies (as Josh Wright has pointed out).  The Obama Justice Department’s withdrawal in 2009 of its predecessor’s Sherman Act Section Two Report (which had emphasized error costs and proposed simple rules for assessing monopolization cases) highlighted a desire for “aggressive enforcement,” without providing specific guidance for the private sector.  More generally, an assessment by William Shughart and Diana Thomas of antitrust enforcement in the Obama Administration’s first term concluded that antitrust agency activity had moved away from structural remedies and toward intrusive behavioral remedies “in an unprecedented fashion,” yielding suboptimal regulation – a far cry from cost-beneficial norms.

One may only hope (which after all makes “all the difference in the world”) that Federal Trade Commission and Justice Department officials, inspired by their teams of highly qualified economists, may consider according greater weight to cost-benefit considerations and error cost approaches as they move forward.