Archives For decision theory

The American Bar Association’s (ABA) “Antitrust in Asia:  China” Conference, held in Beijing May 21-23 (with Chinese Government and academic support), cast a spotlight on the growing economic importance of China’s six-year-old Anti-Monopoly Law (AML).  The Conference brought together 250 antitrust practitioners and government officials to discuss AML enforcement policy.  These included the leaders (Directors General) of the three Chinese competition agencies (units within the State Administration for Industry and Commerce (SAIC), the Ministry of Commerce (MOFCOM), and the National Development and Reform Commission (NDRC)), plus senior competition officials from Europe, Asia, and the United States.  This was noteworthy in itself, in that the three Chinese antitrust enforcers seldom appear jointly, let alone with potential foreign critics.  The Chinese agencies conceded that Chinese competition law enforcement is not problem-free and that substantial improvements in the implementation of the AML are warranted.

With the proliferation of international business arrangements subject to AML jurisdiction, multinational companies have a growing stake in the development of economically sound Chinese antitrust enforcement practices.  Achieving such a result is no mean feat, in light of the AML’s explicit inclusion (in Article 27) of industrial policy factors, significant institutional constraints on the independence of the Chinese judiciary, and remaining concerns about the transparency of enforcement policy, despite some progress.  Nevertheless, Chinese competition officials and academics at the Conference repeatedly emphasized the growing importance of competition and the need to improve Chinese antitrust administration, given the general pro-market tilt of the 18th Communist Party Congress.  (The references to Party guidance illustrate, of course, the continuing dependence of Chinese antitrust enforcement patterns on political forces that are beyond the scope of standard legal and policy analysis.)

While the Conference covered the AML’s application to the standard antitrust enforcement topics (mergers, joint conduct, cartels, unilateral conduct, and private litigation), the treatment of price-related “abuses” and intellectual property (IP) merit particular note.

In a panel dealing with the investigation of price-related conduct by the NDRC (the agency responsible for AML non-merger pricing violations), NDRC Director General Xu Kunlin revealed that the agency is deemphasizing much-criticized large-scale price regulation and price supervision directed at numerous firms, and is focusing more on abuses of dominance, such as allegedly exploitative “excessive” pricing by such firms as InterDigital and Qualcomm.  (Resale price maintenance also remains a source of some interest.)  On May 22, 2014, the second day of the Conference, the NDRC announced that it had suspended its investigation of InterDigital, given that company’s commitment not to charge Chinese companies “discriminatory” high-priced patent licensing fees, not to bundle licenses for non-standard essential patents and “standard essential patents” (see below), and not to litigate to make Chinese companies accept “unreasonable” patent license conditions.  The NDRC also continues to investigate Qualcomm for allegedly charging discriminatorily high patent licensing rates to Chinese customers.  With the world’s largest consumer market and fast-growing manufacturers who license overseas patents, China possesses enormous leverage over these and other foreign patent licensors, who may find it necessary to sacrifice substantial licensing revenues in order to continue operating in China.

The theme of ratcheting down on patent holders’ profits was reiterated in a presentation by SAIC Director General Ren Airong (responsible for AML non-merger enforcement not directly involving price) on a panel discussing abuse of dominance and the antitrust-IP interface.  She revealed that key patents (and, in particular, patents that “read on” and are necessary to practice a standard, or “standard essential patents”) may well be deemed “necessary” or “essential” facilities under the final version of the proposed SAIC IP-Antitrust Guidelines.  In effect, implementation of this requirement would mean that foreign patent holders would have to grant licenses to third parties under unfavorable government-set terms – a recipe for disincentivizing future R&D investments and technological improvements.  Emphasizing this negative effect, co-panelists FTC Commissioner Ohlhausen and I pointed out that the “essential facilities” doctrine has been largely discredited by leading American antitrust scholars.  (In a separate speech, FTC Chairwoman Ramirez also argued against treating patents as essential facilities.)  I added that IP does not possess the “natural monopoly” characteristics of certain physical capital facilities such as an electric grid (declining average variable cost and uneconomic to replicate), and that competitors’ incentives to develop alternative and better technology solutions would be blunted if they were given automatic cheap access to “important” patents.  In short, the benefits of dynamic competition would be undermined by treating patents as essential facilities.  I also noted that, consistent with decision theory, wise competition enforcers should be very cautious before condemning single firm behavior, so as not to chill efficiency-enhancing unilateral conduct.  Director General Ren did not respond to these comments.

If China is to achieve its goal of economic growth driven by innovation, it should seek to avoid legally handicapping technology market transactions by mandating access to, or otherwise restricting returns to, patents.  As recognized in the U.S. Justice Department-Federal Trade Commission 1995 IP-Antitrust Guidelines and 2007 IP-Antitrust Report, allowing the IP holder to seek maximum returns within the scope of its property right advances innovative, welfare-enhancing economic growth.  As China’s rapidly growing stock of IP matures and gains in value, the country hopefully will gain greater appreciation for that insight, and steer its competition policy away from the essential facilities doctrine and other retrograde limitations on IP rights holders that are inimical to long-term innovation and welfare.

As I noted in my prior post, two weeks ago the 13th Annual Conference of the International Competition Network (ICN) released two new sets of recommended best practices.  Having focused on competition assessment in my prior blog entry, I now turn to the ICN’s predatory pricing recommendations.

Aggressive price cutting is the essence of competitive behavior, and the application of antitrust enforcement to price cuts that are mislabeled as “predatory” threatens to chill such competition on the merits and deny consumers the benefits of lower prices.

Fortunately, the U.S. Supreme Court’s 1993 Brooke Group decision appropriately limited antitrust predatory pricing liability to cases where the defendant (1) priced below “an appropriate measure” of its costs and (2) had a “reasonable prospect of recouping” its investment in below cost pricing.  Brooke Group enhanced United States welfare by largely eliminating the risk of unwarranted predatory pricing suits, to the benefit of consumers and producers.  In particular, because courts generally have applied stringent cost measures (such as average variable cost, not the higher average total cost), findings of below cost pricing have been rare.  Consistent with decision theory, there is good reason to believe that whatever increase in antitrust “false negatives” (failure to challenge truly harmful behavior) it engendered has been greatly outweighed by the reduction in false positives (unwarranted challenges to procompetitive behavior).
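
To make the two-prong structure concrete, here is a minimal sketch (my own illustration, with hypothetical numbers; it abstracts from how courts actually select and measure costs):

```python
# Minimal sketch of the Brooke Group two-prong structure.
# "cost_measure" stands in for "an appropriate measure" of the defendant's costs;
# courts have generally used a stringent measure such as average variable cost (AVC)
# rather than the higher average total cost (ATC).

def brooke_group_liability(price, cost_measure, recoupment_likely):
    """Liability requires BOTH (1) pricing below an appropriate measure of cost
    and (2) a reasonable prospect of recouping the investment in low prices."""
    return price < cost_measure and recoupment_likely

# Hypothetical defendant: price $8, AVC $7, ATC $9.
print(brooke_group_liability(8.0, cost_measure=7.0, recoupment_likely=True))  # False: price is above AVC
print(brooke_group_liability(8.0, cost_measure=9.0, recoupment_likely=True))  # True under the laxer ATC screen
```

The choice of cost measure does the work in the hypothetical: under the stringent AVC screen the claim fails at step one, which is why findings of below-cost pricing have been rare.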

The European Union’s test for antitrust predatory pricing is, by contrast, easier to satisfy.  Prices below average variable cost are presumed illegal, prices between average variable cost and average total cost are abusive if part of a plan to eliminate competitors (such prices would not be deemed predatory in the United States), and likelihood of recoupment need not be shown (enforcers presume that parties would not engage in below cost pricing if they did not think it would ultimately be profitable).  Europeans generally have been far more willing to carry out detailed case-specific predatory pricing evaluations, believing that they have the ability to get difficult analyses right.  Given the widespread adoption of the European approach to competition in much of the world, and the benefit for prosecutors of not having to prove recoupment, the European take on predatory pricing has seemed to be in the ascendancy.

Given this background, the ICN’s newly minted Recommended Practices on Predatory Pricing Analysis Pursuant to Unilateral Conduct Laws (RPPP) are a welcome breath of fresh air.  The RPPP are strongly grounded in economics, and they place great stress on the need to obtain solid evidence (rather than rely on mere theory) that predation is occurring in a particular case.  The following RPPP features are particularly helpful:

  • They stress up front the importance of focusing on the benefits of vigorous price competition to consumers;
  • They explain that a predatory strategy is rational only when a firm expects to acquire, maintain, or strengthen market power through its actions, which means that the predator expects not only to recoup its losses sustained during the predatory period, but also to enhance profits by holding its prices above what they otherwise would have been (the sketch following this list illustrates this recoupment arithmetic);
  • They urge that agencies use a sound economically-based theory of harm tied to a relevant market, and determine early on (before running difficult price-cost tests) whether the alleged predator’s prices are likely to cause competitive harm;
  • They advocate basing price-cost tests on the costs of the dominant firm, with concern centering on harm to equally efficient (not less efficient) competitors;
  • They provide an economically sophisticated summary of differences among potential measures of cost;
  • They recognize that to harm competition, low prices must deprive rivals of significant actual or potential sales in at least one market;
  • They stress that low barriers to entry and re-entry in the market render predation unlikely because recoupment is infeasible;
  • They call for examination of evidence relating to the rationale of a pricing strategy to distinguish between low pricing that harms competition and low pricing that reflects healthy competition;
  • They urge that agencies examine objective business justifications and defenses for low prices (such as promotional pricing and achieving scale economies); and
  • They support administrable and clearly communicated enforcement standards (an implicit nod to decision theory), the adoption of safe harbors that can be easily complied with, and agency cooperation early on with the alleged predator to understand the records it keeps and to facilitate price-cost comparisons.
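
A minimal sketch of the recoupment arithmetic flagged in the second bullet, using hypothetical figures of my own: predation pays only if the discounted post-predation profit gains exceed the discounted losses incurred while pricing low.

```python
# Hypothetical recoupment arithmetic: compare the present value of losses incurred
# during the predatory period with the present value of the later profit increment.

def predation_is_rational(loss_per_period, predation_periods,
                          extra_profit_per_period, recoupment_periods,
                          discount_rate):
    pv_losses = sum(loss_per_period / (1 + discount_rate) ** t
                    for t in range(1, predation_periods + 1))
    pv_gains = sum(extra_profit_per_period / (1 + discount_rate) ** t
                   for t in range(predation_periods + 1,
                                  predation_periods + recoupment_periods + 1))
    return pv_gains > pv_losses

# With easy entry and re-entry, the recoupment window is short and predation does not pay:
print(predation_is_rational(100, 4, 120, 2, 0.10))   # False
# With durable market power (a long recoupment window), the same losses can be recouped:
print(predation_is_rational(100, 4, 120, 10, 0.10))  # True
```

This is also the logic behind the entry-barrier point above: if rivals can quickly enter or re-enter, the recoupment window is too short for the strategy to be rational.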

Although the RPPP do not adopt the simple rules embodied in Brooke Group (which in my view would have been the optimal outcome), they reflect throughout a concern for economically rational evidence-based enforcement.  Such enforcement is based on a full appreciation of the welfare benefits of vigorous price competition, the possible procompetitive business justifications for price cutting, and the need for clear enforcement standards and safe harbors.

Overall, the RPPP demonstrate that the ICN remains capable of building consensus support for concise, economically-based antitrust enforcement principles that take into account practical business justifications for certain practices.  As business deals increasingly take on a global dimension, the convergence of predatory pricing norms around a model suggested by the RPPP would be a most welcome, welfare-enhancing development.

Regular readers will know that several of us TOTM bloggers are fans of the “decision-theoretic” approach to antitrust law.  Such an approach, which Josh and Geoff often call an “error cost” approach, recognizes that antitrust liability rules may misfire in two directions:  they may wrongly acquit harmful practices, and they may wrongly convict beneficial (or benign) behavior.  Accordingly, liability rules should be structured to minimize total error costs (welfare losses from condemning good stuff and acquitting bad stuff), while keeping in check the costs of administering the rules (e.g., the costs courts and business planners incur in applying the rules).  The goal, in other words, should be to minimize the sum of decision and error costs.  As I have elsewhere demonstrated, the Roberts Court’s antitrust jurisprudence seems to embrace this sort of approach.
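
In rough notation (the symbols are mine, not drawn from the cases), the objective can be written as choosing the liability rule R that minimizes the sum of administrative costs and expected error costs:

```latex
\min_{R} \; D(R) \;+\; p_{FP}(R)\,C_{FP} \;+\; p_{FN}(R)\,C_{FN}
```

where D(R) is the cost of administering rule R, p_FP and C_FP are the probability and welfare cost of wrongly condemning beneficial (or benign) conduct, and p_FN and C_FN are the probability and welfare cost of wrongly acquitting harmful conduct.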

One of my long-term projects (once I jettison some administrative responsibilities, like co-chairing my school’s dean search committee!) will be to apply the decision-theoretic approach to regulation generally.  I hope to build upon some classic regulatory scholarship, like Alfred Kahn’s Economics of Regulation (1970) and Justice Breyer’s Regulation and Its Reform (1984), to craft a systematic regulatory model that both avoids “regulatory mismatch” (applying the wrong regulatory fix to a particular type of market failure) and incorporates the decision-theoretic perspective. 

In the meantime, I’ve been thinking about insider trading regulation.  Our friend Professor Bainbridge recently invited me to contribute to a volume he’s editing on insider trading.  I’m planning to conduct a decision-theoretic analysis of actual and proposed insider trading regulation.

Such regulation is a terrific candidate for decision-theoretic analysis because stock trading on the basis of material, nonpublic information itself is a “mixed bag” practice:  Some instances of insider trading are, on net, socially beneficial; others create net welfare losses.  Contrast, for example, two famous insider trading cases:

  • In SEC v. Texas Gulf Sulphur, mining company insiders who knew of an unannounced ore discovery purchased stock in their company, knowing that the stock price would rise when the discovery was announced.  Their trading activity caused the stock price to rise over time.  Such price movement might have tipped off landowners in the vicinity of the deposit and caused them not to sell their property to the company (or to do so only at a high price), in which case the traders’ activity would have thwarted a valuable corporate opportunity.  If corporations cannot exploit their discoveries of hidden value (because of insider trading), they’ll be less likely to seek out hidden value in the first place, and social welfare will be reduced.  TGS thus represents “bad” insider trading.  
  • Dirks v. SEC, by contrast, illustrates “good” insider trading.  In that case, an insider tipped a securities analyst that a company was grossly overvalued because of rampant fraud.  The analyst recommended that his clients sell (or buy puts on) the stock of the fraud-ridden corporation.  That trading helped expose the fraud, creating social value in the form of more accurate stock prices.

These are just two examples of how insider trading may reduce or enhance social welfare.  In general, instances of insider trading may reduce social welfare by preventing firms from exploiting and thus creating valuable information (as in TGS), by creating incentives for deliberate mismanagement (because insiders can benefit from “bad news” and might therefore be encouraged to “create” it), and perhaps by limiting stock market liquidity or reducing market efficiency by increasing bid-ask spreads.  On the other hand, instances of insider trading may enhance social welfare by making stock markets more efficient (so that prices better reflect firms’ expected profitability and capital is more appropriately channeled), by reducing firms’ compensation costs (as the right to engage in insider trading replaces managers’ cash compensation—on this point, see the excellent work by our former blog colleague, Todd Henderson), and by reducing the corporate mismanagement and subsequent wealth destruction that comes from stock mispricing (mainly overvaluation of equity—see work by Michael Jensen and yours truly).

Because insider trading is sometimes good and sometimes bad, rules restricting it may err in two directions:  they may acquit/encourage bad instances, or they may condemn/prevent good instances.  In either case, social welfare suffers.  Accordingly, the optimal regulatory regime would seek to minimize the sum of losses from improper condemnations and improper acquittals (total error costs), while keeping administrative costs in check.

My contribution to Prof. Bainbridge’s insider trading book will employ decision theory to evaluate three actual or proposed approaches to regulating insider trading:  (1) the “level playing field” paradigm, apparently favored by many prosecutors and securities regulators, which would condemn any stock trading on the basis of material, nonpublic information; (2) the legal status quo, which deems “fraudulent” any insider trading where the trader owes either a fiduciary duty to his trading partner or a duty of trust or confidence to the source of his nonpublic information; and (3) a laissez-faire, “contractarian” approach, which would permit corporations and sources of nonpublic information to posit their own rules about when insiders and informed outsiders may trade on the basis of material, nonpublic information.  I’ll then propose a fourth disclosure-based alternative aimed at maximizing social welfare by enhancing the social benefits and reducing the social costs of insider trading, while keeping decision costs in check. 

Stay tuned…I’ll be trying out a few of the paper’s ideas on TOTM.  I look forward to hearing our informed readers’ thoughts.

What will legal education be like in the significantly deregulated world I’ve predicted in prior posts?

I gave some thought to this question in my recent paper, Practicing Theory. There I pointed out that law schools, and particularly law faculty, have benefited from the same regulation that has benefited lawyers.  Although lawyers now complain that legal education is insufficiently “practical,” they have only themselves to blame for any deficiencies.  The legal profession established law school accreditation as a costly barrier to entry, and then effectively delegated control over what was taught behind those regulatory walls to the law schools themselves.

I also argued in the above paper that the debate over the content of legal education in a deregulated world is not the one that we seem to be having — that is, between “practice” and “theory.”  When deregulation comes, the market will control content.  It’s far from clear that the market will demand that lawyers keep doing what they’ve been doing, which is what lawyers mean by “practice.”  It follows that law schools should not necessarily train students to do what lawyers are doing right now.  New lawyers’ roles will require new types of education.

My article outlines some future roles of lawyers, and how law school can help train for these roles.

Lawyers as collaborators:   In the new world of legal services, the more menial tasks will be done by machines or non-professionals, leaving lawyers for the more sophisticated stuff.  This will require collaborations across the physical and social sciences.  For example, lawyers might work with psychologists to incorporate the tools of behavioral psychology into creating and applying consumer, securities, and other regulation. Legal experts also will have to learn to work with (or be) computer engineers to produce the powerful technological tools I’ve discussed in previous posts.

The lawyer as manufacturer: Lawyers will not simply be applying old cases to new fact situations to advise clients what they should do. Rather, they will be designing the products discussed in previous posts such as contracts and compliance devices.  As designers they will need to delve into basic theories of contract production, deterrence and the like.  While automation handles many legal tasks, designing the tools for these tasks will require experts who understand their basic architecture.

The lawyer as lawmaker: Lawyers, freed from simply applying the law, may be increasingly involved in designing it.  This entails an understanding of how and why laws, constitutions and procedural systems work.  The theory taught in law schools, including economics, philosophy, history and comparative law, was often not very relevant to routine law practice.  When software and low-paid workers take over those tasks, the legal experts who remain will need this theory.

The lawyer as information engineer: Lawyers and scholars might be able to use data to predict the future.  But to do that they will need theories from such fields as economics, psychology, sociology, decision theory, and political science to construct the models that make sense out of the raw data.  This work also provides another reason why lawyers will need to learn how to collaborate with (or be) computer scientists.

The lawyer as capitalist: Lawyers can make a lot of money in the capital markets from being able to predict legal outcomes that determine asset values. The demand for this expertise could increase the demand in law schools for training in securities and finance law. It also could refocus the study of such basic areas as contract, property, and tort law from advising and litigating to handicapping the results of litigation.

Global legal education: Legal educators increasingly cater to law students from outside the United States. They therefore need to focus on the basic principles of American common law and system of government.

Private meets public law: The theories legal experts will need to learn as they move from applying existing law to creating new legal structures will have to meet market demands rather than educators’ preferences. While legal experts may no longer be able to ignore such fields as constitutional and administrative law, they will have to take with them into these fields the tools and lessons of private ordering and markets.

Educating business lawyers: Many legal experts will move directly into businesses.  But in-house lawyers’ tasks may change from the current model.  Increasing automation of corporate contracting and compliance may help embed legal work into the basic structure of business.  In-house lawyers will move from talking to business people to being business people.  This suggests that legal education and business education may merge for at least a subset of legal experts.

The end of one-size-fits-all:  Licensing, accreditation and bar exams have locked in a single model of three years of law school with a fairly standardized curriculum.  The developments discussed in my previous posts make this model increasingly untenable.  The new legal expert must be trained for business, law making, technology design and many other tasks that cannot be encompassed by a single course of study.  Moreover, this world will rapidly evolve in uncertain ways once freed of licensing’s constraints.  Legal educators will have to be free to experiment with a variety of different approaches, much as business schools do today. The accreditation standards that survive as part of the new regulation of lawyers will have to provide this freedom.  This argues for the “driver’s license” approach to licensing suggested in a previous post, in which lawyers can use their home state license to practice anywhere. Such an approach could allow for different forms of mandatory training for different types of specialties.  These requirements could evolve as states balance the need for some regulation against the clamor by local consumers for access to cheaper services.

Lessons for today’s law schools:  What should law school faculty and administrators do now?  The top six or so can probably keep plugging away at what they have done:  teaching high end theory to top law students.  These students likely will be the legal architects of the future.  When the new era comes, the top six schools will have the resources and reputation to turn on a dime and embrace the future.

But for the Harvard wannabes that think they can ignore the changes shaking the profession and party like it’s 1899:  you are ill-serving your students and will be fighting for your lives in a few years.  The time to think about the future is right now.

Lawyers in Jeopardy

Larry Ribstein —  17 February 2011

The WSJ reports:

In a nationally televised competition, the Watson computer system built by International Business Machines Corp. handily defeated two former “Jeopardy” champions. * * *

To emulate the human mind, and make it competitive on the TV quiz show, Watson was stuffed with millions of documents—including dictionaries, anthologies and the World Book Encyclopedia.  After reading a clue, Watson mines the database, poring over 200 million pages of content in less than three seconds. Researchers developed algorithms to measure Watson’s level of confidence in an answer in order to decide whether it should hit the “Jeopardy!” buzzer.

The article notes that one commercial plan for Watson targets the health care industry.  Well, what about law?

In my recently posted working paper, Law’s Information Revolution, Bruce Kobayashi and I discuss how developments like this could fundamentally change “law practice” into an information-based industry where law-based information is sold through product and capital markets. For example, we note (footnotes omitted):

There is room for more radical developments in using computers to create legal knowledge.  This could involve reengineering the underlying idea of what legal research entails.  Instead of the conventional method of relying on courts’ holdings categorized in treatises or “tagged” via West Key Numbers, lawyers might analyze facts in extensive databases of cases or court records available through PACER (Public Access to Court Electronic Records) to predict case results.  These predictions might be refined using theories based on economic analysis, psychology, sociology, decision theory and political science to determine relevant variables. Lawyers might collaborate with computer scientists to develop new computer prediction algorithms. This would be analogous to the techniques already used to predict consumers’ tastes in films and music.   Computers already can provide the correct Jeopardy question “Who is Eddie Albert Camus” for the answer “A ‘Green Acres’ star goes existential (& French) as the author of ‘The Fall.’” They ought to be able to answer a question like “can a lawyer copyright a complaint?”
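
To give a flavor of what the quoted passage contemplates, here is a deliberately toy sketch of such a prediction exercise (it is mine, not from the paper, and the case features, values, and outcomes are hypothetical placeholders, not real data): a few hand-coded variables, chosen according to some theory of what drives outcomes, fed to an off-the-shelf classifier.

```python
# Toy illustration of predicting case outcomes from hand-coded features.
# All features, values, and outcomes below are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression

# Each row: [plaintiff is a repeat litigant?, log of damages sought, forum's historical plaintiff win rate]
X_train = [
    [1, 6.2, 0.55],
    [0, 4.8, 0.40],
    [1, 5.9, 0.62],
    [0, 7.1, 0.35],
    [1, 5.0, 0.58],
    [0, 6.5, 0.41],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = plaintiff prevailed, 0 = defendant prevailed

model = LogisticRegression().fit(X_train, y_train)

# Estimated probability that the plaintiff prevails in a new (hypothetical) case.
print(model.predict_proba([[1, 6.0, 0.50]])[0][1])
```

The real work, of course, lies in choosing and measuring the variables, which is where the economics, psychology, sociology, decision theory, and political science mentioned in the quoted passage come in.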

I will be discussing the implications of these developments for law teaching at an Iowa symposium next week, and posting my paper on that shortly.

Of course computers won’t replace humans anytime soon.  Watson’s creator conceded that “A computer doesn’t know what it means to be human.”

Yes, but do lawyers know that?

Kevin McCabe is a Professor of Law at George Mason University and holds appointments at George Mason’s Interdisciplinary Center for Economic Science, the Mercatus Center, and the Krasnow Institute.

Having started my career as an experimental economist, I probably have a somewhat different, but I hope complementary, perspective on behavioral economics and other experimental programs in general.

I view the difference between experimental and behavioral economics in terms of (1) what is studied, and (2) how it is studied.  Experimental economists are interested in institutional and organizational rules and how these rules affect both the joint behavior of participants and the outcome-generating, or process, performance of the institutional rules in question.  To study this, the experimental economist induces preferences and implements a microeconomic system.  One major problem for this approach is that ‘risk preferences’ are very noisy when induced, due either to the added complexity imposed on subjects of having to work with induced preferences or to conflicts between the induced preferences and a subject’s actual preferences.  A second major problem with this approach is that institutional rules that are isolated in the lab often depend on additional rules that are not being studied, or on social and cultural norms that are not present in the lab.  Experimental economists have learned to manage these problems, and many interesting research papers have been produced.

Behavioral economists are interested in individual behavior, whether it be individual choices, strategic decision making, or competitive strategies in markets.  The behavioral economist does not in general induce preferences, but does often use salient rewards in a well-defined problem drawn from decision theory, game theory, or price theory.  Because preferences are not induced, the behavioral economist is interested in the nature of preferences and the nature of decision making.  One major problem for this approach is that preferences and decisions interact, and it is often not clear whether one is studying the former, the latter, or a combination of both.  A second major problem with this approach is that behavior observed in the lab may not capture the full computations that people are capable of making when augmented by technology and institutions.  But again, behavioral economists have learned to manage these problems, and many interesting research papers have been produced.

When I refer to experimental economists or behavioral economists, I am referring to researchers employing a specific methodology to explore a specific class of problems.  In my experience, there are many researchers who employ more than one methodology, and this has proven to be very useful.  But now I can be more specific, thus narrower, and I’m sure subject to more debate.  Let’s hypothesize that experiments are all about exploring the computations that humans make.  Under this hypothesis, both experimental economics and behavioral economics are methods for exploring computational mechanisms.  In the former case institutions are mechanisms that make computations, and in the latter case individuals are mechanisms that make computations, but in the end we will want a computational theory of economics that includes both.  I think this is where we are heading, and when I look at some of the most promising experimental programs, including economic systems design, which seeks to engineer better institutions, and neuroeconomics, which seeks to understand the computations occurring in embodied brains, it seems that the computational hypothesis is the one that will best integrate the different experimental methodologies and best serve to move experimentation forward.

This raises the question: should we use experiments to study the law?  By my hypothesis, anything computational can be studied experimentally, and in legal institutions and in legal decision making many interesting computations are made.  This suggests that we could use experiments to study the law.  The downside, of course, is that our experiments could mislead us, but any source of data could mislead us.  In their favor, experiments invite a form of structured debate that is almost impossible to have without them.  In particular, if I don’t like your experiment, then I’m free to run my own counter-experiment, and as long as both our experiments replicate, a good theory should be able to explain both results and lead us to a better understanding of the mechanism in question.  If we agree on the theory but are still hesitant to apply our knowledge to the field, we are now in a better position to design, and run, a field experiment that can help us decide.

My latest working paper, which bears the same title as this post, is now available on SSRN. In the paper, I address the challenge created by the Supreme Court’s 2007 Leegin decision, which abrogated the 96-year-old rule declaring resale price maintenance (RPM) to be per se illegal. The Leegin Court held that instances of RPM must instead be evaluated under antitrust’s more lenient rule of reason. It also directed lower courts to craft a structured liability analysis for separating pro- from anticompetitive instances of the practice.

Since Leegin was decided, courts, commentators, and regulators have proposed at least four types of approaches for evaluating instances of RPM. Some of the approaches, like that advocated by the American Antitrust Institute, focus on whether an instance of RPM has raised consumer prices. Others, like that set forth in the pending Toys-R-Us case, focus on the identity of the party initiating the RPM (manufacturer or retailer(s)?). Some, like that proposed by Professor Marina Lao, focus on whether the product subject to RPM is sold with retailer services that are susceptible to free-riding. One approach, that endorsed by the FTC, mechanically applies factors the Leegin Court mentioned as relevant, but with little consideration of the potential for proof failures.

My paper critiques these four approaches from the perspective of decision theory (or what Josh and Geoff might call error cost analysis). Recognizing that antitrust liability rules always involve a risk of imposing social costs — either losses from under-deterrence if the rule wrongly acquits anticompetitive acts or losses from over-deterrence if it wrongly convicts procompetitive practices — decision theory says liability rules should be tailored to minimize the expected total cost of a liability decision. Specifically, the optimal rule will minimize the sum of decision costs (the costs of reaching a decision) and expected error costs (the costs of getting the decision wrong).

To evaluate how the proposed RPM rules fare from a decision-theoretic perspective, I begin by considering the theoretical harms and benefits associated with RPM and the empirical evidence on the incidence of those various effects. This analysis leads me to conclude that most instances of RPM are pro- rather than anticompetitive. I then consider whether wrongful convictions or wrongful acquittals are likely to cause greater social losses, and I conclude that wrongful convictions threaten greater harm. Taken together, these two conclusions call for a liability rule that tends to acquit more instances of RPM than it convicts. The proposed liability approaches, by contrast, are tilted toward conviction. Moreover, several of the proposed approaches would condemn instances of RPM even when the preconditions for anticompetitive harm are not satisfied.

Finding each of the proposed liability analyses to be deficient, I set forth an alternative approach that (1) reflects the economic learning on RPM (with respect to both the theories of competitive effects and the empirical evidence of those various effects), (2) is aimed at minimizing the costs of incorrect judgments, and (3) would be fairly easy for courts and business planners to administer. The proposed approach, in short, aims to minimize the sum of decision and error costs in regulating RPM.

Please download the piece. Comments are most welcome.

Thom Lambert is an Associate Professor of Law at the University of Missouri Law School and a blogger at Truth on the Market.

A bundled discount occurs when a seller offers to sell a collection of different goods for a lower price than the aggregate price for which it would sell the constituent products individually. Such discounts pose different competitive risks than single-product discounts because, as I explained in this post, they may have an exclusionary effect even if they result in a price that exceeds the cost of producing the bundle. In particular, even an “above-cost” bundled discount may have the effect of excluding rivals that (1) are more efficient at producing the products that compete with the discounter’s but (2) produce a less extensive product line than the discounter. In other words, bundled discounts may drive equally efficient but less diversified rivals from the market.
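
A numerical illustration may help; the figures are hypothetical, and the “discount attribution” screen at the end is one of the price-cost approaches discussed in the literature rather than a statement of governing law. Suppose the discounter sells products A and B for $10 each standalone at a unit cost of $6 each, and offers the two-product bundle for $14:

```python
# Hypothetical numbers showing how an above-cost bundled discount can exclude
# an equally efficient rival that sells only product B.

standalone_price = {"A": 10.0, "B": 10.0}
unit_cost = {"A": 6.0, "B": 6.0}       # the rival's cost of B is also 6.0 (equally efficient)
bundle_price = 14.0

aggregate_cost = unit_cost["A"] + unit_cost["B"]
print(bundle_price > aggregate_cost)          # True: the bundle as a whole is priced above cost

# A customer who buys A standalone must get B cheaply enough to match the bundle:
max_rival_price_for_B = bundle_price - standalone_price["A"]
print(max_rival_price_for_B)                  # 4.0
print(max_rival_price_for_B < unit_cost["B"]) # True: the equally efficient rival must price below its cost

# Discount-attribution framing: attribute the entire bundle discount to product B.
discount = standalone_price["A"] + standalone_price["B"] - bundle_price  # 6.0
attributed_price_B = standalone_price["B"] - discount                    # 4.0
print(attributed_price_B < unit_cost["B"])    # True: the bundle flunks the attribution screen
```

The bundle price exceeds the discounter’s aggregate cost, yet a rival that makes only B, at the very same unit cost, cannot keep the customer whole without pricing below its own cost.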

Given that they are a “mixed bag” practice (some immediate benefits, some potential anticompetitive harms) and pose risks beyond those presented by straightforward predatory pricing, courts and commentators have struggled to articulate a legal standard that would prevent unreasonably exclusionary bundled discounts without chilling procompetitive bundling. With the notable exception of the en banc Third Circuit’s LePage’s decision, which is essentially standardless, most of the approaches courts and commentators have articulated for evaluating bundled discounts have involved some sort of test that compares prices and costs. Chapter 6 of the Department of Justice’s Section 2 Report explains the various “price-cost” tests in detail.

Based on the presentations in the Section 2 hearings, the Department reached essentially four conclusions concerning bundled discounts:

Continue Reading…

Bruce Kobayashi is a Professor of Law at George Mason Law School.

One of the most important changes in the antitrust laws over the past 40 years has been the diminished reliance on rules of per se illegality in favor of a rule of reason analysis. With the Court’s recent rulings in Leegin (eliminating the per se rule for minimum RPM) and Independent Ink (eliminating the per se rule against intellectual property tying), the evolution of the antitrust laws has left only tying (under a “modified” per se rule) and horizontal price fixing under per se rules of illegality. This movement reflects advances in law and economics that recognize that vertical restraints, once condemned as per se illegal when used by firms with antitrust market power, can be procompetitive. It also reflects the judgment that declaring such practices per se illegal produced high type I error costs (the false condemnation and deterrence of procompetitive practices).

The widespread use of the rule of reason can be problematic, however, because of the inability of antitrust agencies and courts to reliably differentiate between pro- and anticompetitive conduct. Conduct analyzed under Section 2 often has the potential both to generate efficiencies and to be anticompetitive, and finding a way to reliably differentiate between the two has been described as “one of the most vexing questions in antitrust law” (Section 2 Report, p. 12). Under these conditions, applying a rule of reason analysis on a case-by-case basis may not substantially reduce error costs and can drastically increase the costs of enforcement. Thus, under the decision theory framework widely used by economists and courts, which teaches that optimal legal standards should minimize the sum of error costs and enforcement costs, “bright line” per se rules of legality and illegality can dominate more nuanced but error-prone standards under the rule of reason. Continue Reading…

Keith Hylton is a Professor of Law at Boston University School of Law.  [Eds – This post originally appeared on Day 1 of the Symposium, but we are re-publishing it today because it bears directly on today’s debate over general standards]

The “error cost” or “decision theory” approach to Section 2 legal standards emphasizes the probabilities and costs of errors in monopolization decisions.  Two types of error, and two associated types of cost are examined.  One type of error is that of a false acquittal, or false negative.  The other type of error is that of a false conviction, or false positive.  Under the error cost approach to legal standards, a legal standard should be chosen that minimizes the total expected costs of errors.

Suppose, for example, the legal decision maker has a choice between two legal standards, A and B.  Suppose under standard A, the probability of a false acquittal is 1/4 and the probability of a false conviction is 1/5.  Under standard B, the probability of false acquittal is 1/5 and the probability of a false conviction is 1/4.  Suppose the cost of a false acquittal is $1 and the cost of a false conviction is $2.  The expected error cost of standard A is therefore (.25)($1) + (.2)($2) = $.65.  The expected error cost of standard B is (.2)($1) + (.25)($2) = $.70.  Since the expected error cost of standard B is greater than that of standard A, standard A should be preferred.
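
The comparison can be checked mechanically; a short script reproducing the figures above:

```python
# Reproduces the expected error cost comparison in the example above.
def expected_error_cost(p_false_acquittal, p_false_conviction,
                        cost_false_acquittal=1.0, cost_false_conviction=2.0):
    return (p_false_acquittal * cost_false_acquittal
            + p_false_conviction * cost_false_conviction)

standard_a = expected_error_cost(1/4, 1/5)
standard_b = expected_error_cost(1/5, 1/4)
print(f"{standard_a:.2f}")      # 0.65
print(f"{standard_b:.2f}")      # 0.70
print(standard_a < standard_b)  # True: standard A should be preferred
```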

In monopolization law there are several legal standards that have been applied by courts and proposed by commentators, such as the balancing test, the specific intent test, the profit sacrifice test, the disproportionality test, the equally efficient competitor test, no-economic-sense test, and others.  Almost all of the tests can be grouped under the alternative categories of balancing or non-balancing tests.  Under the error cost approach, the ideal legal standard for any given area of monopolization law is the one that generates the smallest expected error cost.  Moreover, each of these tests has been proposed as a default rule to be applied across the board, but which can be abandoned in a specific case that merits an alternative standard.

The Department of Justice’s recent Section 2 Report reviews the various monopolization standards and embraces the disproportionality test as the best default rule.  The disproportionality test holds the defendant liable under Section 2 only when the anticompetitive effects of his conduct are disproportionate in light of the procompetitive benefits.  This is an approach that makes sense if one adopts the view, as did the authors of the DOJ report, that the costs of false convictions under monopolization law are larger than the costs of false acquittals.  The disproportionality test is quite close in application to the specific intent test, the no-economic-sense test, and one version of the profit sacrifice test.

Although it is ultimately an empirical question, there are several reasons to adopt the presumption that false conviction costs are greater than false acquittal costs in the monopolization context.  Two of the most persuasive reasons are based on the incentives for entry and for rent-seeking.  The costs of false acquittals in the monopolization setting can be kept in check through the threat of competitive entry.  The costs of false convictions, on the other hand, generate rent seeking incentives to file suit under Section 2 on the part of firms that compete against dominant firms.  Another important reason for the presumption is the asymmetric impact of errors.  False acquittals permit one firm, the falsely-acquitted defendant, to continue practices that harm consumers.  False convictions overdeter dominant firms in general, and can lead to a form of soft competition which is especially harmful to consumers.

One of the key purposes of error cost analysis is to serve as a bridge between economic theory and legal standards in antitrust.  Economic models often assume courts can implement legal tests with perfect accuracy.  But this is not always true.  The accuracy of a balancing test that requires courts to distinguish vigorous competition from predation will depend on the quality of judges, juries, lawyers, and the procedural mechanisms in place for conducting a trial.  Even a small risk of error leading to a possible multibillion-dollar trebled judgment can lead a firm that has not engaged in anticompetitive conduct to alter its conduct to avoid the risk of an antitrust lawsuit.  For economic theory to lead to useful recommendations for antitrust courts, analysts must consider the likelihood of error and the costs of error under proposed monopolization tests.  Error cost analysis provides a framework for courts to screen and assign “value weights” to the recommendations from economic analysis.

Thom Lambert is an Associate Professor of Law at the University of Missouri Law School and a blogger at Truth on the Market.

There’s a fundamental problem with Section 2 of the Sherman Act: nobody really knows what it means. More specifically, we don’t have a very precise definition for “exclusionary conduct,” the second element of a Section 2 claim. The classic definition from the Supreme Court’s Grinnell decision — “the willful acquisition or maintenance of [monopoly] power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident” — provides little guidance. The same goes for vacuous statements that exclusionary conduct is something besides “competition on the merits.” Accordingly, a generalized test for exclusionary conduct has become a sort of Holy Grail for antitrust scholars and regulators.

In its controversial Section 2 Report, the Department of Justice considered four proposed general tests for unreasonably exclusionary conduct: the so-called “effects-balancing,” “profit-sacrifice/no-economic-sense,” “equally efficient competitor,” and “disproportionality” tests. While the Department concluded that conduct-specific tests and safe harbors (e.g., the Brooke Group test for predatory pricing) provide the best means of determining when conduct is unreasonably exclusionary, it did endorse the disproportionality test for novel business practices for which “a conduct-specific test is not applicable.” Under the disproportionality test, “conduct that potentially has both procompetitive and anticompetitive effects is anticompetitive under section 2 if its likely anticompetitive harms substantially outweigh its likely procompetitive benefits.”

According to the Department, the disproportionality test satisfies several criteria that should guide selection of a generalized test for exclusionary conduct. It is focused on protecting competition, not competitors. Because it precludes liability based on close balances of pro- and anticompetitive effects, it is easy for courts and regulators to administer and provides clear guidance to business planners. And it properly accounts for decision theory, recognizing that the costs of false positives in this area likely exceed the costs of false negatives.
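
A stylized contrast may help fix ideas; the multiplier k below is my own hypothetical stand-in for “substantially outweigh,” which the Report does not quantify:

```python
# Stylized contrast between simple effects-balancing and the disproportionality test.
# The multiplier k is a hypothetical stand-in for "substantially outweigh."

def balancing_test(harm, benefit):
    return harm > benefit

def disproportionality_test(harm, benefit, k=2.0):
    return harm > k * benefit

# A close case: estimated anticompetitive harm of 110 against procompetitive benefit of 100.
print(balancing_test(110, 100))           # True: liability under pure balancing
print(disproportionality_test(110, 100))  # False: harm does not substantially outweigh the benefit
```

The point of the higher threshold is precisely to avoid liability in close cases, reflecting the judgment that false positives are the costlier error.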

While it has some laudable properties (most notably, its concern about overdeterrence), the disproportionality test is unsatisfying as a general test for exclusionary conduct because it is somewhat circular. In order to engage in the required balancing of pro- and anticompetitive effects, one needs to know which effects are, in fact, anticompetitive. As the Department correctly noted, the mere fact that a practice disadvantages or even excludes a competitor does not make that practice anticompetitive. For example, lowering one’s prices from supracompetitive levels or enhancing the quality of one’s product will usurp business from one’s rivals. Yet we’d never say such competitor-disadvantaging practices are anticompetitive, and the loss of business to rivals should not be deemed an anticompetitive effect of the practices.

“Anticompetitive” harm presumably means harm to competition. We know that that involves something other than harm to individual competitors. But what exactly does it mean? If Acme Inc. offers a bundled discount that results in a bundle price that is above the aggregate cost of the products in the bundle but cannot be met by a less diversified rival, is that a harm to competition or just a harm to the less diversified competitor? If Acme pays a loyalty rebate that results in an above-cost price for its own product but usurps so much business from rivals that they fall below minimum efficient scale and thus face higher per-unit costs, is that harm to competition or to a competitor? These are precisely the sorts of hard (and somewhat novel) cases in which we need a generalized test for exclusionary conduct. Unfortunately, they are also the sorts of cases in which the Department’s proposed disproportionality test is unhelpful.
Continue Reading…
