Archives For Competition law

A key issue raised by the United Kingdom’s (UK) withdrawal from the European Union (EU) – popularly referred to as Brexit – is its implications for competition and economic welfare.  The competition issue is rather complex.  Several potentially significant UK competition policy reforms that flow from Brexit are briefly summarized below.  (These are merely examples – further evaluation may point to additional significant competition policy changes that Brexit is likely to inspire.)

First, UK competition policy will no longer be subject to European Commission (EC) competition law strictures, but will be guided instead solely by UK institutions, led by the UK Competition and Markets Authority (CMA).  The CMA is a free market-oriented, well-run agency that incorporates careful economic analysis into its enforcement investigations and industry studies.  It is widely deemed to be one of the world’s best competition and consumer protection enforcers, and has first-rate leadership.  (Former U.S. Federal Trade Commission Chairman William Kovacic, a very sound antitrust scholar, professor, and head of George Washington University Law School’s Competition Law Center, serves as one of the CMA’s “Non-Executive Directors,” who set the CMA’s policies.)  Post-Brexit, the CMA will no longer have to conform its policies to the approaches adopted by the EC’s Directorate General for Competition (DG Comp) and determinations by European courts.  Despite its recent increased reliance on an “economic effects-based” analytical approach, DG Comp still suffers from excessive formalism and an over-reliance on pure theories of harm, rather than hard empiricism.  Moreover, EU courts still tend to be overly formalistic and deferential to EC administrative determinations.  In short, CMA decision-making in the competition and consumer protection spheres, free from constraining EU influences, should (at least marginally) prove to be more welfare-enhancing within the UK post-Brexit.  (For a more detailed discussion of Brexit’s implications for EU and UK competition law, see here.)  There is a countervailing risk that Brexit might marginally worsen EU competition policy by eliminating UK pro-free market influence on EU policies, but the likelihood and scope of such a marginal effect is not readily measurable.

Second, Brexit will allow the UK to escape participation in the protectionist, wasteful, output-limiting European agricultural cartel known as the “Common Agricultural Policy,” or CAP, which involves inefficient subsidies whose costs are borne by consumers.  This would be a clearly procompetitive and welfare-enhancing result, to the extent that it undermined the CAP.  In the near term, however, Brexit’s net effects on CAP financing and on the welfare of UK farmers appear to be relatively small.

Third, the UK may be able to avoid the restrictive EU Common Fisheries Policy and exercise greater control over its coastal fisheries.  In so doing, the UK could choose to authorize the creation of a market-based tradable fisheries permit system that would enhance consumer and producer welfare and increase competition.
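The welfare logic of a tradable permit system can be sketched with a toy example: holding the total catch quota fixed, trading moves harvest rights to the lowest-cost operators, so the same catch costs less to land. All numbers below are invented purely for illustration (assume each boat can land at most 60 tonnes).

```python
# Hypothetical sketch of why tradable catch permits enhance welfare:
# with a fixed total quota, trade reallocates harvest rights to the
# lowest-cost operators. All figures are invented for illustration.
quota_total = 100  # tonnes: the overall catch cap, unchanged by trading

# (operator, marginal cost per tonne, permit allocation in tonnes)
initial = [("A", 50, 40), ("B", 80, 30), ("C", 120, 30)]

def harvest_cost(allocations):
    """Fleet-wide cost of landing the allocated tonnage."""
    return sum(cost * tonnes for _, cost, tonnes in allocations)

# After trading, high-cost operator C sells its permits to A and B
# (each boat capped at 60 tonnes); the conservation cap still binds,
# but the same total catch is landed at lower cost.
after_trade = [("A", 50, 60), ("B", 80, 40), ("C", 120, 0)]

# The cap is respected before and after trading.
assert sum(t for _, _, t in initial) == quota_total
assert sum(t for _, _, t in after_trade) == quota_total

print(f"Fleet-wide cost before trade: {harvest_cost(initial)}")
print(f"Fleet-wide cost after trade:  {harvest_cost(after_trade)}")
```

The cost saving (here, 8,000 falling to 6,200 in cost units) is the producer-welfare gain; because the quota is unchanged, the gain comes entirely from reallocating effort, not from additional harvesting.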

Fourth, Brexit will free the UK economy from one-size-fits-all supervisory regulatory frameworks in such areas as the environment, broadband policy (“digital Europe”), labor, food and consumer products, among others.  This regulatory freedom, properly handled, could prove a major force for economic flexibility, reductions in regulatory burdens, and enhanced efficiency.

Fifth, Brexit will enable the UK to enter into true free trade pacts with the United States and other nations that avoid the counterproductive bells and whistles of EU industrial policy.  For example, a “zero tariffs” agreement with the United States that featured reciprocal mutual recognition of health, safety, and other regulatory standards would avoid heavy-handed regulatory harmonization features of the Transatlantic Trade and Investment Partnership (TTIP) agreement being negotiated between the EU and the United States.  (As I explained in a previous Truth on the Market post, “a TTIP focus on ‘harmonizing’ regulations could actually lower economic freedom (and welfare) by ‘regulating upward’ through acceptance of [a] more intrusive approach, and by precluding future competition among alternative regulatory models that could lead to welfare-enhancing regulatory improvements.”)

In sum, while Brexit’s implications for other economic factors, such as macroeconomic stability, remain to be seen, Brexit will likely prove to have an economic welfare-enhancing influence on key aspects of competition policy.

P.S.  Notably, a recent excellent study by Iain Murray and Rory Broomfield of Brexit’s implications for various UK industry sectors (commissioned by the London-based Institute of Economic Affairs) concluded “that in almost every area we have examined the benefit: cost trade-off [of Brexit] is positive. . . .  Overall, the UK will benefit substantially from a reduction in regulation, a better fisheries management system, a market-based immigration system, a free market in agriculture, a globally-focused free trade policy, control over extradition, and a shale gas-based energy policy.”

While we all wait on pins and needles for the DC Circuit to issue its long-expected ruling on the FCC’s Open Internet Order, another federal appeals court has pushed back on Tom Wheeler’s FCC for its unremitting “just trust us” approach to federal rulemaking.

The case, round three of Prometheus, et al. v. FCC, involves the FCC’s long-standing rules restricting common ownership of local broadcast stations and their extension by Tom Wheeler’s FCC to the use of joint sales agreements (JSAs). (For more background see our previous post here). Once again the FCC lost (it’s now only 1 for 3 in this case…), as the Third Circuit Court of Appeals took the Commission to task for failing to establish that its broadcast ownership rules were still in the public interest, as required by law, before it decided to extend those rules.

While much of the opinion deals with the FCC’s unreasonable delay (of more than 7 years) in completing two Quadrennial Reviews in relation to its diversity rules, the court also vacated the FCC’s rule expanding its duopoly rule (or local television ownership rule) to ban joint sales agreements without first undertaking the reviews.

We (the International Center for Law and Economics, along with affiliated scholars of law, economics, and communications) filed an amicus brief arguing for precisely this result, noting that

the 2014 Order [] dramatically expands its scope by amending the FCC’s local ownership attribution rules to make the rule applicable to JSAs, which had never before been subject to it. The Commission thereby suddenly declares unlawful JSAs in scores of local markets, many of which have been operating for a decade or longer without any harm to competition. Even more remarkably, it does so despite the fact that both the DOJ and the FCC itself had previously reviewed many of these JSAs and concluded that they were not likely to lessen competition. In doing so, the FCC also fails to examine the empirical evidence accumulated over the nearly two decades some of these JSAs have been operating. That evidence shows that many of these JSAs have substantially reduced the costs of operating TV stations and improved the quality of their programming without causing any harm to competition, thereby serving the public interest.

The Third Circuit agreed that the FCC utterly failed to justify its continued foray into banning potentially pro-competitive arrangements, finding that

the Commission violated § 202(h) by expanding the reach of the ownership rules without first justifying their preexisting scope through a Quadrennial Review. In Prometheus I we made clear that § 202(h) requires that “no matter what the Commission decides to do to any particular rule—retain, repeal, or modify (whether to make more or less stringent)—it must do so in the public interest and support its decision with a reasoned analysis.” Prometheus I, 373 F.3d at 395. Attribution of television JSAs modifies the Commission’s ownership rules by making them more stringent. And, unless the Commission determines that the preexisting ownership rules are sound, it cannot logically demonstrate that an expansion is in the public interest. Put differently, we cannot decide whether the Commission’s rationale—the need to avoid circumvention of ownership rules—makes sense without knowing whether those rules are in the public interest. If they are not, then the public interest might not be served by closing loopholes to rules that should no longer exist.

Perhaps this decision will be a harbinger of good things to come. The FCC — and especially Tom Wheeler’s FCC — has a history of failing to justify its rules with anything approaching rigorous analysis. The Open Internet Order is a case in point. We will all be better off if courts begin to hold the Commission’s feet to the fire and throw out their rules when the FCC fails to do the work needed to justify them.

On January 26 the Heritage Foundation hosted a one-day conference on “Antitrust Policy for a New Administration.”  Featured speakers included three former heads of the U.S. Department of Justice’s Antitrust Division (DOJ) (D.C. Circuit Senior Judge Douglas Ginsburg, James Rill, and Thomas Barnett) and a former Chairman of the U.S. Federal Trade Commission (FTC) (keynote speaker Professor William Kovacic), among other leading experts on foreign and domestic antitrust.  The conference addressed developments at DOJ, the FTC, and overseas.  The entire program (which will be posted for viewing very shortly at Heritage.org) has generated substantial trade press coverage (see, for example, two articles published by Global Competition Review).  Four themes highlighted during the presentations are particularly worth noting.

First, the importance of the federal judiciary – and judicial selection – in the development and direction of U.S. antitrust policy.  In his opening address, Professor Bill Kovacic described the central role the federal judiciary plays in shaping American antitrust principles.  He explained how a few key judges with academic backgrounds (for example, Frank Easterbrook, Richard Posner, Stephen Breyer, and Antonin Scalia) had a profound effect in reorienting American antitrust rules toward the teachings of law and economics, and added that the Reagan Administration focused explicitly on appointing free market-oriented law professors for key appellate judgeships.  Since the new President will appoint a large proportion of the federal judiciary, the outcome of the 2016 election could profoundly influence the future direction of antitrust, according to Professor Kovacic.  (Professor Kovacic also made anecdotal comments about various candidates, noting the short but successful FTC experience of Ted Cruz; Donald Trump having once been an antitrust plaintiff (when the United States Football League sued the National Football League); Hillary Clinton’s misstatement that antitrust has not been applied to anticompetitive payoffs made by big drug companies to generic producers; and Bernie Sanders’ pronouncements suggesting a possible interest in requiring the breakup of large companies.)

Second, the loss of American global economic leadership on antitrust enforcement policy.  There was a consensus that jurisdictions around the world increasingly have opted for the somewhat more interventionist European civil law approach to antitrust, in preference to the American enforcement model.  There are various explanations for this, including the fact that civil law predominates in many (though not all) nations that have adopted antitrust regimes, and the natural attraction many governments have for administrative models of economic regulation that grant the state broad enforcement discretion and authority.  Whatever the explanation, there also seemed to be some sentiment that U.S. government agencies have not been particularly aggressive in seeking to counter this trend by making the case for the U.S. approach (which relies more on flexible common law reasoning to accommodate new facts and new economic learning).  (See here for my views on a desirable approach to antitrust enforcement, rooted in error cost considerations.)

Third, the need to consider reforming current cartel enforcement programs.  Cartel enforcement programs, which are a mainstay of antitrust, received some critical evaluation by the members of the DOJ and international panels.  Judge Ginsburg noted that the pattern of imposing ever-higher fines on companies, which independently have strong incentives to avoid cartel conduct, may be counterproductive, since it is typically “rogue” employees who flout company policies and collaborate in cartels.  The focus thus should be on strong sanctions against such employees.  Others also opined that overly high corporate cartel fines may not be ideal.  Relatedly, some argued that the failure to give “good behavior” credit to companies that have corporate compliance programs may be suboptimal and welfare-reducing, since companies may find that it is not cost-beneficial to invest substantially in such programs if they receive no perceived benefit.  Also, it was pointed out that imposing very onerous and expensive internal compliance mandates would be inappropriate, since companies may avoid them if they perceive the costs of compliance programs to outweigh the expected value of antitrust penalties.  In addition, the programs by which governments grant firms leniency for informing on a cartel in which they participate – instituted by DOJ in the 1990s and widely emulated by foreign enforcement agencies – came in for some critical evaluation.  One international panelist argued that DOJ should not rely solely on leniency to ferret out cartel activity, stressing that other jurisdictions are beginning to apply econometric methods to aid cartel detection.  In sum, while there appeared to be general agreement about the value and overall success of cartel prosecutions, there also was support for consideration of new means to deter and detect cartels.

Fourth, the need to work to enhance due process in agency investigations and enforcement actions.  Concerns about due process surfaced on both the FTC and international panels.  A former FTC general counsel complained about staff’s lack of explanation of theories of violation in FTC consumer protection investigations, and limitations on access to senior level decision-makers, in cases not raising fraud.  It was argued that such investigations may promote the micromanagement of non-deceptive business behavior in areas such as data protection.  Although consumer protection is not antitrust, commentators raised the possibility that foreign agencies would cite FTC consumer protection due process deficiencies in justifying their antitrust due process inadequacies (since the FTC enforces both antitrust and consumer protection under one statutory scheme).  The international panel discussed the fact that due process problems are particularly bad in Asia but also exist to some extent in Europe.  Particular due process issues panelists found to be pervasive overseas included, for example, documentary request abuses, lack of adequate access to counsel, and inadequate information about the nature or purpose of investigations.  The international panelists agreed that the U.S. antitrust enforcement agencies, bar associations, and international organizations (such as the International Competition Network and the OECD) should continue to work to promote due process, but that there is no magic bullet and this will require a long-term commitment.  (There was no unanimity as to whether other U.S. governmental organs, such as the State Department and the U.S. Trade Representative’s Office, should be called upon for assistance.)

In conclusion, the 2016 Heritage Foundation antitrust conference shed valuable light on major antitrust policy issues that the next President will have to confront.  The approach the next President takes in dealing with these issues will have major implications for a very significant branch of economic regulation, both here and abroad.

Thanks to the Truth on the Market bloggers for having me. I’m a long-time fan of the blog, and excited to be contributing.

The Third Circuit will soon review the appeal of generic drug manufacturer Mylan Pharmaceuticals in the latest case involving “product hopping” in the pharmaceutical industry — Mylan Pharmaceuticals v. Warner Chilcott.

Product hopping occurs when brand pharmaceutical companies shift their marketing efforts from an older version of a drug to a new, substitute drug in order to stave off competition from cheaper generics. This business strategy is the predictable business response to the incentives created by the arduous FDA approval process, patent law, and state automatic substitution laws. It costs brand companies an average of $2.6 billion to bring a new drug to market, but only 20 percent of marketed brand drugs ever earn enough to recoup these costs. Moreover, once their patent exclusivity period is over, brand companies face the likely loss of 80-90 percent of their sales to generic versions of the drug under state substitution laws that allow or require pharmacists to automatically substitute a generic-equivalent drug when a patient presents a prescription for a brand drug. Because generics are automatically substituted for brand prescriptions, generic companies typically spend very little on advertising, instead choosing to free ride on the marketing efforts of brand companies. Rather than hand over a large chunk of their sales to generic competitors, brand companies often decide to shift their marketing efforts from an existing drug to a new drug with no generic substitutes.
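The incentive described above is easy to see in a back-of-envelope calculation using the figures cited in this post (80-90% of brand sales lost to generics under automatic substitution); the sales figure itself is hypothetical.

```python
# Back-of-envelope illustration of the product-hopping incentive.
# The 80-90% erosion range is from the post; the sales figure is hypothetical.
brand_annual_sales = 1_000_000_000  # hypothetical: $1B/year while exclusive
generic_erosion = 0.85              # midpoint of the 80-90% range cited above

retained_after_entry = brand_annual_sales * (1 - generic_erosion)
print(f"Sales retained after generic entry: ${retained_after_entry:,.0f}")

# A reformulated drug with no generic equivalent keeps the full revenue
# stream until generics can match the new formulation -- which is why
# automatic substitution laws make product redesign attractive to brands.
```

With roughly 85% of sales automatically diverted to generics at patent expiry, and only 20% of marketed drugs ever recouping their development costs, shifting marketing to a redesigned product without a generic substitute is the predictable profit-maximizing response.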

Generic company Mylan is appealing U.S. District Judge Paul S. Diamond’s April decision to grant defendant and brand company Warner Chilcott’s summary judgment motion. Mylan and other generic manufacturers contend that Defendants engaged in a strategy to impede generic competition for branded Doryx (an acne medication) by executing several product redesigns and ceasing promotion of prior formulations. Although the plaintiffs generally changed their products to keep up with the brand-drug redesigns, they contend that these redesigns were intended to circumvent automatic substitution laws, at least for the periods of time before the generic companies could introduce a substitute to new brand drug formulations. The plaintiffs argue that product redesigns that prevent generic manufacturers from benefitting from automatic substitution laws violate Section 2 of the Sherman Act.

Product redesign is not per se anticompetitive. Retiring an older branded version of a drug does not block generics from competing; they are still able to launch and market their own products. Product redesign only makes competition tougher because generics can no longer free ride on automatic substitution laws; instead they must either engage in their own marketing efforts or redesign their product to match the brand drug’s changes. Moreover, product redesign does not affect a primary source of generics’ customers—beneficiaries that are channeled to cheaper generic drugs by drug plans and pharmacy benefit managers.

The Supreme Court has repeatedly concluded that “the antitrust laws…were enacted for the protection of competition not competitors” and that even monopolists have no duty to help a competitor. The district court in Mylan generally agreed with this reasoning, concluding that the brand company Defendants did not exclude Mylan and other generics from competition: “Throughout this period, doctors remained free to prescribe generic Doryx; pharmacists remained free to substitute generics when medically appropriate; and patients remained free to ask their doctors and pharmacists for generic versions of the drug.” Instead, the court argued that Mylan was a “victim of its own business strategy”—a strategy that relied on free-riding off brand companies’ marketing efforts rather than spending any of its own money on marketing. The court reasoned that automatic substitution laws provide a regulatory “bonus” and denying Mylan the opportunity to take advantage of that bonus is not anticompetitive.

Product redesign should only give rise to anticompetitive claims if combined with some other wrongful conduct, or if the new product is clearly a “sham” innovation. Indeed, Senior Judge Douglas Ginsburg and then-FTC Commissioner Joshua D. Wright recently came out against imposing competition law sanctions on product redesigns that are not sham innovations. If lawmakers are concerned that product redesigns will reduce generic usage and the cost savings they create, they could follow the lead of several states that have broadened automatic substitution laws to allow the substitution of generics that are therapeutically equivalent but not identical in other ways, such as dosage form or drug strength.

Mylan is now asking the Third Circuit to reexamine the case. If the Third Circuit reverses the lower court’s decision, it would imply that brand drug companies have a duty to continue selling superseded drugs in order to allow generic competitors to take advantage of automatic substitution laws. If the Third Circuit upholds the district court’s ruling on summary judgment, it will likely create a circuit split between the Second and Third Circuits. In July 2015, the Second Circuit upheld an injunction in NY v. Actavis that required a brand company to continue manufacturing and selling an obsolete drug until after generic competitors had an opportunity to launch their generic versions and capture a significant portion of the market through automatic substitution laws. I’ve previously written about the duty created in this case.

Regardless of whether the Third Circuit’s decision causes a split, the Supreme Court should take up the issue of product redesign in pharmaceuticals to provide guidance to brand manufacturers that currently operate in a world of uncertainty and under the constant threat of litigation for decisions they make when introducing new products.

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest that there is widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but there is nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped up vigilance, particularly of health insurance mergers, is required to ensure that continued competition in health insurance markets isn’t undermined, and that the realization of the ACA’s objectives in the provider market aren’t undermined by insurance companies engaging in anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. We need instead to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in the health care markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss important but ambiguous effects of the law on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are substantial negative effects, of course. Requiring insurance companies to accept patients with pre-existing conditions reduces the ability of insurance companies to manage risk. This has led to upward pricing pressure for premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far the projected signups haven’t been realized.

The ACA’s redefinition of what is an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the Medical Loss Ratio rule requiring insurance companies to spend at least 80% of premiums on healthcare, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new marketplace entry or competition in other submarkets.
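The margin squeeze from the 80% rule can be sketched with a simplified rebate calculation. The figures below are hypothetical, and the real rule uses multi-year averages and various adjustments, so this is only an illustration of the mechanism.

```python
# Simplified sketch of the ACA's 80% Medical Loss Ratio rule (individual
# and small-group markets): an insurer spending less than 80% of premium
# revenue on claims and quality improvement must rebate the shortfall.
# Figures are hypothetical; the actual rule uses multi-year averages.
premiums = 100_000_000   # hypothetical annual premium revenue
claims = 76_000_000      # spending on healthcare and quality improvement
min_mlr = 0.80           # the statutory floor

mlr = claims / premiums
rebate = max(0.0, (min_mlr - mlr) * premiums)
print(f"MLR = {mlr:.0%}; rebate owed to policyholders = ${rebate:,.0f}")
```

Because everything above the 80% floor must go to claims or rebates, administrative costs, marketing, and profit must all fit inside the remaining 20% of premiums, which is the squeeze on margins described above.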

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because there is less need for agents to sell in a new market with the online exchanges). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both healthcare providers and health insurance companies.

Further, some of the ACA’s effects have decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that the government gives higher payments for Medicare services delivered by a hospital versus an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent from hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what they can’t say is that increased consolidation per se is clearly problematic, nor that, even if it is correlated with sub-optimal outcomes, it is consolidation causing those outcomes, rather than something else (like the ACA) that is causing both the sub-optimal outcomes as well as consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and offering the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad” may be undermining both of these objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful that they do not likewise condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law and economics approach that they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run, as well as consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better-quality care and cost savings passed on to consumers. Increased entry, at least in part due to the ACA in most of the markets in which the merging companies will compete, along with integrated health networks themselves entering and threatening entry into insurance markets, will almost certainly lead to more consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.

Conclusion

In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are The Good, The Bad, and The Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and Council this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers who ultimately will have to find either a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only against economic efficiency, but also against the right to free expression that most European countries recognize (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also impair small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. Aside from this, multi-homing is ubiquitous in the Internet economy, anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually ratify a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright when explaining why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere will inevitably affect these important US companies disproportionately. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad and the Ugly, Blondie and Tuco have an exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best 😉.

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

How to regulate privacy, and what role competition authorities should play in doing so, are questions only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to give greater consideration to privacy concerns during merger review, and have even been encouraged to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on metrics would want to serve only those who can pay more by charging them a lower price, while charging those who cannot afford it a higher one. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.
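The “question of magnitudes” can be made concrete with a stylized numerical sketch. All figures below are invented purely for illustration (they are not estimates of any actual market): personalized pricing raises the price for one group, yet expands output and total consumer surplus by serving buyers whom a uniform price would exclude entirely.

```python
# Illustrative only: all numbers below are invented for exposition,
# not empirical estimates of any actual market.

# Two consumer groups with different willingness to pay (WTP) per unit.
high_wtp = {"size": 100, "wtp": 10.0}   # can afford to pay more
low_wtp  = {"size": 250, "wtp": 4.0}    # previously underserved

def consumer_surplus(group, price):
    """Return (surplus, buyers); nobody buys at a price above their WTP."""
    if price > group["wtp"]:
        return 0.0, 0                    # priced out of the market entirely
    return (group["wtp"] - price) * group["size"], group["size"]

# Uniform price of 6: only the high-WTP group buys at all.
cs_high_u, q_high_u = consumer_surplus(high_wtp, 6.0)
cs_low_u,  q_low_u  = consumer_surplus(low_wtp, 6.0)

# Personalized prices: 8 for the high-WTP group, 3 for the low-WTP group.
cs_high_p, q_high_p = consumer_surplus(high_wtp, 8.0)
cs_low_p,  q_low_p  = consumer_surplus(low_wtp, 3.0)

print("uniform:      surplus =", cs_high_u + cs_low_u, "output =", q_high_u + q_low_u)
print("personalized: surplus =", cs_high_p + cs_low_p, "output =", q_high_p + q_low_p)
```

With these (hypothetical) numbers, the high-WTP group loses some surplus under personalized pricing, but the larger, previously excluded group more than makes up the difference—exactly the magnitudes comparison the text says has yet to be taken seriously.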

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

By a 3-2 vote, the Federal Communications Commission (FCC) decided on February 26 to preempt state laws in North Carolina and Tennessee that bar municipally-owned broadband providers from providing services beyond their geographic boundaries.  This decision raises substantial legal issues and threatens economic harm to state taxpayers and consumers.

The narrow FCC majority rested its decision on its authority to remove broadband investment barriers, citing Section 706 of the Telecommunications Act of 1996.  Section 706 requires the FCC to encourage the deployment of broadband to all Americans by using “measures that promote competition in the local telecommunications market, or other regulating methods that remove barriers to infrastructure investment.”  As dissenting Commissioner Ajit Pai pointed out, however, Section 706 contains no specific language empowering the FCC to preempt state laws, and the FCC’s action trenches upon the sovereign power of the states to control their subordinate governmental entities.  Moreover, it is far from clear that authorizing government-owned broadband companies to expand into new territories promotes competition or eliminates broadband investment barriers.  Indeed, the opposite is more likely to be the case.

Simply put, government-owned networks artificially displace market forces and are an affront to a reliance on free competition to provide the goods and services consumers demand – including broadband communications.  Government-owned networks use local taxpayer monies and federal grants (also taxpayer funded, of course) to compete unfairly with existing private sector providers.  Those taxpayer subsidies put privately funded networks at a competitive disadvantage, creating barriers to new private sector entry or expansion, as private businesses decide they cannot fairly compete against government-backed enterprises.  In turn, reduced private sector investment tends to diminish quality and effective consumer choice.

These conclusions are based on hard facts, not mere theory.  There is no evidence that municipal broadband is needed because “market failure” has deterred private sector provision of broadband – firms such as Verizon, AT&T, and Comcast spend many billions of dollars annually to maintain, upgrade, and expand their broadband networks.  Far more serious is the risk of “government failure.”  Municipal corporations, free from market discipline and accountability due to their public funding, may be expected to be bureaucratic, inefficient, and slow to react to changing market conditions.  Consistent with this observation, an economic study of government-operated municipal broadband networks reveals failures to achieve universal service in the areas they serve; a lack of cost-benefit analysis, causing costs to outweigh benefits; the inefficient use of scarce resources; the inability to cover costs; anticompetitive behavior fueled by unfair competitive advantages; the inefficient allocation of limited tax revenues, which are denied to more essential public services; and the stifling of private firm innovation.  In a time of tight budget constraints, the waste of taxpayer funds and the competitive harm stemming from municipal broadband activities are particularly unfortunate.  In short, real world evidence demonstrates that “[i]n a dynamic market such as broadband services, government ownership has proven to be an abject failure.”  What is required is not more government involvement, but, rather, fewer governmental constraints on private sector broadband activities.

Finally, the FCC’s decision has harmful constitutional overtones.  The Chattanooga, Tennessee and Wilson, North Carolina municipal broadband networks that requested FCC preemption impose troublesome speech limitations as conditions of service.  The utility that operates the Chattanooga network may “reject or remove any material residing on or transmitted to or through” the network that violates its “Accepted Use Policy.”  That Policy, among other things, prohibits using the network to send materials that are “threatening, abusive or hateful” or that offend “the privacy, publicity, or other personal rights of others.”  It also bars the posting of messages that are “intended to annoy or harass others.”  In a similar vein, the Wilson network bars transmission of materials that are “harassing, abusive, libelous or obscene” and “activities or actions intended to withhold or cloak any user’s identity or contact information.”  Content-based prohibitions of this type broadly restrict carriage of constitutionally protected speech and, thus, raise serious First Amendment questions.  Other municipal broadband systems may, of course, elect to adopt similarly questionable censorship-based policies.

In short, the FCC’s broadband preemption decision is likely to harm economic welfare and is highly problematic on legal grounds to boot.  The FCC should rescind that decision.  If it fails to do so, and if the courts do not strike the decision down, Congress should consider legislation to bar the FCC from meddling in state oversight of municipal broadband.

Joshua Wright is a Commissioner at the Federal Trade Commission

I’d like to thank Geoff and Thom for organizing this symposium and creating a forum for an open and frank exchange of ideas about the FTC’s unfair methods of competition authority under Section 5.  In offering my own views in a concrete proposed Policy Statement and speech earlier this summer, I hoped to encourage just such a discussion about how the Commission can define its authority to prosecute unfair methods of competition in a way that both strengthens the agency’s ability to target anticompetitive conduct and provides much needed guidance to the business community.  During the course of this symposium, I have enjoyed reading the many thoughtful posts providing feedback on my specific proposal, as well as offering other views on how guidance and limits can be imposed on the Commission’s unfair methods of competition authority.  Through this marketplace of ideas, I believe the Commission can develop a consensus position and finally accomplish the long overdue task of articulating its views on the application of the agency’s signature competition statute.  As this symposium comes to a close, I’d like to make a couple quick observations and respond to a few specific comments about my proposal.

There Exists a Vast Area of Agreement on Section 5

Although conventional wisdom may suggest it will be impossible to reach any meaningful consensus with respect to Section 5, this symposium demonstrates that there actually already exists a vast area of agreement on the subject.  In fact, it appears safe to draw at least two broad conclusions from the contributions that have been offered as part of this symposium.

First, an overwhelming majority of commentators believe that we need guidance on the scope of the FTC’s unfair methods of competition authority.  This is not surprising.  The absence of meaningful limiting principles distinguishing lawful conduct from unlawful conduct under Section 5 and the breadth of the Commission’s authority to prosecute unfair methods of competition creates significant uncertainty among the business community.  Moreover, without a coherent framework for applying Section 5, the Commission cannot possibly hope to fulfill Congress’s vision that Section 5 would play a key role in helping the FTC leverage its unique research and reporting functions to develop evidence-based competition policy.

Second, there is near unanimity that the FTC should challenge conduct as an unfair method of competition only if it results in “harm to competition” as that phrase is understood under the traditional federal antitrust laws.  Harm to competition is a concept that is readily understandable and deeply embedded in antitrust jurisprudence.  Incorporating this concept would require that any conduct challenged under Section 5 both harm the competitive process and harm consumers.  Under this approach, the FTC should not consider non-economic factors, such as whether the practice harms small business or whether it violates public morals, in deciding whether to prosecute conduct as an unfair method of competition.  This is a simple commitment, but one that is not currently enshrined in the law.  By tethering the definition of unfair methods of competition to modern economics and to the understanding of competitive harm articulated in contemporary antitrust jurisprudence, we would ensure that Section 5 enforcement focuses upon conduct that actually is anticompetitive.

While it is not surprising that commentators offering a diverse set of perspectives on the appropriate scope of the FTC’s unfair methods of competition authority would agree on these two points, I think it is important to note that this consensus covers much of the Section 5 debate while leaving some room for debate on the margins as to how the FTC can best use its unfair methods of competition authority to complement its mission of protecting competition.

Some Clarifications Regarding My Proposed Policy Statement

In the spirit of furthering the debate along those margins, I also briefly would like to correct the record, or at least provide some clarification, on a few aspects of my proposed Policy Statement.

First, contrary to David Balto’s suggestion, my proposed Policy Statement acknowledges the fact that Congress envisioned Section 5 to be an incipiency statute.  Indeed, the first element of my proposed definition of unfair methods of competition requires the FTC to show that the act or practice in question “harms or is likely to harm competition significantly.”  In fact, it is by prosecuting practices that have not yet resulted in harm to competition, but are likely to result in anticompetitive effects if allowed to continue, that my definition reaches “invitations to collude.”  Paul Denis raises an interesting question about how the FTC should assess the likelihood of harm to competition, and suggests doing so using an expected value test.  My proposed policy statement does just that by requiring the FTC to assess both the magnitude and probability of the competitive harm when determining whether a practice that has not yet harmed competition, but potentially is likely to, is an unfair method of competition under Section 5.  Where the probability of competitive harm is smaller, the Commission should not find an unfair method of competition without reason to believe the conduct poses a substantial harm.  Moreover, by requiring the FTC to show that the conduct in question results in “harm to competition” as that phrase is understood under the traditional federal antitrust laws, my proposal also incorporates all the temporal elements of harm discussed in the antitrust case law and therefore puts the Commission on the same footing as the courts.
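The expected-value idea can be sketched in a few lines of code.  This is a purely illustrative model with invented numbers and an invented threshold — it is not drawn from the proposed Policy Statement itself — but it captures the tradeoff described above: the lower the probability of competitive harm, the larger the potential magnitude must be before conduct counts as “likely to harm competition significantly.”

```python
# Stylized illustration only: the screen, the example numbers, and the
# threshold are invented for exposition, not taken from the Policy Statement.

def meets_likely_harm_screen(probability, magnitude, threshold=1.0):
    """Expected-value screen: probability of harm times its magnitude must
    clear a threshold, so low-probability conduct qualifies only when the
    potential harm is correspondingly substantial."""
    return probability * magnitude >= threshold

# High-probability conduct clears the screen even with modest potential harm.
print(meets_likely_harm_screen(0.9, 2.0))

# Low-probability conduct clears the screen only when the potential
# harm is large...
print(meets_likely_harm_screen(0.1, 15.0))

# ...and fails it when the potential harm is modest.
print(meets_likely_harm_screen(0.1, 2.0))
```

The design point is that a single expected-value product, rather than separate probability and magnitude cutoffs, lets the two factors trade off against each other continuously.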

Second, both Dan Crane and Marina Lao have suggested that the efficiencies screen I have proposed results in a null (or very small) set of cases because there is virtually no conduct for which some efficiencies cannot be claimed.  This suggestion stems from an apparent misunderstanding of the efficiencies screen.  What these comments fail to recognize is that the efficiencies screen I offer intentionally leverages the Commission’s considerable expertise in identifying the presence of cognizable efficiencies in the merger context and explicitly ties the analysis to the well-developed framework offered in the Horizontal Merger Guidelines.  As any antitrust practitioner can attest, the Commission does not credit “cognizable efficiencies” lightly and requires a rigorous showing that the claimed efficiencies are merger-specific, verifiable, and not derived from an anticompetitive reduction in output or service.  Fears that the efficiencies screen in the Section 5 context would immunize patently anticompetitive conduct because a firm nakedly asserts cost savings arising from the conduct without evidence supporting its claim are unwarranted.  Under this strict standard, the FTC would almost certainly have no trouble demonstrating no cognizable efficiencies exist in Dan’s “blowing up of the competitor’s factory” example because the very act of sabotage amounts to an anticompetitive reduction in output.

Third, Marina Lao further argues that permitting the FTC to challenge conduct as an unfair method of competition only when there are no cognizable efficiencies is too strict a standard and that it would be better to allow the agency to balance the harms against the efficiencies.  The current formulation of the Commission’s unfair methods of competition enforcement has proven unworkable in large part because it lacks clear boundaries and is both malleable and ambiguous.  In my view, in order to make Section 5 a meaningful statute, and one that can contribute productively to the Commission’s competition enforcement mission as envisioned by Congress, the Commission must first confine its unfair methods of competition authority to those areas where it can leverage its unique institutional capabilities to target the conduct most harmful to consumers.  This in no way requires the Commission to let anticompetitive conduct run rampant.  Where the FTC identifies and wants to challenge conduct with both harms and benefits, it is fully capable of doing so successfully in federal court under the traditional antitrust laws.

I cannot think of a contribution the Commission can make to the FTC’s competition mission that is more important than issuing a Policy Statement articulating the appropriate application of Section 5.  I look forward to continuing to exchange ideas with those both inside and outside the agency regarding how the Commission can provide guidance about its unfair methods of competition authority.  Thank you once again to Truth on the Market for organizing and hosting this symposium and to the many participants for their thoughtful contributions.

*The views expressed here are my own and do not reflect those of the Commission or any other Commissioner.

Tad Lipsky is a partner in the law firm of Latham & Watkins LLP.

The FTC’s struggle to provide guidance for its enforcement of Section 5’s Unfair Methods of Competition (UMC) clause (or not – some oppose the provision of forward guidance by the agency, much as one occasionally heard opposition to the concept of merger guidelines in 1968 and again in 1982) could evoke a much broader long-run issue: is a federal law regulating single-firm conduct worth the trouble?  Antitrust law has its hard spots and its soft spots: I imagine that most antitrust lawyers think they can define “naked” price-fixing and other hard-core cartel conduct, and they would defend having a law that prohibits it.  Similarly with a law that prohibits anticompetitive mergers.  Monopolization perhaps not so much: 123 years of Section 2 enforcement and the best our Supreme Court can do is the Grinnell standard, defining monopolization as the “willful acquisition or maintenance of [monopoly] power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident.”  Is this Grinnell definition that much better than “unfair methods of competition”?

The Court has created a few specific conduct categories within the Grinnell rubric: sham petitioning (objectively and subjectively baseless appeals for government action), predatory pricing (pricing below cost with a reasonable prospect of recoupment through the exercise of power obtained by achieving monopoly or disciplining competitors), and unlawful tying (using market power over one product to force the purchase of a distinct product – you probably know the rest).  These categories are neither perfectly clear (what measure of cost indicates a predatory price?) nor guaranteed to last (the presumption that a patent bestows market power within the meaning of the tying rule was abandoned in 2006).  At least the more specific categories give some guidance to lower courts, prosecutors, litigants and – most important of all – compliance-inclined businesses.  They provide more useful guidance than Grinnell.

The scope for differences of opinion regarding the definition of monopolization is at an historical zenith.  Some of the least civilized disagreements between the FTC and the Antitrust Division – the Justice Department’s visible contempt for the FTC’s ReaLemon decision in the early 1980s, or the three-Commissioner vilification of the Justice Department’s 2008 report on unilateral conduct – concern these differences.  In 2009, the Justice Department theatrically withdrew the 2008 report, claiming (against clear objective evidence to the contrary) that the issue had been settled in its favor by Lorain Journal, Aspen Skiing, and the D.C. Circuit decision in the main case involving Microsoft.

Although less noted in the copious scholarly output concerning UMC, disputes about the meaning of Section 5 are encouraged by the lack of definitive guidance on monopolization.  With every clarification provided by the Supreme Court, the FTC’s room for maneuver under UMC is reduced.  The FTC could not define sham litigation inconsistently with Professional Real Estate Investors v. Columbia Pictures Industries; it could not read recoupment out of the Brooke Group v. Brown & Williamson Tobacco Corp. definition of predatory pricing.

The fact remains that there has been less-than-satisfactory clarification of single-firm conduct standards under either statute.  Grinnell remains the only “guideline” for the vast territory of Section 2 enforcement (aside from the specific categories mentioned above), especially since the Supreme Court has shown no enthusiasm for either of the two main appellate-court approaches to a general test for unlawful unilateral conduct under Section 2, the “intent test” and the “essential facilities doctrine.”  (It has not rejected them, either.)  The current differences of opinion – even within the Commission itself, to say nothing of the appellate courts – are emblematic of a similar failure with regard to UMC.  Failure to clarify rules of such universal applicability has obvious costs and adverse impacts: creative and competitively benign business conduct is deterred (with corresponding losses in innovation, productivity and welfare), and the costs, delays, disruption and other burdens of litigation are amplified.  Are these costs worth bearing?

Years ago I heard it said that a certain old-line law firm had tightened its standards of partner performance: whereas formerly the firm would expel a partner who remained drunk for ten years, under the new rule a partner could remain drunk for only five years.  The antitrust standards for unilateral conduct have vacillated for over a century.  For a time (as exemplified by United States v. United Shoe Machinery Corp.) any act of self-preservation by a monopolist – even if “honestly industrial” – was presumptively unlawful if not compelled by outside circumstances.  Even Grinnell looks good compared to that, but Grinnell still fails to provide much help in most Section 2 cases; and the debate over UMC says the same about Section 5.  I do not advocate the repeal of either statute, but shouldn’t we expect that someone might want to tighten our standards?  Maybe we can allow a statute a hundred years to be clarified through common-law application.  Section 2 passed that milepost twenty-three years ago, and Section 5 reaches it next year.  We shouldn’t be surprised if someone wants to pull the plug beyond that point.