Archives For antitrust

In a Heritage Foundation paper released today, I argue that U.S. antidumping law should be reformed to incorporate principles drawn from the antitrust analysis of predatory pricing.  Such a change would transform antidumping law from a special-interest cronyist tool that harms U.S. consumers into a sensible procompetitive provision.  A brief summary of my paper follows.

Imports and Dumping

Imported goods and services provide great benefits to the American economy and to American consumers.  Imports contribute to U.S. job creation on a large scale, provide key components incorporated by U.S. manufacturers into their products, and substantially raise the purchasing power of American consumers.

Despite the benefits of imports, well-organized domestic industries have long sought to protect themselves from import competition by convincing governments to impose import restrictions that raise the costs of imported goods and thus reduce the demand for imports.  One of the best-known types of import restrictions (one that is allowed under international trade agreements and employed by many other countries as well) is an “antidumping duty,” a special tariff assessed on imported goods that allegedly are priced at “unfairly low” levels, below the prices charged for the same products in the exporter’s home market.

Product-specific U.S. antidumping investigations are undertaken by the U.S. Department of Commerce (DOC) and the U.S. International Trade Commission (USITC, an independent federal agency), in response to a petition from a U.S. producer, a group of U.S. producers, or a U.S. labor union.  The DOC determines if dumping has occurred and calculates the “dumping margin” (the difference between a “fair” and an “unfair” price) for the setting of antidumping tariffs.  The USITC decides whether a domestic industry has been “materially injured” by dumping.  If the USITC finds material injury, the DOC publishes an antidumping order, which requires importers of the investigated merchandise to post a cash deposit equal to the estimated dumping duty margins.
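
To make the mechanics concrete, here is a minimal sketch of how a dumping margin translates into a cash deposit. All figures, and the percentage-of-export-price convention, are illustrative assumptions; the DOC’s actual calculations involve many adjustments.

```python
# Hypothetical illustration of a dumping-margin calculation. All numbers
# are invented; the DOC's actual methodology involves many adjustments
# (netting out selling expenses, freight, and so on).

home_market_price = 120.0  # "fair" net price in the exporter's home market
us_import_price = 100.0    # net price charged on sales into the U.S.

# The margin is conventionally expressed as a share of the export price.
dumping_margin = (home_market_price - us_import_price) / us_import_price

# Cash deposit an importer would post on a $1,000,000 entry of the goods.
entry_value = 1_000_000.0
cash_deposit = dumping_margin * entry_value

print(f"Dumping margin: {dumping_margin:.1%}")              # 20.0%
print(f"Deposit on $1M of entries: ${cash_deposit:,.0f}")   # $200,000
```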

Economists define dumping as international “price discrimination” — the charging of lower prices (net of selling expenses and transportation) in a foreign market than in a domestic market for the same product.  Despite its bad-sounding label, price discrimination, whether foreign or domestic, is typically a perfectly legitimate, profitable business practice that benefits many consumers.  Price discrimination allows a producer to sell to additional price-sensitive consumers in the low-priced market, to their benefit:  Those consumers would have bought nothing at all if faced with a uniformly applied higher price.

Dumping harms domestic consumers and the overall economy only when the foreign seller successfully drives domestic producers out of business by charging an overly low “predatory” (below its cost) import price, monopolizes the domestic market, and then raises import prices to monopoly levels, thereby recouping any earlier losses.  In such a situation, domestic consumers pay higher prices over time due to the domestic monopoly, and domestic producers that exited the market due to predation suffer welfare losses as well.

The Problem with Current U.S. Antidumping Law

Although antidumping law originally was aimed at counteracting such predation, antidumping provisions long ago were reformulated to raise the likelihood that dumping would be found in matters under investigation.  In particular, 1974 legislation eliminated consideration of sales made below full production cost in the home market and promoted the use of “constructed value” calculations for home-market sales that included approximations for the cost of production, selling, general and administrative expenses, and an amount for profit.  This methodology, compared to the traditional approach of comparing actual net foreign product prices with net U.S. prices, tended to favor domestic producers by yielding higher margins of dumping.
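
A stylized comparison (with invented numbers) suggests why the constructed-value approach tends to produce higher dumping margins than a comparison of actual net prices:

```python
# Hypothetical comparison of the two methodologies described above.
# All figures are invented for illustration.

us_price = 100.0     # net U.S. import price
home_price = 102.0   # actual net home-market price

# Traditional approach: compare actual net prices.
margin_price_based = max(0.0, (home_price - us_price) / us_price)

# Constructed-value approach: build up a "fair" price from cost components.
cost_of_production = 90.0
sga_expenses = 12.0      # selling, general and administrative expenses
profit_allowance = 8.0   # an amount for profit
constructed_value = cost_of_production + sga_expenses + profit_allowance  # 110.0

margin_constructed = max(0.0, (constructed_value - us_price) / us_price)

print(f"Price-based margin:       {margin_price_based:.1%}")   # 2.0%
print(f"Constructed-value margin: {margin_constructed:.1%}")   # 10.0%
```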

The favoring of domestic industries continued with the Trade and Tariff Act of 1984, which compelled the USITC to use a “cumulation” analysis that could subject multiple countries to antidumping penalties if one country’s product was found to cause material injury to a domestic industry.  More specifically, under cumulation, if multiple countries are being investigated for dumping the same particular product and if exports from any one of those countries, or all in combination, are found to cause material injury, then all exports are made subject to an antidumping order.  Thus, imports from individual countries that individually could not be shown to cause material injury face a price increase — an anti–American consumer outcome that lacks any legitimate rationale.

These and other developments have further encouraged American industries to invoke antidumping as a protectionist mechanism.  Thus, it is not surprising that in recent decades, there has been a significant increase in the number of U.S. antidumping cases filed and the number of affirmative injury findings.  Also noteworthy is the proliferation of foreign antidumping laws since 1980, which harms American exporters. Overall, the economic impact of antidumping law on the American economy has grown substantially.  In short, antidumping is a cronyist special interest law that harms American consumers.

Moreover, even taking into account domestic industrial interests, prohibiting dumping likely would not have a positive effect on domestic industry as a whole.  Antidumping restrictions on imported raw materials and industrial products used by U.S. firms make it difficult for those firms to compete internationally; indeed, the USITC is statutorily barred from considering the impact of antidumping duties on these consuming industries.  The consuming industries are often a larger part of the U.S. economy than the industries benefitting from antidumping regulation, and producers of upstream products have become reliant on restricting customer access to foreign goods rather than better responding to their customers’ needs.

Furthermore, antidumping harms the U.S. economy by reducing American firms’ incentive to produce more efficiently.  Non-predatory dumping spurs domestic firms to produce more efficiently (at lower costs) so that they can reduce prices and compete with imports in order to remain in the market.  Finally, the existence of antidumping law may encourage implicit collusion among domestic firms and foreign firms to soften price competition.  The truth is that when domestic industries complain that non-predatory dumping is “unfair,” they are really objecting to competition on the merits — competition that raises overall long-term American economic welfare.

A New Antitrust-Based Predatory Pricing Test for Dumping

In sum, aggressive price competition by foreign producers benefits American consumers, enhances economic efficiency, and promotes competitive vigor — net benefits to the American economy.  Only below-cost “predatory dumping” by a foreign monopolist that allows it to drive out American producers and then charge monopoly prices to American consumers should be a source of U.S. policy concern and legal prohibition.

A test that would prohibit only harmful predatory dumping can be drawn directly from a standard developed by U.S. courts and scholars for determining illegal price predation under American antitrust law.  Applying that test in antidumping cases, antidumping tariffs would be imposed only when two conditions were satisfied.

First, the government would have to determine that the imports under scrutiny were priced at a below-cost level that caused the foreign producer to incur losses on the production and sale of those imports.  This would be a price below “average avoidable cost,” which would include all the costs that a firm could have avoided incurring by not producing the allegedly dumped products.

Second, if the first condition was satisfied, the government would have to show that the firm allegedly doing the dumping would be likely to “recoup” its losses — that is, to charge high monopoly prices for future imports that more than make up for its current losses on below-cost imports.
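
In schematic form, the proposed two-part test might be sketched as follows. The function names and inputs are hypothetical; real proceedings would of course require careful cost accounting and forward-looking market analysis.

```python
# Schematic sketch of the proposed antitrust-based antidumping test.
# All inputs are hypothetical; this illustrates the logic, not the
# paper's actual administrative procedure.

def below_average_avoidable_cost(import_price: float,
                                 avoidable_costs: float,
                                 units_sold: float) -> bool:
    """Prong 1: is the import priced below average avoidable cost
    (all costs the firm could have avoided by not producing)?"""
    average_avoidable_cost = avoidable_costs / units_sold
    return import_price < average_avoidable_cost

def recoupment_likely(current_losses: float,
                      expected_future_monopoly_profits: float) -> bool:
    """Prong 2: could post-predation monopoly pricing more than
    make up for the losses on below-cost imports?"""
    return expected_future_monopoly_profits > current_losses

def predatory_dumping(import_price, avoidable_costs, units_sold,
                      current_losses, expected_future_monopoly_profits) -> bool:
    # Antidumping tariffs would be imposed only if BOTH prongs hold.
    return (below_average_avoidable_cost(import_price, avoidable_costs, units_sold)
            and recoupment_likely(current_losses, expected_future_monopoly_profits))

# Example: price 8.0 is below AAC of 900,000 / 100,000 = 9.0, but expected
# future profits (50,000) fall short of current losses (100,000), so
# recoupment fails and no duty would issue.
print(predatory_dumping(8.0, 900_000.0, 100_000, 100_000.0, 50_000.0))  # False
```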

This proposed new antidumping methodology would be administrable.  Indeed, because it focuses narrowly and solely on certain readily ascertainable costs and data on domestic industry viability, it should be easier (and thus less costly) to apply than the broad and uncertain methodologies under current law.

Of perhaps greater significance, it could serve as a sign that the U.S. government favors competition on the merits and rejects special-interest cronyism — a message that could prove valuable in international negotiations aimed at having other nations’ antidumping regimes adopt a similar approach.  To the extent that other jurisdictions adopted reforms that emulated the new American approach, U.S. exporters would benefit from reduced barriers to trade, a further boon to the U.S. economy.

Conclusion

U.S. antidumping law should be reformed so that it is subject to a predatory pricing test drawn from American antitrust law.  Application of such a standard would strengthen the American economy and benefit U.S. consumers while precluding any truly predatory dumping designed to destroy domestic industries and monopolize American industrial sectors.

FTC Commissioner Josh Wright has some wise thoughts on how to handle a small GUPPI. I don’t mean the fish. Dissenting in part in the Commission’s disposition of the Family Dollar/Dollar Tree merger, Commissioner Wright calls for creating a safe harbor for mergers where the competitive concern is unilateral effects and the merger generates a low score on the “Gross Upward Pricing Pressure Index,” or “GUPPI.”

Before explaining why Wright is right on this one, some quick background on the GUPPI. In 2010, the DOJ and FTC revised their Horizontal Merger Guidelines to reflect better the actual practices the agencies follow in conducting pre-merger investigations. Perhaps the most notable new emphasis in the revised guidelines was a move away from market definition, the traditional starting point for merger analysis, and toward consideration of potentially adverse “unilateral” effects—i.e., anticompetitive harms that, unlike collusion or even non-collusive oligopolistic pricing, need not involve participation of any non-merging firms in the market. The primary unilateral effect emphasized by the new guidelines is that the merger may put “upward pricing pressure” on brand-differentiated but otherwise similar products sold by the merging firms. The guidelines maintain that when upward pricing pressure seems significant, it may be unnecessary to define the relevant market before concluding that an anticompetitive effect is likely.

The logic of upward pricing pressure is straightforward. Suppose five firms sell competing products (Products A-E) that, while largely substitutable, are differentiated by brand. Given the brand differentiation, some of the products are closer substitutes than others. If the closest substitute to Product A is Product B and vice-versa, then a merger between Producer A and Producer B may result in higher prices even if the remaining producers (C, D, and E) neither raise their prices nor reduce their output. The merged firm will know that if it raises the price of Product A, most of the lost sales will be diverted to Product B, which that firm also produces. Similarly, sales diverted from Product B will largely flow to Product A. Thus, the merged company, seeking to maximize its profits, may face pressure to raise the prices of Products A and/or B.

The GUPPI seeks to assess the likelihood, absent countervailing efficiencies, that the merged firm (e.g., Producer A combined with Producer B) would raise the price of one of its competing products (e.g., Product A), causing some of the lost sales on that product to be diverted to its substitute (e.g., Product B). The GUPPI on Product A would thus consist of:

(The Value of Sales Diverted to Product B) ÷ (The Foregone Revenues on Lost Product A Sales).

The value of sales diverted to Product B, the numerator, is equal to the number of units diverted from Product A to Product B times the profit margin (price minus marginal cost) on Product B. The foregone revenues on lost Product A sales, the denominator, is equal to the number of lost Product A sales times the price of Product A. Thus, the fraction set forth above is equal to:

(Number of A Sales Diverted to B × Unit Margin on B) ÷ (Number of A Sales Lost × Price of A).
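
Plugging hypothetical numbers into that expression (a sketch for illustration, not agency practice):

```python
# GUPPI on Product A, computed from the expression above.
# All numbers are hypothetical.

price_a = 10.0          # price of Product A
price_b = 10.0          # price of Product B
marginal_cost_b = 6.0   # marginal cost of Product B
lost_a_sales = 1_000    # unit sales of A lost when A's price rises
diverted_to_b = 300     # of those lost sales, units diverted to B
                        # (a 30% diversion ratio)

# Numerator: value of sales diverted to B (units diverted * B's unit margin).
value_diverted = diverted_to_b * (price_b - marginal_cost_b)

# Denominator: foregone revenues on lost A sales (units lost * A's price).
foregone_revenue = lost_a_sales * price_a

guppi_a = value_diverted / foregone_revenue
print(f"GUPPI on Product A: {guppi_a:.1%}")  # 12.0%
```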

The Guidelines do not specify how high the GUPPI for a particular product must be before competitive concerns are raised, but they do suggest that at some point, the GUPPI is so small that adverse unilateral effects are unlikely. (“If the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.”) Consistent with this observation, DOJ’s Antitrust Division has concluded that a GUPPI of less than 5% will not give rise to a merger challenge.

Commissioner Wright has split with his fellow commissioners over whether the FTC should similarly adopt a safe harbor for horizontal mergers where the adverse competitive concern is unilateral effects and the GUPPIs are less than 5%. Of the 330 markets in which the Commission is requiring divestiture of stores, 27 involve GUPPIs of less than 5%. Commissioner Wright’s position is that the combinations in those markets should be deemed to fall within a safe harbor. At the very least, he says, there should be some safe harbor for very small GUPPIs, even if it kicks in somewhere below the 5% level. The Commission has taken the position that there should be no safe harbor for mergers where the competitive concern is unilateral effects, no matter how low the GUPPI. Instead, the Commission majority says, GUPPI is just a starting point; once the GUPPIs are calculated, each market should be assessed in light of qualitative factors, and a gestalt-like, “all things considered” determination should be made.

The Commission majority purports to have taken this approach in the Family Dollar/Dollar Tree case. It claims that having used GUPPI to identify some markets that were presumptively troubling (markets where GUPPIs were above a certain level) and others that were presumptively not troubling (low-GUPPI markets), it went back and considered qualitative evidence for each, allowing the presumption to be rebutted where appropriate. As Commissioner Wright observes, though, the actual outcome of this purported process is curious: almost none of the “presumptively anticompetitive” markets were cleared based on qualitative evidence, whereas 27 of the “presumptively competitive” markets were slated for a divestiture despite the low GUPPI. In practice, the Commission seems to be using high GUPPIs to condemn unilateral effects mergers, while not allowing low GUPPIs to acquit them. Wright, by contrast, contends that a low-enough GUPPI should be sufficient to acquit a merger where the only plausible competitive concern is adverse unilateral effects.

He’s right on this, for at least five reasons.

  1. Virtually every merger involves a positive GUPPI. As long as any sales would be diverted from one merging firm to the other and the firms are pricing above cost (so that there is some profit margin on their products), a merger will involve a positive GUPPI. (Recall that the numerator in the GUPPI is “number of diverted sales * profit margin on the product to which sales are diverted.”) If qualitative evidence must be considered and a gestalt-like decision made in even low-GUPPI cases, then that’s the approach that will always be taken and GUPPI data will be essentially irrelevant.
  2. Calculating GUPPIs is hard. Figuring the GUPPI requires the agencies to make some difficult determinations. Calculating the “diversion ratio” (the percentage of lost A sales that are diverted to B when the price of A is raised) requires determinations of A’s “own-price elasticity of demand” as well as the “cross-price elasticity of demand” between A and B. Calculating the profit margin on B requires determining B’s marginal cost. Assessing elasticity of demand and marginal cost is notoriously difficult. This difficulty matters here for a couple of reasons:
    • First, why go through the difficult task of calculating GUPPIs if they won’t simplify the process of evaluating a merger? Under the Commission’s purported approach, once GUPPI is calculated, enforcers still have to consider all the other evidence and make an “all things considered” judgment. A better approach would be to cut off the additional analysis if the GUPPI is sufficiently small.
    • Second, given the difficulty of assessing marginal cost (which is necessary to determine the profit margin on the product to which sales are diverted), enforcers are likely to use a proxy, and the most commonly used proxy for marginal cost is average variable cost (i.e., the total non-fixed costs of producing the products at issue divided by the number of units produced). Average variable cost, though, tends to be smaller than marginal cost over the relevant range of output, which will cause the profit margin (price – “marginal” cost) on the product to which sales are diverted to appear higher than it actually is. And that will tend to overstate the GUPPI (see the numeric sketch following this list). Thus, at some point, a positive but low GUPPI should be deemed insignificant.
  3. The GUPPI is biased toward an indication of anticompetitive effect. GUPPI attempts to assess gross upward pricing pressure. It takes no account of factors that tend to prevent prices from rising. In particular, it ignores entry and repositioning by other product-differentiated firms, factors that constrain the merged firm’s ability to raise prices. It also ignores merger-induced efficiencies, which tend to put downward pressure on the merged firm’s prices. (Granted, the merger guidelines call for these factors to be considered eventually, but the factors are generally subject to higher proof standards. Efficiencies, in particular, are pretty difficult to establish under the guidelines.) The upshot is that the GUPPI is inherently biased toward an indication of anticompetitive harm. A safe harbor for mergers involving low GUPPIs would help counterbalance this built-in bias.
  4. Divergence from DOJ’s approach will create an arbitrary result. The FTC and DOJ’s Antitrust Division share responsibility for assessing proposed mergers. Having the two enforcement agencies use different standards in their evaluations injects a measure of arbitrariness into the law. In the interest of consistency, predictability, and other basic rule of law values, the agencies should get on the same page. (And, for reasons set forth above, DOJ’s is the better one.)
  5. A safe harbor is consistent with the Supreme Court’s decision-theoretic antitrust jurisprudence. In recent years, the Supreme Court has generally crafted antitrust rules to optimize the costs of errors and of making liability judgments (or, put differently, to “minimize the sum of error and decision costs”). On a number of occasions, the Court has explicitly observed that it is better to adopt a rule that will allow the occasional false acquittal if doing so will prevent greater costs from false convictions and administration. The Brooke Group rule that there can be no predatory pricing liability absent below-cost pricing, for example, is expressly not premised on the belief that low, but above-cost, pricing can never be anticompetitive; rather, the rule is justified on the ground that the false negatives it allows are less costly than the false positives and administrative difficulties a more “theoretically perfect” rule would generate. Indeed, the Supreme Court’s antitrust jurisprudence seems to have wholeheartedly endorsed Voltaire’s prudent aphorism, “The perfect is the enemy of the good.” It is thus no answer for the Commission to observe that adverse unilateral effects can sometimes occur when a combination involves a low (<5%) GUPPI. Low but above-cost pricing can sometimes be anticompetitive, but Brooke Group’s safe harbor is sensible and representative of the approach the Supreme Court thinks antitrust should take. The FTC should get on board.
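
To illustrate the measurement concern raised in point 2 above: if average variable cost runs below marginal cost, the computed GUPPI is inflated. A hypothetical numeric sketch:

```python
# Hypothetical illustration of the AVC-proxy bias described in item 2.
# Same GUPPI setup as the earlier sketch; only B's cost measure changes.

price_a, price_b = 10.0, 10.0
lost_a_sales, diverted_to_b = 1_000, 300

true_marginal_cost_b = 7.0     # unobservable in practice
average_variable_cost_b = 6.0  # the usual proxy; here it runs below MC

def guppi(cost_b):
    return (diverted_to_b * (price_b - cost_b)) / (lost_a_sales * price_a)

print(f"GUPPI with true MC:   {guppi(true_marginal_cost_b):.1%}")     # 9.0%
print(f"GUPPI with AVC proxy: {guppi(average_variable_cost_b):.1%}")  # 12.0%
# The proxy overstates B's unit margin (4.0 vs. 3.0), inflating the GUPPI
# by a third -- one reason a small positive GUPPI should not condemn a merger.
```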

One final point. It is important to note that Commissioner Wright is not saying—and would be wrong to say—that a high GUPPI should be sufficient to condemn a merger. The GUPPI has never been empirically verified as a means of identifying anticompetitive mergers. As Dennis Carlton observed, “[T]he use of UPP as a merger screen is untested; to my knowledge, there has been no empirical analysis that has been performed to validate its predictive value in assessing the competitive effects of mergers.” Dennis W. Carlton, Revising the Horizontal Merger Guidelines, 10 J. Competition L. & Econ. 1, 24 (2010). This dearth of empirical evidence seems especially problematic in light of the enforcement agencies’ spotty track record in predicting the effects of mergers. Craig Peters, for example, found that the agencies’ merger simulations produced wildly inaccurate predictions about the price effects of airline mergers. See Craig Peters, Evaluating the Performance of Merger Simulation: Evidence from the U.S. Airline Industry, 49 J.L. & Econ. 627 (2006). Professor Carlton thus warns (Carlton, supra, at 32):

UPP is effectively a simplified version of merger simulation. As such, Peters’s findings tell a cautionary tale—more such studies should be conducted before one treats UPP, or any other potential merger review method, as a consistently reliable methodology by which to identify anticompetitive mergers.

The Commission majority claims to agree that a high GUPPI alone should be insufficient to condemn a merger. But the actual outcome of the analysis in the case at hand—i.e., finding almost all combinations involving high GUPPIs to be anticompetitive, while deeming the procompetitive presumption to be rebutted in 27 low-GUPPI cases—suggests that the Commission is really allowing high GUPPIs to “prove” that anticompetitive harm is likely.

The point of dispute between Wright and the other commissioners, though, is about how to handle low GUPPIs. On that question, the Commission should either join the DOJ in recognizing a safe harbor for low-GUPPI mergers or play it straight with the public and delete the Horizontal Merger Guidelines’ observation that “[i]f the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.” The better approach would be to affirm the Guidelines and recognize a safe harbor.

On Thursday I will be participating in an ABA panel discussion on the Apple e-books case, along with Mark Ryan (former DOJ attorney) and Fiona Scott Morton (former DOJ economist), both of whom were key members of the DOJ team that brought the case. Details are below. Judging from the prep call, it should be a spirited discussion!

Readers looking for background on the case (as well as my own views — decidedly in opposition to those of the DOJ) can find my previous commentary on the case and some of the issues involved here:

Other TOTM authors have also weighed in. See, e.g.:

DETAILS:

ABA Section of Antitrust Law

Federal Civil Enforcement Committee, Joint Conduct, Unilateral Conduct, and Media & Tech Committees Present:

“The 2d Cir.’s Apple E-Books decision: Debating the merits and the meaning”

July 16, 2015
12:00 noon to 1:30 pm Eastern / 9:00 am to 10:30 am Pacific

On June 30, the Second Circuit affirmed DOJ’s trial victory over Apple in the Ebooks Case. The three-judge panel fractured in an interesting way: two judges affirmed the finding that Apple’s role in a “hub and spokes” conspiracy was unlawful per se; one judge also would have found a rule-of-reason violation; and the dissent — stating Apple had a “vertical” position and was challenging the leading seller’s “monopoly” — would have found no liability at all. What is the reasoning and precedent of the decision? Is “marketplace vigilantism” (the concurring judge’s phrase) ever justified? Our panel — which includes the former DOJ head of litigation involved in the case — will debate the issues.

Moderator

  • Ken Ewing, Steptoe & Johnson LLP

Panelists

  • Geoff Manne, International Center for Law & Economics
  • Fiona Scott Morton, Yale School of Management
  • Mark Ryan, Mayer Brown LLP

Register HERE

The most welfare-inimical restrictions on competition stem from governmental action, and the Organization for Economic Cooperation and Development’s newly promulgated “Competition Assessment Toolkit, Volume 3: Operational Manual” (“Toolkit 3,” approved by the OECD in late June 2015) provides useful additional guidance on how to evaluate and tackle such harmful market distortions. Toolkit 3 is a very helpful supplement to the first and second volumes of the Competition Assessment Toolkit. Commendably, Toolkit 3 promotes itself generally as a tool that can be employed by well-intentioned governments, rather than merely marketing itself as a manual for advocacy by national competition agencies (which may lack the political clout to sell reforms to other government bureaucracies or to legislators). It is a succinct, not highly technical document that can be used by a wide range of governments, and applied flexibly, in light of their resource constraints and institutional capacities. Let’s briefly survey Toolkit 3’s key provisions.

Toolkit 3 begins with a “competition checklist” that states that a competition assessment should be undertaken if a regulatory or legislative proposal has any one of four effects: (1) it limits the number or range of suppliers; (2) it limits the ability of suppliers to compete; (3) it reduces the incentive of suppliers to compete; or (4) it limits the choices and information available to consumers. The Toolkit then sets forth basic guidance on competition assessments in eight relatively short, clearly written chapters.

Chapter one begins by explaining that Toolkit 3 “shows how to assess laws, regulations, and policies for their competition effects, and how to revise regulations or policies to make them more procompetitive.” To that end, the chapter introduces the concept of market studies and sectoral reviews, and outlines a six-part process for carrying out competition assessments: (1) identify policies to assess; (2) apply the competition checklist (see above); (3) identify alternative options for achieving a policy objective; (4) select the best option; (5) implement the best option; and (6) review the impacts of an option once it has been implemented.

Chapter two provides general guidance on the selection of public policies for examination, with particular attention to the identification of sectors of the economy that have the greatest restraints on competition and a major impact on economic output and efficiency.

Chapter three focuses on competition screening through use of threshold questions embodied in the four-part competition checklist. It also provides examples of the sorts of regulations that fall into each category covered by the checklist.

Chapter four sets forth guidance for the examination of potential restrictions that have been flagged for evaluation by the checklist. It provides indicators for deciding whether or not “in-depth analysis” is required, delineates specific considerations that should be brought to bear in conducting an analysis, and provides a detailed example of the steps to be followed in assessing a hypothetical drug patent law (beginning with a preliminary assessment, followed by a detailed analysis, and ending with key findings).

Chapter five centers on identifying the policy options that allow a policymaker to achieve a desired objective with a minimum distortion of competition. It discusses: (1) identifying the purpose of a policy; (2) identifying the competition problems caused by the policy under examination and whether it is necessary to achieve the desired objective; (3) evaluating the technical features of the subject matter being regulated; (4) accounting for features of the broader regulatory environment that have an effect on the market in question, in order to develop alternatives; (5) understanding changes in the business or market environment that have occurred since the last policy implementation; and (6) identifying specific techniques that allow an objective to be achieved with a minimum distortion of competition. The chapter closes by briefly describing various policy approaches for achieving a hypothetical desired reform objective (promotion of generic drug competition).

Chapter six provides guidance on comparing the policy options that have been identified. After summarizing background concepts, it discusses qualitative analysis, quantitative analysis, and the measurement of costs and benefits. The cost-benefit section is particularly thorough, delving into data gathering, techniques of measurement, estimates of values, adjustments to values, and accounting for risk and uncertainty. These tools are then applied to a specific hypothetical involving pharmaceutical regulation, featuring an assessment of the advantages and disadvantages of alternative options.

Chapter seven outlines the steps that should be taken in submitting a recommendation for government action. Those involve: (1) selecting the best policy option; (2) presenting the recommendation to a decision-maker; (3) drafting a regulation that is needed to effectuate the desired policy option; (4) obtaining final approval; and (5) implementing the regulation. The chapter closes by applying this framework to hypothetical regulations.

Chapter eight discusses the performance of ex post evaluations of competition assessments, in order to determine whether the option chosen following the review process had the anticipated effects and was most appropriate. Four examples of ex post evaluations are summarized.

Toolkit 3 closes with a brief annex that describes mathematically and graphically the consumer benefits that arise when moving from a restrictive market equilibrium to a competitive equilibrium.
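
The annex’s basic point can be sketched with a linear demand curve. The parameters below are invented for illustration, and the annex itself is more general:

```python
# Hypothetical sketch of the annex's point: consumer benefits from moving
# a restricted market to the competitive equilibrium, under linear demand
# P = a - b*Q. Parameters are invented for illustration.

a, b = 100.0, 1.0         # demand intercept and slope
competitive_price = 40.0  # price at the competitive equilibrium (= marginal cost)
restricted_price = 70.0   # price under the restrictive regulation

def quantity(price):
    return (a - price) / b

def consumer_surplus(price):
    # Triangle under the demand curve and above the price paid.
    q = quantity(price)
    return 0.5 * (a - price) * q

cs_restricted = consumer_surplus(restricted_price)    # 0.5 * 30 * 30 = 450
cs_competitive = consumer_surplus(competitive_price)  # 0.5 * 60 * 60 = 1800

print(f"Consumer surplus gain from removing the restriction: "
      f"{cs_competitive - cs_restricted:.0f}")  # 1350
```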

In sum, the release of Toolkit 3 is best seen as one more small step forward in the long-term fight against state-managed regulatory capitalism and cronyism, on a par with increased attention to advocacy initiatives within the International Competition Network and growing World Bank efforts to highlight the welfare harm due to governmental regulatory impediments. Although anticompetitive government market distortions will remain a huge problem for the foreseeable future, at least international organizations are starting to acknowledge their severity and to provide conceptual tools for combatting them. Now it is up to free market proponents to work in the trenches to secure the political changes needed to bring such distortions – and their rent-seeking advocates – to heel. This is a long-term fight, but well worth the candle.

Today, in Kimble v. Marvel Entertainment, a case involving the technology underlying the Spider-Man Web-Blaster, the Supreme Court invoked stare decisis to uphold an old precedent based on bad economics. In so doing, the Court spun a tangled web of formalism that trapped economic common sense within it, forgetting that, as Spider-Man was warned in 1962, “with great power there must also come – great responsibility.”

In 1990, Stephen Kimble obtained a patent on a toy that allows children (and young-at-heart adults) to role-play as “a spider person” by shooting webs—really, pressurized foam string—“from the palm of [the] hand.” Marvel Entertainment made and sold a “Web-Blaster” toy based on Kimble’s invention, without remunerating him. Kimble sued Marvel for patent infringement in 1997, and the parties settled, with Marvel agreeing to buy Kimble’s patent for a lump sum (roughly a half-million dollars) plus a 3% royalty on future sales, with no end date set for the payment of royalties.

Marvel subsequently sought a declaratory judgment in federal district court confirming that it could stop paying Kimble royalties after the patent’s expiration date. The district court granted relief, the Ninth Circuit Court of Appeals affirmed, and the Supreme Court affirmed the Ninth Circuit. In an opinion by Justice Kagan, joined by Justices Scalia, Kennedy, Ginsburg, Breyer, and Sotomayor, the Court held that a patentee cannot continue to receive royalties for sales made after his patent expires. Invoking stare decisis, the Court reaffirmed Brulotte v. Thys (1964), which held that a patent licensing agreement that provided for the payment of royalties accruing after the patent’s expiration was illegal per se, because it extended the patent monopoly beyond its statutory time period. The Kimble Court stressed that stare decisis is “the preferred course,” and noted that though the Brulotte rule may prevent some parties from entering into deals they desire, parties can often find ways to achieve similar outcomes.

Justice Alito, joined by Chief Justice Roberts and Justice Thomas, dissented, arguing that Brulotte is a “baseless and damaging precedent” that interferes with the ability of parties to negotiate licensing agreements that reflect the true value of a patent. More specifically:

“There are . . . good reasons why parties sometimes prefer post-expiration royalties over upfront fees, and why such arrangements have pro-competitive effects. Patent holders and licensees are often unsure whether a patented idea will yield significant economic value, and it often takes years to monetize an innovation. In those circumstances, deferred royalty agreements are economically efficient. They encourage innovators, like universities, hospitals, and other institutions, to invest in research that might not yield marketable products until decades down the line. . . . And they allow producers to hedge their bets and develop more products by spreading licensing fees over longer periods. . . . By prohibiting these arrangements, Brulotte erects an obstacle to efficient patent use. In patent law and other areas, we have abandoned per se rules with similarly disruptive effects. . . . [T]he need to avoid Brulotte is an economic inefficiency in itself. . . . And the suggested alternatives do not provide the same benefits as post-expiration royalty agreements. . . . The sort of agreements that Brulotte prohibits would allow licensees to spread their costs, while also allowing patent holders to capitalize on slow-developing inventions.”
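
The dissent’s economic point can be made concrete with a simple present-value comparison: a stream of royalties, even one running past patent expiration, can be worth the same as a lump sum paid today, so banning the former simply removes a financing option. The figures below are hypothetical:

```python
# Hypothetical illustration of the dissent's point: a royalty stream can
# have the same present value as an upfront fee. All figures are invented.

discount_rate = 0.05
annual_royalty = 60_000.0  # e.g., 3% of $2M in annual sales
years = 20                 # royalty term, possibly extending past expiration

# Present value of the royalty stream.
pv_royalties = sum(annual_royalty / (1 + discount_rate) ** t
                   for t in range(1, years + 1))

print(f"PV of the royalty stream: ${pv_royalties:,.0f}")  # ~ $747,733

# A lump sum of roughly $748K today is equivalent in expectation -- but a
# cash-constrained licensee, or parties unsure whether the invention will
# ever pay off, may strongly prefer spreading payments over time.
```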

Furthermore, the Supreme Court was willing to overturn a nearly century-old antitrust precedent that absolutely barred resale price maintenance in the Leegin case, despite the fact that the precedent was extremely well known (much better known than the Brulotte rule) and had prompted a vast array of contractual workarounds. Given the seemingly greater weight of the Leegin precedent, why was stare decisis set aside in Leegin, but not in Kimble? The Kimble majority’s argument that stare decisis should weigh more heavily in patent than in antitrust because, unlike the antitrust laws, “the patent laws do not turn over exceptional law-shaping authority to the courts,” is unconvincing. As the dissent explains:

“[T]his distinction is unwarranted. We have been more willing to reexamine antitrust precedents because they have attributes of common-law decisions. I see no reason why the same approach should not apply where the precedent at issue, while purporting to apply a statute, is actually based on policy concerns. Indeed, we should be even more willing to reconsider such a precedent because the role implicitly assigned to the federal courts under the Sherman [Antitrust] Act has no parallel in Patent Act cases.”

Stare decisis undoubtedly promotes predictability and the rule of law and, relatedly, institutional stability and efficiency – considerations that go to the costs of administering the legal system and of formulating private conduct in light of prior judicial precedents. The cost-based efficiency considerations underlying the application of stare decisis to any particular rule must, however, be weighed against the net economic benefits associated with abandonment of that rule. The dissent in Kimble did this, but the majority opinion regrettably did not.

In sum, let us hope that in the future the Court keeps in mind its prior advice, cited in Justice Alito’s dissent, that “stare decisis is not an ‘inexorable command’,” and that “[r]evisiting precedent is particularly appropriate where . . . a departure would not upset expectations, the precedent consists of a judge-made rule . . . , and experience has pointed up the precedent’s shortcomings.”

The FTC recently required divestitures in two merger investigations (here and here), based largely on the majority’s conclusion that

[W]hen a proposed merger significantly increases concentration in an already highly concentrated market, a presumption of competitive harm is justified under both the Guidelines and well-established case law. (Emphasis added.)

Commissioner Wright dissented in both matters (here and here), contending that

[the majority’s] reliance upon such shorthand structural presumptions untethered from empirical evidence subsidize a shift away from the more rigorous and reliable economic tools embraced by the Merger Guidelines in favor of convenient but obsolete and less reliable economic analysis.

Josh has the better argument, of course. In both cases the majority relied upon its structural presumption rather than actual economic evidence to make out its case. But as Josh notes in his dissent in In the Matter of ZF Friedrichshafen and TRW Automotive (quoting his 2013 dissent in In the Matter of Fidelity National Financial, Inc. and Lender Processing Services):

there is no basis in modern economics to conclude with any modicum of reliability that increased concentration—without more—will increase post-merger incentives to coordinate. Thus, the Merger Guidelines require the federal antitrust agencies to develop additional evidence that supports the theory of coordination and, in particular, an inference that the merger increases incentives to coordinate.

Or as he points out in his dissent in In the Matter of Holcim Ltd. and Lafarge S.A.

The unifying theme of the unilateral effects analysis contemplated by the Merger Guidelines is that a particularized showing that post-merger competitive constraints are weakened or eliminated by the merger is superior to relying solely upon inferences of competitive effects drawn from changes in market structure.

It is unobjectionable (and uninteresting) that increased concentration may, all else equal, make coordination easier, or enhance unilateral effects in the case of merger to monopoly. There are even cases (as in generic pharmaceutical markets) where rigorous, targeted research exists, sufficient to support a presumption that a reduction in the number of firms would likely lessen competition. But generally (as in these cases), absent actual evidence, market shares might be helpful as an initial screen (and may suggest greater need for a thorough investigation), but they are not analytically probative in themselves. As Josh notes in his TRW dissent:

The relevant question is not whether the number of firms matters but how much it matters.

The majority in these cases asserts that it did find evidence sufficient to support its conclusions, but — and this is where the rubber meets the road — the question remains whether its limited evidentiary claims are sufficient, particularly given analyses that repeatedly come back to the structural presumption. As Josh says in his Holcim dissent:

it is my view that the investigation failed to adduce particularized evidence to elevate the anticipated likelihood of competitive effects from “possible” to “likely” under any of these theories. Without this necessary evidence, the only remaining factual basis upon which the Commission rests its decision is the fact that the merger will reduce the number of competitors from four to three or three to two. This is simply not enough evidence to support a reason to believe the proposed transaction will violate the Clayton Act in these Relevant Markets.

Looking at the majority’s statements, I see a few references to the kinds of market characteristics that could indicate competitive concerns — but very little actual analysis of whether these characteristics are sufficient to meet the Clayton Act standard in these particular markets. The question is — how much analysis is enough? I agree with Josh that the answer must be “more than is offered here,” but it’s an important question to explore more deeply.

Presumably that’s exactly what the ABA’s upcoming program will do, and I highly recommend interested readers attend or listen in. The program details are below.

The Use of Structural Presumptions in Merger Analysis

June 26, 2015, 12:00 PM – 1:15 PM ET

Moderator:

  • Brendan Coffman, Wilson Sonsini Goodrich & Rosati LLP

Speakers:

  • Angela Diveley, Office of Commissioner Joshua D. Wright, Federal Trade Commission
  • Abbott (Tad) Lipsky, Latham & Watkins LLP
  • Janusz Ordover, Compass Lexecon
  • Henry Su, Office of Chairwoman Edith Ramirez, Federal Trade Commission

In-person location:

Latham & Watkins
555 11th Street, NW
Ste 1000
Washington, DC 20004

Register here.

During the recent debate over whether to grant the Obama Administration “trade promotion authority” (TPA or fast track) to enter into major international trade agreements (such as the Trans-Pacific Partnership, or TPP), little attention has been directed to the problem of remaining anticompetitive governmental regulatory obstacles to liberalized trade and free markets.  Those remaining obstacles, which merit far more public attention, are highlighted in an article coauthored by Shanker Singham and me on competition policy and international trade distortions.

As our article explains, international trade agreements simply do not reach a variety of anticompetitive welfare-reducing government measures that create de facto trade barriers by favoring domestic interests over foreign competitors.  Moreover, many of these restraints are not in place to discriminate against foreign entities, but rather exist to promote certain favored firms. We dub these restrictions “anticompetitive market distortions” or “ACMDs,” in that they involve government actions that empower certain private interests to obtain or retain artificial competitive advantages over their rivals, be they foreign or domestic.  ACMDs are often a manifestation of cronyism, by which politically-connected enterprises successfully pressure government to shield them from effective competition, to the detriment of overall economic growth and welfare.  As we emphasize in our article, existing international trade rules have not been able to reach ACMDs, which include: (1) governmental restraints that distort markets and lessen competition; and (2) anticompetitive private arrangements that are backed by government actions, have substantial effects on trade outside the jurisdiction that imposes the restrictions, and are not readily susceptible to domestic competition law challenge.  Among the most pernicious ACMDs are those that artificially alter the cost-base as between competing firms. Such cost changes will have large and immediate effects on market shares, and therefore on international trade flows.

Likewise, with the growing internationalization of commerce, ACMDs not only diminish domestic consumer welfare – they increasingly may have a harmful effect on foreign enterprises that seek to do business in the country imposing the restraint.  The home nations of the affected foreign enterprises, moreover, may as a practical matter find it not feasible to apply their competition laws extraterritorially to curb the restraint, given issues of jurisdictional reach and comity (particularly if the restraint flies under the colors of domestic law).  Because ACMDs also have not been constrained by international trade liberalization initiatives, they pose a serious challenge to global welfare enhancement by curtailing potential trade and investment opportunities.

Interest group politics and associated rent-seeking by well-organized private actors are endemic to modern economic life, guaranteeing that ACMDs will not easily be dismantled.  What is to be done, then, to curb ACMDs?

As a first step, Shanker Singham and I have proposed the development of a metric to estimate the net welfare costs of ACMDs.  Such a metric could help strengthen the hand of international organizations (including the International Competition Network, the World Bank, and the OECD) – and of reform-minded public officials – in building the case for dismantling these restraints, or (as a last resort) replacing them with less costly means for benefiting favored constituencies.  (Singham, two other coauthors, and I have developed a draft paper that delineates a specific metric, which we hope will be suitable for public release in the near future.)

Furthermore, free market-oriented think tanks can also be helpful by highlighting the harm special interest governmental restraints impose on the economy and on economic freedom.  In that regard, the Heritage Foundation’s excellent work in opposing cronyism deserves special mention.

Working to eliminate ACMDs and thereby promoting economic liberty is an arduous long-term task – one that will only succeed in increments, one battle at a time (the current principled effort to eliminate the Ex-Im Bank, strongly supported by the Heritage Foundation, is one such example).  Nevertheless, it is very much worth the candle.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to take greater consideration of privacy concerns during merger review and have even been encouraged to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such data would reserve its discounts for those who can pay more while charging those who cannot afford the product a higher price. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.
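
A stylized two-segment example (with invented numbers) illustrates the output-expansion point:

```python
# Hypothetical two-segment illustration: data-enabled "personalized
# pricing" can expand output and serve consumers who would be priced out
# under a uniform price. All numbers are invented.

unit_cost = 20.0
segments = [(100.0, 1_000),  # (willingness to pay, number of consumers)
            (35.0, 2_000)]

def profit_and_output(price):
    buyers = sum(n for wtp, n in segments if wtp >= price)
    return (price - unit_cost) * buyers, buyers

# Uniform pricing: the firm picks the single profit-maximizing price.
best_price = max((wtp for wtp, _ in segments),
                 key=lambda p: profit_and_output(p)[0])
uniform_profit, uniform_output = profit_and_output(best_price)

# Personalized pricing: each segment pays (up to) its own valuation.
disc_output = sum(n for _, n in segments)

print(f"Uniform price: {best_price:.0f}; consumers served: {uniform_output:,}")
# -> Uniform price: 100; consumers served: 1,000
print(f"Consumers served under personalized pricing: {disc_output:,}")
# -> 3,000: the 2,000 price-sensitive consumers, priced out at 100,
#    now buy at 35.
```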

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if consumer welfare can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

On April 17, the Federal Trade Commission (FTC) voted three-to-two to enter into a consent agreement In the Matter of Cardinal Health, Inc., requiring Cardinal Health to disgorge funds as part of the settlement in this monopolization case.  As ably explained by dissenting Commissioners Josh Wright and Maureen Ohlhausen, the FTC wrongly required the disgorgement of funds in this case.  The settlement reflects an overzealous application of antitrust enforcement to unilateral conduct that may well be efficient.  It also manifests a highly inappropriate application of antitrust monetary relief that stands to increase private uncertainty, to the detriment of economic welfare.

The basic facts and allegations in this matter, drawn from the FTC’s statement accompanying the settlement, are as follows.  Through separate acquisitions in 2003 and 2004, Cardinal Health became the largest operator of radiopharmacies in the United States and the sole radiopharmacy operator in 25 relevant markets addressed by this settlement.  Radiopharmacies distribute and sell radiopharmaceuticals, which are drugs containing radioactive isotopes, used by hospitals and clinics to diagnose and treat diseases.  Notably, they typically derive at least 60% of their revenues from the sale of heart perfusion agents (“HPAs”), a type of radiopharmaceutical that healthcare providers use to conduct heart stress tests.  A practical consequence is that radiopharmacies cannot operate a financially viable and competitive business without access to an HPA.  Between 2003 and 2008, Cardinal allegedly employed various tactics to induce the only two manufacturers of HPAs in the United States, BMS and GE Amersham, to withhold HPA distribution rights from would-be radiopharmacy market entrants in violation of Section 2 of the Sherman Act.  Through these tactics Cardinal allegedly maintained exclusive dealing rights, denied its customers the benefits of competition, and profited from the monopoly prices it charged for all radiopharmaceuticals, including HPAs, in the relevant markets.  Importantly, according to the FTC, there was no efficiency benefit or legitimate business justification for Cardinal simultaneously maintaining exclusive distribution rights to the only two HPAs then available in the relevant markets.

This settlement raises two types of problems.

First, this was a single-firm conduct exclusive dealing case involving (at best) questionable anticompetitive effects.  As Josh Wright (citing the economics literature) pointed out in his dissent, “there are numerous plausible efficiency justifications for such [exclusive dealing] restraints.”  (Moreover, as Josh Wright and I stressed in an article on tying and exclusive dealing, “[e]xisting empirical evidence of the impact of exclusive dealing is scarce but generally favors the view that exclusive dealing is output‐enhancing”, suggesting that a (rebuttable) presumption of legality would be appropriate in this area.)  Indeed, in this case, Commissioner Wright explained that “[t]he tactics the Commission challenges could have been output-enhancing” in various markets.  Furthermore, Commissioner Wright emphasized that the data analysis showing that Cardinal charged higher prices in monopoly markets was “very fragile.  The data show that the impact of a second competitor on Cardinal’s prices is small, borderline statistically significant, and not robust to minor changes in specification.”  Commissioner Ohlhausen’s dissent reinforced Commissioner Wright’s critique of the majority’s exclusive dealing theory.  As she put it:

“[E]ven if the Commission could establish that Cardinal achieved some type of de facto exclusivity with both Bristol-Myers Squibb and General Electric Co. during the relevant time period (and that is less than clear), it is entirely unclear that such exclusivity – rather than, for example, insufficient demand for more than one radiopharmacy – caused the lack of entry within each of the relevant markets. That alternative explanation seems especially likely in the six relevant markets in which ‘Cardinal remains the sole or dominant radiopharmacy,’ notwithstanding the fact that whatever exclusivity Cardinal may have achieved admittedly expired in early 2008.  The complaint provides no basis for the assertion that Cardinal’s conduct during the 2003-2008 period has caused the lack of entry in those six markets during the past seven years.”

Furthermore, Commissioner Ohlhausen underscored Commissioner Wright’s critique of the empirical evidence in this case:  “[T]he evidence of anticompetitive effects in the relevant markets at issue is significantly lacking.  It is largely based on non-market-specific documentary evidence. The market-specific empirical evidence we do have implies very small (i.e. low single-digit) and often statistically insignificant price increases or no price increases at all.”
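The “not robust to minor changes in specification” point deserves a moment’s unpacking.  The sketch below is purely illustrative: it uses synthetic data and hypothetical variable names (local_cost, second_competitor), not the record evidence in this matter.  It shows how an estimated price effect of a second competitor can swing dramatically when a single plausible control variable is added or dropped, which is the sense in which a small, borderline-significant estimate offers weak support for an anticompetitive-effects finding.

```python
# Illustrative sketch only: synthetic data, not the Cardinal Health record.
# A small true price effect plus one omitted cost confounder can make the
# estimated "second competitor" effect look large and significant in one
# specification and small in another.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_markets, n_periods = 25, 24
df = pd.DataFrame({"market": np.repeat(np.arange(n_markets), n_periods)})

# Hypothetical confounder: entry is more likely where local costs are low.
df["local_cost"] = rng.normal(0.0, 0.05, len(df))
df["second_competitor"] = (
    df["local_cost"] + rng.normal(0.0, 0.05, len(df)) < 0
).astype(int)

# True competitive effect on log price is tiny: about -1%.
df["log_price"] = (
    2.0
    - 0.01 * df["second_competitor"]
    + df["local_cost"]
    + rng.normal(0.0, 0.05, len(df))
)

# Spec 1: market fixed effects only -- omits the cost confounder.
m1 = smf.ols("log_price ~ second_competitor + C(market)", data=df).fit()
# Spec 2: a "minor change in specification" -- add the cost control.
m2 = smf.ols("log_price ~ second_competitor + local_cost + C(market)",
             data=df).fit()

for name, m in (("spec 1", m1), ("spec 2", m2)):
    print(f"{name}: effect = {m.params['second_competitor']:+.4f}, "
          f"p = {m.pvalues['second_competitor']:.3f}")
```

By construction, the omitted cost variable is correlated with entry, so the first specification overstates the price effect severalfold relative to the second.  An enforcer facing real data cannot be sure which specification is right, and that is precisely the over-inference risk the dissents flagged.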

Second, the FTC’s requirement that Cardinal Health disgorge $26.8 million into a fund for allegedly injured consumers is unmeritorious and inappropriately chills potentially procompetitive behavior.  Commissioner Ohlhausen focused on how this case ran afoul of the FTC’s 2003 Policy Statement on Monetary Equitable Remedies in Competition Cases (Policy Statement) (withdrawn by the FTC in 2012, over Commissioner Ohlhausen’s dissent), which reserves disgorgement for cases in which the underlying violation is clear and there is a reasonable basis for calculating the amount of a remedial payment.  As Ohlhausen explained, this case violates those principles because (1) it does not involve a clear violation of the antitrust laws (see above) and, given the lack of anticompetitive effects evidence (see above), (2) there is no reasonable basis for calculating the disgorgement amount (indeed, there is “the real possibility of no ill-gotten gains for Cardinal”).  Furthermore:

“The lack of guidance from the Commission on the use of its disgorgement authority [following withdrawal of the Policy Statement] makes any such use inherently unpredictable and thus unfair. . . .  The Commission therefore ought to reinstate the Policy Statement – either in its original form or in some modified form that the current Commissioners can agree on – or provide some additional guidance on when it plans to seek the extraordinary remedy of disgorgement in antitrust cases.”

In his critique of disgorgement, Commissioner Wright deployed law and economics analysis (and, in particular, optimal deterrence theory).  He explained that regulators should be primarily concerned with over-deterrence in single-firm conduct cases such as this one, which raise the possibility of private treble damage actions.  Wright stressed:

“I would . . . pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single-firm conduct, only if the monopolist’s conduct violates the Sherman Act and has no plausible efficiency justification. . . .  This case does not belong in that category. Declining to pursue disgorgement in most cases involving vertical restraints has the virtue of taking the remedy off the table – and thus reducing the risk of over-deterrence – in the cases that present the most difficulty in distinguishing between anticompetitive conduct that harms consumers and procompetitive conduct that benefits them, such as the present case.”
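For readers unfamiliar with optimal deterrence theory, its core prescription can be stated compactly.  What follows is the standard Becker-Landes formulation from the law and economics literature, not anything drawn from the Commissioners’ statements.  The optimal nominal sanction is

\[
S^{*} \;=\; \frac{H}{p}, \qquad \text{so that} \quad p \cdot S^{*} = H,
\]

where \(H\) is the net harm caused by the violation and \(p\) is the probability that the conduct is detected and punished.  Treble damages are often rationalized on the assumption that roughly one violation in three is detected (\(p \approx 1/3\)), in which case private treble damage actions alone already impose an expected sanction of about \(p \cdot 3H = H\), the optimum.  Stacking disgorgement \(G\) on top yields an expected sanction of \(p(3H + G) > H\), and where the challenged conduct may well be procompetitive (so that \(H\) may be zero or negative), the over-deterrence problem is compounded.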

Commissioner Wright also shared Commissioner Ohlhausen’s concern about the lack of meaningful FTC guidance regarding when and whether it will seek disgorgement, and agreed with her that the FTC should reinstate the Policy Statement or provide new specific guidance in this area.  (See my 2012 ABA Antitrust Source article for a fuller critique of the antitrust error costs, chilling effects, and harmful international ramifications associated with the withdrawal of the Policy Statement.)

In sum, one may hope that in the future the FTC:  (1) will be more attentive to the potential efficiencies of exclusive dealing; (2) will proceed far more cautiously before proposing an enforcement action in the exclusive dealing area; (3) will avoid applying disgorgement in exclusive dealing cases; and (4) will promulgate a new disgorgement policy statement that reserves disgorgement for unequivocally illegal antitrust offenses in which economic harm can readily be calculated with a high degree of certainty.

The FCC’s “Open Internet Order,” which imposes heavy-handed “common carrier” regulation on Internet service providers in the name of promoting “net neutrality,” is fundamentally misconceived.  (The Order is being appealed in federal court, and there are good arguments for striking it down.)  If upheld, it will slow innovation, impose substantial costs, and harm consumers (see Heritage Foundation commentaries on FCC Internet regulation here, here, here, and here).  What’s more, it is not needed to protect consumers and competition from potential future abuse by Internet firms.  As I explain in a Heritage Foundation Legal Memorandum published yesterday, should the Open Internet Order be struck down, the U.S. Federal Trade Commission (FTC) has ample authority under Section 5 of the Federal Trade Commission Act (FTC Act) to challenge any harmful conduct by entities involved in Internet broadband services markets when such conduct undermines competition or harms consumers.

Section 5 of the FTC Act authorizes the FTC to prevent persons, partnerships, or corporations from engaging in “unfair methods of competition” or “unfair or deceptive acts or practices” in or affecting commerce.  This gives it ample authority to challenge Internet abuses raising antitrust (unfair methods) and consumer protection (unfair acts or practices) issues.

On the antitrust side, in evaluating individual business restraints under a “rule of reason,” the FTC relies on objective fact-specific analyses of the actual economic and consumer protection implications of a particular restraint.  Thus, FTC evaluations of broadband industry restrictions are likely to be more objective and predictable than highly subjective “public interest” assessments by the FCC, leading to reduced error and lower planning costs for purveyors of broadband and related services.  Appropriate antitrust evaluation should accord broad leeway to most broadband contracts.  As FTC Commissioner Josh Wright put it in testifying before Congress, “fundamental observation and market experience [demonstrate] that the business practices at the heart of the net neutrality debate are generally procompetitive.”  This suggests application of a rule of reason that will fully weigh efficiencies but not shy away from challenging broadband-related contractual arrangements that undermine the competitive process.

On the consumer protection side, the FTC can attack statements made by businesses that mislead and thereby impose harm on consumers (including business purchasers) who are acting reasonably.  It can also challenge practices that, though not literally false or deceptive, impose substantial harm on consumers (including business purchasers) that they cannot reasonably avoid, assuming the harm is greater than any countervailing benefits.  These are carefully designed and cabined sources of authority that require the FTC to determine the presence of actual consumer harm before acting.  Application of the FTC’s unfairness and deception powers therefore lacks the uncertainty associated with the FCC’s uncabined and vague “public interest” standard of evaluation.  As in the case of antitrust, the existence of greater clarity and a well-defined analytic methodology suggests that reliance on FTC rather than FCC enforcement in this area is preferable from a policy standpoint.

Finally, arguments for relying on FTC Internet policing are based on experience as well – the FTC is no Internet policy novice.  It closely monitors Internet activity and, over the years, it has developed substantial expertise in Internet topics through research, hearings, and enforcement actions.

Most recently, for example, the FTC sued AT&T in federal court for allegedly slowing wireless customers’ Internet speeds even though the customers had subscribed to “unlimited” data usage plans.  The FTC asserted that in offering renewals to unlimited-plan customers, AT&T did not adequately inform them of a new policy to “throttle” (drastically reduce the speed of) customer data service once a certain monthly data usage cap was met.  The direct harm of throttling came on top of the high fees that dissatisfied customers would face if they terminated their service early.  The FTC characterized this behavior as both “unfair” and “deceptive.”  Moreover, the Commission claimed that throttling-related speed reductions and data restrictions were not determined by real-time network congestion and thus did not even qualify as reasonable network management activity.  This case illustrates that the FTC is perfectly capable of challenging potential “network neutrality” violations that harm consumer welfare (“throttled” customers receive service inferior to that afforded customers on “tiered” service plans), making FCC involvement unwarranted.

In sum, if a court strikes down the latest FCC effort to regulate the Internet, the FTC has ample authority to address competition and consumer protection problems in the area of broadband, including questions related to net neutrality.  The FTC’s highly structured, analytic, fact-based approach to these issues is superior to FCC net neutrality regulation based on vague and unfocused notions of the public interest.  If the courts instead leave the Order in place, Congress might wish to consider legislation to prohibit FCC Internet regulation and leave oversight of potential competitive and consumer abuses to the FTC.