
I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of this sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.

Ioannis Lianos is Professor of Global Competition Law and Public Policy, UCL Faculty of Laws, and Chief Researcher, HSE-Skolkovo Institute for Law and Development.

The recently notified mergers in the seed and agro-chem industry raise difficult questions that competition authorities around the world will need to tackle in the coming months. Because of the size of their markets, the decisions reached by the US and EU competition authorities will be particularly significant for the merging parties, but the perspectives of a number of other competition authorities in emerging and developing economies, in particular the BRICS, will also play an important role if the transactions are to move forward.

The factors of production segment of the food value chain, which has been the focus of most recent merger activity, has been marked by profound transformations over the last three decades. One may note the development of new technologies, from deliberate hybridization through marker-assisted breeding to the most recent advances in genetic engineering and genome editing with CRISPR/Cas technology, as well as the advent of “digital agriculture” and “precision farming”. These technologies are of course protected by IP rights consisting of patents, plant variety rights, trademarks, trade secrets, and geographical indications.

These IP rights enable seed companies to prevent farmers from saving seeds of the protected variety, sharing them with their neighbours, or selling them informally (“brown bagging”), but also to prevent competing plant breeders from using a protected variety in the development of a new variety (cumulative innovation), and to prevent competing seed producers from multiplying and marketing the protected variety without a license or from using a protected product name and logos. Seed laws requiring compulsory seed certification with the aim of policing seed quality also offer breeders some form of protection in the absence of IPRs.

Technology-driven growth has not been the only major transformation of this economic sector. Its consolidation, in particular in the factors of production segment, has been particularly important in recent years.

The consolidation of the factors of production segment

Concentration in the world and EU markets for seeds

In the seeds sector, a number of merger waves, starting in the mid-1980s, led to the emergence thirty years later of a relatively concentrated market structure with six big players (Monsanto, Syngenta, DuPont, BASF, Bayer, and Dow).

The most recent merger wave started in July 2014 when Monsanto made a number of acquisition offers to Syngenta. These offers were rejected, but the Monsanto bid triggered a number of other M&A transactions that were announced in 2015 and 2016 between the various market leaders in the factors of production segment. In November 2015, Syngenta accepted the offer of ChemChina (which owns ADAMA, one of the largest agrochemical companies in the world). In December 2015, DuPont and Dow announced their merger. In September 2016, Bayer put forward a merger deal with Monsanto. During the same month, a deal was announced between two of the leaders in the market for fertilizers, Potash Corp and Agrium. In November 2015, it was reported that Deere & Co. (the leader in agricultural machinery) had agreed to buy Monsanto’s precision farming business. This deal was opposed by the US Department of Justice as it would have led Deere to control a significant part of the already highly concentrated US high-speed precision planting systems market.

The level of concentration varies according to the geographical market and the type of crop. If one looks at the situation in Europe with regard to the sale of seeds, the market appears to be less concentrated than the global seed market. The picture is also slightly different for certain types of crop. For instance, it is reported that the seed market for sugar beets shows the highest concentration, with the three largest companies (CR3) controlling a staggering 79% of the market (HHI: 2444), while for maize seeds the CR3 is 56% (HHI: 1425). High levels of concentration are also noted in the market for tomato seeds, with Monsanto controlling 20% of registered seed varieties. What is more striking, however, is the speed of this consolidation process: the bulk of the increase in the industry’s concentration occurred in the last twenty years, the levels of concentration in the mid-1990s being close to those of 1985.
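For readers unfamiliar with these concentration measures, the arithmetic is simple: CR3 is the combined share of the three largest firms, and the HHI is the sum of the squared percentage shares of all firms. A minimal sketch, using hypothetical shares rather than the actual market data cited above:

```python
def cr_n(shares, n=3):
    """Concentration ratio: combined share of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s ** 2 for s in shares)

# Hypothetical percentage shares for an illustrative six-firm seed market.
shares = [35, 25, 19, 10, 6, 5]

print(cr_n(shares))  # 79
print(hhi(shares))   # 2372
```

Under the thresholds of the 2010 US Horizontal Merger Guidelines, an HHI above 2,500 marks a highly concentrated market and one between 1,500 and 2,500 a moderately concentrated one, which puts the sugar beet figure reported above near the top of the moderately concentrated band.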

But the existence of a relatively concentrated market constitutes the tip of a much bigger consolidation iceberg between the market leaders that takes various forms: joint ventures, various cross-licensing and trait licensing agreements between the “Big Six”, distribution agreements, collaborations, research agreements and R&D strategic alliances, patent litigation settlements, to which one may add the recently concluded post-patent genetic trait agreements. Furthermore, one cannot exclude the possibility of consolidation by stealth, in view of the substantial growth of common ownership in various sectors of the economy, as institutional investors simultaneously hold large blocks of shares in several same-industry firms.

Which concentration level will be considered for merger purposes?

Market structure and concentration are, of course, just one step in the assessment of mergers and should be followed by a more thorough analysis of possible anticompetitive effects and efficiencies if the post-merger level of concentration raises concerns. While the EU market for seeds could not be characterized as highly concentrated before this most recent merger wave, at least on the conventional HHI measure, it remains possible that if the mergers first notified to the European Commission are approved without conditions with regard to seed markets, the concentration level the Commission considers when assessing each subsequently notified merger will rise accordingly. One may project that, because the Dow/DuPont merger was recently cleared without conditions relating to the seed industry, it will be more difficult for the ChemChina/Syngenta merger to be approved without conditions, and more difficult still for the Bayer/Monsanto merger, which will be the last one examined. Indeed, as the Commission made clear in its press release announcing its decision on the Dow/DuPont transaction,

The Commission examines each case on its own merits. In line with its case practice, the Commission assesses parallel transactions according to the so-called “priority rule” – first come, first served. The assessment of the merger between Dow and DuPont has been based on the currently prevailing market situation.

The assessment of whether a merger would give rise to a significant impediment to effective competition (SIEC) is based on a counterfactual analysis in which the post-merger scenario is compared to a hypothetical scenario absent the merger in question. The latter is normally taken to be the same as the situation before the merger is consummated. However, the Commission may take into account future changes to the market that can “reasonably be foreseen”. The identification of the proper counterfactual can be complicated when more than one merger occurs in parallel in the same relevant market. Under the mandatory notification regime, the Commission does not factor into the counterfactual analysis a merger notified after the one under assessment. On the basis of the identified counterfactual, the Commission then proceeds to the definition of the relevant product and geographic markets. This means that when assessing the Dow/DuPont merger, the Commission did not take into account the (future) market situation that would result from the notified merger between ChemChina and Syngenta, even though that transaction was a known fact during the assessment, because it was notified a few months after the Dow/DuPont transaction.
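The mechanics of the priority rule can be illustrated with a small sketch: each merger is assessed against the market structure prevailing at its notification, so every earlier clearance raises the concentration baseline for the next assessment. The firms and percentage shares below are hypothetical, chosen only to show the ratchet effect:

```python
# Hypothetical percentage shares in a single relevant market.
shares = {"A": 20, "B": 18, "C": 17, "D": 15, "E": 15, "F": 15}

def hhi(s):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(v ** 2 for v in s.values())

# Parallel mergers in order of notification ("first come, first served").
mergers = [("A", "B"), ("C", "D"), ("E", "F")]

for acquirer, target in mergers:
    baseline = hhi(shares)                  # market structure at assessment time
    shares[acquirer] += shares.pop(target)  # assume the deal is cleared
    print(f"{acquirer}+{target}: baseline HHI {baseline} -> post-merger HHI {hhi(shares)}")
```

In this toy market the first deal is assessed against a baseline HHI of 1688, the second against 2408, and the third against 2918, even though all three transactions are structurally identical; later-notified parties thus face a progressively less favourable counterfactual.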

Explaining concentration levels

The consolidation of the industry may be explained by various factors at play. One may put forward a “natural” causes explanation, in view of the existence of endogenous sunk costs that may lead to a reduction in the number of firms active in this industry. John Sutton has famously argued that high concentration may persist in many manufacturing industries, even in the presence of a substantial increase in demand and output, when firms in the industry decide to incur, in addition to “exogenous sunk costs” (the costs that any firm must incur upon entry into the market), “endogenous sunk costs”, which include costs for R&D and other process innovations, with the aim of increasing their price-cost margins. If all firms invest in endogenous sunk costs, in the long run this investment will produce little or no profit, as the competitive advantage gained by each firm’s investment will be largely neutralized if all other firms make a similar investment. This may lead to a fall in the industry’s long-term profitability and to a concentrated market. The recent consolidation movement in the industry may also be understood as a way to deal with externalities arising from the expansion of IP protection in recent decades.

Consolidation may also occur because of the merging companies’ quest for market share by purchasing potential competition, acquiring local market leaders or companies with diversified distribution networks and an established customer base. Market leaders may also strive to constitute one-stop shop platforms for farmers, combining an offering of seeds, traits, and chemicals, that would enhance the farmers’ technological dependence vis-à-vis large agrochemical and seed companies.

These large agro-chem groups, forming a tight oligopoly, will be able to exploit any network effects that may result from the shift towards data-driven agriculture and to block new entry into the factors of production markets. It is increasingly clear that market players in this industry have chosen to position themselves as fully integrated providers, or as the orchestrators/partners of an established network, offering a package of genetic transformation technology and genomics, traits, seeds, and chemicals. One may argue that this package of “complementary” products and technologies may form a system competing with other systems (“systems competition”). A question that would need to be tackled when assessing the plausibility of the “systems competition” thesis is whether there exist distinct relevant markets affected by the mergers. Could the research, breeding, and development/marketing of the various kinds of seeds be considered parts of the same relevant market, or of different ones? I address this question and the effects of these mergers on output, prices, and consumer choice in more detail in a separate paper (I. Lianos & D. Katalevsky, Merger Activity in the Factors of Production Segments of the Food Value Chain: A Critical Assessment (forthcoming)).

Theories and assessment of harm to innovation

Because of space constraints, I will focus here only on the assessment of the possible effects of these mergers on innovation. The emergence of integrated technology/traits/seeds/chemicals platforms may raise barriers to entry, as companies wishing to enter the market(s) would need to offer an integrated solution to farmers. This may stifle disruptive innovation if, absent the merger, firms would have been able to enter one or two segments of the market (e.g. research and breeding) without needing to offer an “integrated” platform product. One should also note that although traditional breeding methods required substantial resources and a considerable investment of time (because of long breeding cycles), and thus generated large economies of scale favouring the emergence of large market players, the latest genome-editing technologies, particularly CRISPR/Cas, may constitute more efficient, less resource-intensive and less time-consuming breeding methods that offer opportunities for the emergence of more competitive and less integrated market structures in the traits/seeds segment(s).

Assessing the effects on innovation will be a crucial part of the merger assessment, for the European Commission as well as for all other competition authorities with jurisdiction to examine the specific merger(s). It is true that the EU market is mainly a conventional seed market, and not a GM seeds market, but it is also clear that all of the Big Six have an integrated strategy for R&D for all types of crops, working on “traditional” marker-assisted breeding, or the more recent forms of predictive breeding that have become commercially possible with the reduction of the cost of genome sequencing and the use of IT, but also on genetically engineered seeds. Assessing the possible effects of each merger on innovation will be a quite complex exercise in view of the need to focus not only on existing technologies but also on the possibility of new technologies emerging in the future.

Competition authorities may use different methodologies to assess these future effects: the definition of innovation markets, as is the case in the US, or a more general assessment of whether an effect on innovation constitutes a SIEC, as in Europe. In its recent decision on the Dow/DuPont merger, the European Commission found that the merger could reduce innovation competition for pesticides, looking at both the ability and the incentive of the parties to innovate. The Commission emphasised that this analysis was not general but was based on “specific evidence that the merged entity would have lower incentives and a lower ability to innovate than Dow and DuPont separately” and “that the merged entity would have cut back on the amount they spent on developing innovative products”. That said, the Commission also mentioned the following, which I think may be of relevance to the competition assessment of the other pending mergers:

Only five companies (BASF, Bayer, Syngenta and the merging parties) are globally active throughout the entire R&D process, from discovery of new active ingredients (molecules producing the desired biological effect), their development, testing and regulatory registration, to the manufacture and sale of final formulated products through national distribution channels. Other competitors have no or more limited R&D capabilities (e.g. as regards geographic focus or product range). After the merger, only three global integrated players would remain to compete with the merged company, in an industry with very high barriers to entry. The number of players active in specific innovation areas would be even lower than at the overall industry level.

This type of assessment comes close to the filter the Commission usually employs in its Technology Transfer Guidelines, under which a licensing agreement is unlikely to restrict competition, and thus infringe Article 101 TFEU, if there exist at least four independent technologies that constitute commercially viable alternatives to the licensed technology controlled by the parties to the agreement. There is no reason why the Commission would apply a different approach in the context of merger control. The above indicates that the Commission may view more negatively mergers that would leave fewer than three or four independent technologies in the relevant market(s).

Hidden/Not usually considered social costs

One may also assess the mergers in the seeds and agro-chem market from a public interest perspective, in view of the broader concerns animating public policy in this context and the existence of a nexus of international commitments with regard to biodiversity, sustainability, and the right to food, as well as the emphasis put by some competition law regimes on public interest analysis (e.g. South Africa). The aim would be to assess the full social costs of these transactions, to the extent, of course, that this is practically possible. This may be more achievable in merger control regimes where the final decision to clear, or not to clear, a merger is made not by courts, where there may be limits to the adjudication of certain broader public interest concerns, but by integrated competition law agencies or branches of the executive power, as is formally the case in the EU.

Although public interest considerations do not form part of the substantive test of EU merger control, Article 21(4) EUMR includes a legitimate interest clause, which provides that Member States may take appropriate measures to protect three specified legitimate interests (public security, plurality of the media, and prudential rules), as well as other public interests that are recognised by the Commission after notification by the Member State. If a Member State wishes to claim an additional legitimate interest beyond the ones listed above, it must communicate this to the Commission, which must then decide, within 25 working days, whether the additional interest is compatible with EU law and qualifies as an Article 21(4) legitimate interest. Such recognition should not be excluded a priori, in particular in view of the importance of biodiversity, environmental protection, and employment in the EU treaties, as well as broader international commitments to the right to food.

Food production is, of course, an area of great economic and geopolitical importance. According to UN estimates, by 2050 the world population will increase to nine billion, and meeting the additional demand would require producing 70% more food. This puts strong pressure on producers to increase output, which further intensifies the environmental impact of agriculture, given mounting sustainability challenges (degradation of soil and reduction of arable land due to urban sprawl, water scarcity, biofuel consumption, climate change, etc.). Food security is becoming an increasingly important issue on the agenda of the developing world.

The projected mergers in the seed and agro-chem industry will greatly affect the future control of food production and of the innovation needed to improve yields and feed the world. One may ask whether such important decisions should be based on a narrowly confined test that focuses mostly on effects on output, prices and, to a certain extent, innovation, or whether one should adopt a broader consideration of the full social costs of such transactions, to the extent that these may be assessed and eventually quantified.

This may have the additional benefit of enabling a number of NGOs representing broader citizens’ interests in environmental protection and biodiversity to participate in the merger process as third parties, which is currently impossible given the quite narrow procedural requirements for third-party intervenors in EU merger control (the test for admission is usually met only by competitors, suppliers, and customers). I think all affected interests and stakeholders should be offered an opportunity to participate in the decision-making process, thus increasing its efficiency (if one takes a participation-centred approach) and its legitimacy, in particular on matters of major social importance such as the control of the global food supply chain(s).

It may be argued, if one takes a pessimistic, Malthusian perspective, that we are doomed to face famine and malnutrition unless considerable amounts of investment are made in R&D in this sector. In view of the fall in public investment and the important role private investment has played in this area, one may argue that higher levels of consolidation in the sector could lead to higher profitability (at the expense of farmers) without necessarily having immediate effects on food prices, as the farming segment is marked by atomistic competition in most markets, so farmers will not have the ability, at least in the short term, to pass on any overcharges to final consumers. Of course, such an approach may not factor in the effects of these mergers on the livelihoods of around half a billion farmers in the world and their families, most of whom do not benefit from subsidies guaranteeing an acceptable standard of living.

It also assumes that higher profitability would lead to higher investment in R&D, a claim that has recently been questioned by research indicating that large firms prefer to retain earnings and distribute them to shareholders and management rather than invest them in R&D. More generally, a simple question one may ask is: are the projected mergers necessary in order to promote innovation in this sector? Answering this question may bring considerable clarity about the various dimensions of these mergers that competition authorities would need to take into account. And the burden of providing a convincing answer remains on the notifying parties.

The American Bar Association Antitrust Section’s Presidential Transition Report (“Report”), released on January 24, provides a helpful practitioners’ perspective on the state of federal antitrust and consumer protection enforcement, and propounds a variety of useful recommendations for marginal improvements in agency practices, particularly with respect to improving enforcement transparency and reducing enforcement-related costs.  It also makes several good observations on the interplay of antitrust and regulation, and commendably notes the importance of promoting U.S. leadership in international antitrust policy.  This is all well and good.  Nevertheless, the Report’s discussion of various substantive topics raises a number of concerns that seriously detract from its utility, which I summarize below.  Accordingly, I recommend that the new Administration accord respectful attention to the Report’s discussion of process improvements and international developments, but ignore its discussion of novel substantive antitrust theories, vertical restraints, and intellectual property.

1.  The Big Picture: Too Much Attention Paid to Antitrust “Possibility Theorems”

In discussing substance, the Report trots out all the theoretical stories of possible anticompetitive harm raised over the last decade or so, such as “product hopping” (“minor” pharmaceutical improvements based on new patents that are portrayed as exclusionary devices), “contracts that reference rivals” (discount schemes that purportedly harm competition by limiting sourcing from a supplier’s rivals), “hold-ups” by patentees (demands by patentees for “overly high” royalties on their legitimate property rights), and so forth.  What the Report ignores is the costs that these new theories impose on the competitive system, and, in particular, on incentives to innovate.  These new theories often are directed at innovative novel business practices that may have the potential to confer substantial efficiency benefits – including enhanced innovation and economic growth – on the American economy.  Unproven theories of harm may disincentivize such practices and impose a hidden drag on the economy.  (One is reminded of Nobel Laureate Ronald Coase’s lament (see here) that “[i]f an economist finds something . . . that he does not understand, he looks for a monopoly explanation. And as in this field we are rather ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on monopoly explanations frequent.”)  Although the Report generally avoids taking a position on these novel theories, the lip service it gives implicitly encourages federal antitrust agency investigations designed to deploy these shiny new antitrust toys.  This in turn leads to a misallocation of resources (unequivocally harmful activity, especially hard core cartel conduct, merits the highest priority) and generates potentially high error and administrative costs, at odds with a sensible decision-theoretic approach to antitrust administration (see here and here).  
In sum, the Trump Administration should pay no attention to the Report’s commentary on new substantive antitrust theories.

2.  Vertical Contractual Restraints

The Report inappropriately (and, in my view, amazingly) suggests that antitrust enforcers should give serious attention to vertical contractual restraints:

Recognizing that the current state of RPM law in both minimum and maximum price contexts requires sophisticated balancing of pro- and anti-competitive tendencies, the dearth of guidance from the Agencies in the form of either guidelines or litigated cases leaves open important questions in an area of law that can have a direct and substantial impact on consumers. For example, it would be beneficial for the Agencies to provide guidance on how they think about balancing asserted quality and service benefits that can flow from maintaining minimum prices for certain types of products against the potential that RPM reduces competition to the detriment of consumers. Perhaps equally important, the Agencies should provide guidance on how they would analyze the vigor of interbrand competition in markets where some producers have restricted intrabrand competition among distributors of their products.    

The U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) largely have avoided bringing pure contractual vertical restraints cases in recent decades, and for good reason.  Although vertical restraints theoretically might be used to facilitate horizontal collusion (say, to enforce a distributors’ cartel) or anticompetitive exclusion (say, to enable a dominant manufacturer to deny rivals access to efficient distribution), such cases appear exceedingly rare.  Real-world empirical research suggests vertical restraints generally are procompetitive (see, for example, here).  What’s more, a robust theoretical literature supports efficiency-based explanations for vertical restraints (see, for example, here), as recognized by the U.S. Supreme Court in its 2007 Leegin decision.  An aggressive approach to vertical restraints enforcement would ignore this economic learning, likely yield high error costs, and dissuade businesses from considering efficient vertical contracts, to the detriment of social welfare.  Moreover, antitrust prosecutorial resources are limited, and optimal policy indicates they should be directed to the most serious competitive problems.  The Report’s references to “open important questions” and the need for “guidance” on vertical restraints appear oblivious to these realities.  Furthermore, the Report’s mention of “balancing” interbrand versus intrabrand effects reflects a legalistic approach to vertical contracts that is at odds with modern economic analysis.

In short, the Report’s discussion of vertical restraints should be accorded no weight by new enforcers, and antitrust prosecutors would be well advised not to include vertical restraints investigations on their list of priorities.

3.  IP Issues

The Report recommends that the DOJ and FTC (“Agencies”) devote substantial attention to issues related to the unilateral exercise of patent rights, “holdup” and “holdout”:

We . . . recommend that the Agencies gather reliable and credible information on—and propose a framework for evaluating—holdup and holdout, and the circumstances in which either may be anticompetitive. The Agencies are particularly well-suited to gather evidence and assess competitive implications of such practices, which could then inform policymaking, advocacy, and potential cases. The Agencies’ perspectives could contribute valuable insights to the larger antitrust community.

Gathering information with an eye to bringing potential antitrust cases involving the unilateral exercise of patent rights through straightforward patent licensing involves a misapplication of resources.  As Professor Josh Wright and Judge Douglas Ginsburg, among others, have pointed out, antitrust is not well-suited to dealing with disputes between patentees and licensees over licensing rates – private law remedies are best designed to handle such contractual controversies (see, for example, here).  Furthermore, using antitrust law to depress returns to unilateral patent licenses threatens to reduce dynamic efficiency and create disincentives for innovation (see FTC Commissioner (and currently Acting Chairman) Maureen Ohlhausen’s thoughtful article, here).  The Report regrettably ignores this important research.  The Report instead should have called upon the FTC and DOJ to drop their ill-conceived recent emphasis on unilateral patent exploitation, and to focus instead on problems of collusion among holders of competing patented technologies.

That is not all.  The Report’s “suggest[ion] that the [federal antitrust] Agencies consider offering guidance to the ITC [International Trade Commission] about potential SEP holdup and holdout” is a recipe for weakening legitimate U.S. patent rights that are threatened by foreign infringers.  American patentees already face challenges from over a decade’s worth of Supreme Court decisions that have constrained the value of their holdings.  As I have explained elsewhere, efforts to limit the ability of the ITC to issue exclusion orders in the face of infringement overseas further diminish the value of American patents and disincentivize innovation (see here).  What’s worse, the Report is not only oblivious to this reality, it goes out of its way to “put a heavy thumb on the scale” in favor of patent infringers, stating (footnote omitted):

If the ITC were to issue exclusion orders to SEP owners under circumstances in which injunctions would not be appropriate under the [Supreme Court’s] eBay standard [for patent litigation], the inconsistency could induce SEP owners to strategically use the ITC in an effort to achieve settlements of patent disputes on terms that might require payment of supracompetitive royalties.  Though it is not clear how likely this is or whether the risk has led to supracompetitive prices in the past, this dynamic could lead to holdup by SEP owners and unconscionably higher royalties.

This commentary on the possibility of “unconscionable” royalties reads like a press release authored by patent infringers.  In fact, there is a dearth of evidence of hold-up, let alone hold-up-related “unconscionable” royalties.  Moreover, it is most decidedly not the role of antitrust enforcers to rule on the “unconscionability” of the unilateral pricing decision of a patent holder (apparently the Report writers forgot to consult Justice Scalia’s Trinko opinion, which emphasizes the right of a monopolist to charge a monopoly price).  Furthermore, not only is this discussion wrong-headed, it flies in the face of concerns expressed elsewhere in the Report regarding ill-advised mandates imposed by foreign antitrust enforcement authorities.  (Recently certain foreign enforcers have shown themselves all too willing to countenance “excessive” patent royalty claims in cases involving American companies).

Finally, other IP-related references in the Report similarly show a lack of regulatory humility.  By invoking theoretical harms from the disaggregation of complementary patents and from “product hopping” (see above), among other novel practices, the Report implicitly encourages the FTC and DOJ (not to mention private parties) to consider bringing cases based on expansive theories of liability, without regard to the costs of the antitrust system as a whole (including the chilling of innovative business activity).  Such cases might benefit the antitrust bar, but prioritizing them would be at odds with the key policy objective of antitrust, the promotion of consumer welfare.

 

Over the weekend, Senator Al Franken and FCC Commissioner Mignon Clyburn issued an impassioned statement calling for the FCC to thwart the use of mandatory arbitration clauses in ISPs’ consumer service agreements — starting with a ban on mandatory arbitration of privacy claims in the Chairman’s proposed privacy rules. Unfortunately, their call to arms rests upon a number of inaccurate or weak claims. Before the Commissioners vote on the proposed privacy rules later this week, they should carefully consider whether consumers would actually be served by such a ban.

FCC regulations can’t override congressional policy favoring arbitration

To begin with, it is firmly cemented in Supreme Court precedent that the Federal Arbitration Act (FAA) “establishes ‘a liberal federal policy favoring arbitration agreements.’” As the Court recently held:

[The FAA] reflects the overarching principle that arbitration is a matter of contract…. [C]ourts must “rigorously enforce” arbitration agreements according to their terms…. That holds true for claims that allege a violation of a federal statute, unless the FAA’s mandate has been “overridden by a contrary congressional command.”

For better or for worse, that’s where the law stands, and it is the exclusive province of Congress — not the FCC — to change it. Yet nothing in the Communications Act (to say nothing of the privacy provisions in Section 222 of the Act) constitutes a “contrary congressional command.”

And perhaps that’s for good reason. In enacting the statute, Congress didn’t demonstrate the same pervasive hostility toward companies and their relationships with consumers that has characterized the way this FCC has chosen to enforce the Act. As Commissioner O’Rielly noted in dissenting from the privacy NPRM:

I was also alarmed to see the Commission acting on issues that should be completely outside the scope of this proceeding and its jurisdiction. For example, the Commission seeks comment on prohibiting carriers from including mandatory arbitration clauses in contracts with their customers. Here again, the Commission assumes that consumers don’t understand the choices they are making and is willing to impose needless costs on companies by mandating how they do business.

If the FCC were to adopt a provision prohibiting arbitration clauses in its privacy rules, it would conflict with the FAA — and the FAA would win. Along the way, however, it would create thorny uncertainty for both companies and consumers seeking to enforce their contracts.

The evidence suggests that arbitration is pro-consumer

But the lack of legal authority isn’t the only problem with the effort to shoehorn an anti-arbitration bias into the Commission’s privacy rules: It’s also bad policy.

In its initial broadband privacy NPRM, the Commission said this about mandatory arbitration:

In the 2015 Open Internet Order, we agreed with the observation that “mandatory arbitration, in particular, may more frequently benefit the party with more resources and more understanding of the dispute procedure, and therefore should not be adopted.” We further discussed how arbitration can create an asymmetrical relationship between large corporations that are repeat players in the arbitration system and individual customers who have fewer resources and less experience. Just as customers should not be forced to agree to binding arbitration and surrender their right to their day in court in order to obtain broadband Internet access service, they should not have to do so in order to protect their private information conveyed through that service.

The Commission may have “agreed” with the cited observations about arbitration, but that doesn’t make those views accurate. As one legal scholar has noted, summarizing the empirical data on the effects of arbitration:

[M]ost of the methodologically sound empirical research does not validate the criticisms of arbitration. To give just one example, [employment] arbitration generally produces higher win rates and higher awards for employees than litigation.

* * *

In sum, by most measures — raw win rates, comparative win rates, some comparative recoveries and some comparative recoveries relative to amounts claimed — arbitration generally produces better results for claimants [than does litigation].

A comprehensive, empirical study by Northwestern Law’s Searle Center on AAA (American Arbitration Association) cases found much the same thing, noting in particular that

  • Consumer claimants in arbitration incur average arbitration fees of only about $100 to arbitrate small (under $10,000) claims, and $200 for larger claims (up to $75,000).
  • Consumer claimants also win attorneys’ fees in over 60% of the cases in which they seek them.
  • On average, consumer arbitrations are resolved in under 7 months.
  • Consumers win some relief in more than 50% of cases they arbitrate…
  • And they do almost exactly as well in cases brought against “repeat-player” businesses.

In short, it’s extremely difficult to sustain arguments suggesting that arbitration is tilted against consumers relative to litigation.

(Upper) class actions: Benefitting attorneys — and very few others

But it isn’t just any litigation that Clyburn and Franken seek to preserve; rather, they are focused on class actions:

If you believe that you’ve been wronged, you could take your service provider to court. But you’d have to find a lawyer willing to take on a multi-national telecom provider over a few hundred bucks. And even if you won the case, you’d likely pay more in legal fees than you’d recover in the verdict.

The only feasible way for you as a customer to hold that corporation accountable would be to band together with other customers who had been similarly wronged, building a case substantial enough to be worth the cost—and to dissuade that big corporation from continuing to rip its customers off.

While — of course — litigation plays an important role in redressing consumer wrongs, class actions frequently don’t confer upon class members anything close to the imagined benefits that plaintiffs’ lawyers and their congressional enablers claim. According to a 2013 report on recent class actions by the law firm Mayer Brown LLP, for example:

  • “In [the] entire data set, not one of the class actions ended in a final judgment on the merits for the plaintiffs. And none of the class actions went to trial, either before a judge or a jury.” (Emphasis in original).
  • “The vast majority of cases produced no benefits to most members of the putative class.”
  • “For those cases that do settle, there is often little or no benefit for class members. What is more, few class members ever even see those paltry benefits — particularly in consumer class actions.”
  • “The bottom line: The hard evidence shows that class actions do not provide class members with anything close to the benefits claimed by their proponents, although they can (and do) enrich attorneys.”

Similarly, a CFPB study of consumer finance arbitration and litigation between 2008 and 2012 seems to indicate that the class action settlements and judgments it studied resulted in anemic relief to class members, at best. The CFPB tries to disguise the results with large, aggregated and heavily caveated numbers (never once actually indicating what the average payouts per person were) that seem impressive. But in the only hard numbers it provides (concerning four classes that ended up settling in 2013), promised relief amounted to under $23 each (comprising both cash and in-kind payment) if every class member claimed against the award. Back-of-the-envelope calculations based on the rest of the data in the report suggest that result was typical.

Furthermore, the average time to settlement of the cases the CFPB looked at was almost 2 years. And somewhere between 24% and 37% involved a non-class settlement — meaning class members received absolutely nothing at all because the named plaintiff personally took a settlement.

By contrast, according to the Searle Center study, the average award in the consumer-initiated arbitrations it studied (admittedly, involving cases with a broader range of claims) was almost $20,000, and the average time to resolution was less than 7 months.

To be sure, class action litigation has been an important part of our system of justice. But, as Arthur Miller — a legal pioneer who helped author the rules that make class actions viable — himself acknowledged, they are hardly a panacea:

I believe that in the 50 years we have had this rule, that there are certain class actions that never should have been brought, admitted; that we have burdened our judiciary, yes. But we’ve had a lot of good stuff done. We really have.

The good that has been done, according to Professor Miller, relates in large part to the civil rights violations of the 50’s and 60’s, which the class action rules were designed to mitigate:

Dozens and dozens and dozens of communities were desegregated because of the class action. You even see desegregation decisions in my old town of Boston where they desegregated the school system. That was because of a class action.

It’s hard to see how Franken and Clyburn’s concern for redress of “a mysterious 99-cent fee… appearing on your broadband bill” really comes anywhere close to the civil rights violations that spawned the class action rules. Particularly given the increasingly pervasive role of the FCC, FTC, and other consumer protection agencies in addressing and deterring consumer harms (to say nothing of arbitration itself), it is manifestly unclear why costly, protracted litigation that infrequently benefits anyone other than trial attorneys should be deemed so essential.

“Empowering the 21st century [trial attorney]”

Nevertheless, Commissioner Clyburn and Senator Franken echo the privacy NPRM’s faulty concerns about arbitration clauses that restrict consumers’ ability to litigate in court:

If you’re prohibited from using our legal system to get justice when you’re wronged, what’s to protect you from being wronged in the first place?

Well, what do they think the FCC is — chopped liver?

Hardly. In fact, it’s a little surprising to see Commissioner Clyburn (who sits on a Commission that proudly proclaims that “[p]rotecting consumers is part of [its] DNA”) and Senator Franken (among Congress’ most vocal proponents of the FCC’s claimed consumer protection mission) asserting that the only protection for consumers from ISPs’ supposed depredations is the cumbersome litigation process.

In fact, of course, the FCC has claimed for itself the mantle of consumer protector, aimed at “Empowering the 21st Century Consumer.” But nowhere does the agency identify “promoting and preserving the rights of consumers to litigate” among its tools of consumer empowerment (nor should it). There is more than a bit of irony in a federal regulator — a commissioner of an agency charged with making sure, among other things, that corporations comply with the law — claiming that, without class actions, consumers are powerless in the face of bad corporate conduct.

Moreover, even if it were true (it’s not) that arbitration clauses tend to restrict redress of consumer complaints, effective consumer protection would still not necessarily be furthered by banning such clauses in the Commission’s new privacy rules.

The FCC’s contemplated privacy regulations are poised to introduce a wholly new and untested regulatory regime with (at best) uncertain consequences for consumers. Given the risk of consumer harm resulting from the imposition of this new regime, as well as the corollary risk of its excessive enforcement by complainants seeking to test or push the boundaries of new rules, an agency truly concerned with consumer protection would tread carefully. Perhaps, if the rules were enacted without an arbitration ban, it would turn out that companies would mandate arbitration (though this result is by no means certain, of course). And perhaps arbitration and agency enforcement alone would turn out to be insufficient to effectively enforce the rules. But given the very real costs to consumers of excessive, frivolous or potentially abusive litigation, cabining the litigation risk somewhat — even if at first it meant the regime were tilted slightly too much against enforcement — would be the sensible, cautious and pro-consumer place to start.

____

Whether rooted in a desire to “protect” consumers or not, the FCC’s adoption of a rule prohibiting mandatory arbitration clauses to address privacy complaints in ISP consumer service agreements would impermissibly contravene the FAA. As the Court has made clear, such a provision would “‘stand[] as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress’ embodied in the Federal Arbitration Act.” And not only would such a rule tend to clog the courts in contravention of the FAA’s objectives, it would do so without apparent benefit to consumers. Even if such a rule wouldn’t effectively be invalidated by the FAA, the Commission should firmly reject it anyway: A rule that operates primarily to enrich class action attorneys at the expense of their clients has no place in an agency charged with protecting the public interest.

On October 6, 2016, the U.S. Federal Trade Commission (FTC) issued Patent Assertion Entity Activity: An FTC Study (PAE Study), its much-anticipated report on patent assertion entity (PAE) activity.  The PAE Study defined PAEs as follows:

Patent assertion entities (PAEs) are businesses that acquire patents from third parties and seek to generate revenue by asserting them against alleged infringers.  PAEs monetize their patents primarily through licensing negotiations with alleged infringers, infringement litigation, or both. In other words, PAEs do not rely on producing, manufacturing, or selling goods.  When negotiating, a PAE’s objective is to enter into a royalty-bearing or lump-sum license.  When litigating, to generate any revenue, a PAE must either settle with the defendant or ultimately prevail in litigation and obtain relief from the court.

The FTC was mindful of the costs that would be imposed on PAEs required by compulsory process to respond to the agency’s requests for information.  Accordingly, the FTC obtained information from only 22 PAEs, 18 of which it called “Litigation PAEs” (which “typically sued potential licensees and settled shortly afterward by entering into license agreements with defendants covering small portfolios,” usually yielding total royalties of under $300,000) and 4 of which it dubbed “Portfolio PAEs” (which typically negotiated multimillion-dollar licenses covering large portfolios of patents and raised their capital through institutional investors or manufacturing firms).

Furthermore, the FTC’s research was narrowly targeted, not broad-based.  The agency explained that “[o]f all the patents held by PAEs in the FTC’s study, 88% fell under the Computers & Communications or Other Electrical & Electronic technology categories, and more than 75% of the Study PAEs’ overall holdings were software-related patents.”  Consistent with the nature of this sample, the FTC concentrated primarily on a case study of PAE activity in the wireless chipset sector.  The case study revealed that PAEs were more likely to assert their patents through litigation than were wireless manufacturers, and that “30% of Portfolio PAE wireless patent licenses and nearly 90% of Litigation PAE wireless patent licenses resulted from litigation, while only 1% of Wireless Manufacturer wireless patent licenses resulted from litigation.”  But perhaps more striking than what the FTC found was what it did not uncover.  Due to data limitations, “[t]he FTC . . . [did not] attempt[] to determine if the royalties received by Study PAEs were higher or lower than those that the original assignees of the licensed patents could have earned.”  In addition, the case study did “not report how much revenue PAEs shared with others, including independent inventors, or the costs of assertion activity.”

Curiously, the PAE Study also leaped to certain conclusions regarding PAE settlements based on questionable assumptions and without considering legitimate potential incentives for such settlements.  Thus, for example, the FTC found it particularly significant that 77% of litigation PAE settlements were for less than $300,000.  Why?  Because $300,000 was a “de facto benchmark” for nuisance litigation settlements, based merely on one American Intellectual Property Law Association study that claimed defending a non-practicing entity patent lawsuit through the end of discovery costs between $300,000 and $2.5 million, depending on the amount in controversy.  In light of that one study, the FTC surmised “that discovery costs, and not the technological value of the patent, may set the benchmark for settlement value in Litigation PAE cases.”  Thus, according to the FTC, “the behavior of Litigation PAEs is consistent with nuisance litigation.”  As noted patent lawyer Gene Quinn has pointed out, however, the FTC ignored the eminently logical alternative possibility that many settlements for less than $300,000 merely represented reasonable valuations of the patent rights at issue.  Quinn pithily stated:

[T]he reality is the FTC doesn’t know enough about the industry to understand that $300,000 is an arbitrary line in the sand that holds no relevance in the real world. For the very same reason that they said the term “patent troll” is unhelpful (i.e., because it inappropriately discriminates against rights owners without understanding the business model and practices), so too is $300,000 equally unhelpful. Without any understanding or appreciation of the value of the core innovation subject to the license there is no way to know whether a license is being offered for nuisance value or whether it is being offered at full, fair and appropriate value to compensate the patent owner for the infringement they had to chase down in litigation.

I thought the FTC was charged with ensuring fair business practices? It seems what they are doing is radically discriminating against incremental innovations valued at less than $300,000 and actually encouraging patent owners to charge more for their licenses than they are worth so they don’t get labeled a nuisance. Talk about perverse incentives! The FTC should stick to areas where they have subject matter competence and leave these patent issues to the experts.     

In sum, the FTC found that in one particular specialized industry sector featuring a certain  category of patents (software patents), PAEs tended to sue more than manufacturers before agreeing to licensing terms – hardly a surprising finding or a sign of a problem.  (To the contrary, the existence of “substantial” PAE litigation that led to licenses might be a sign that PAEs were acting as efficient intermediaries representing the interests and effectively vindicating the rights of small patentees.)  The FTC was not, however, able to comment on the relative levels of royalties, the extent to which PAE revenues were distributed to inventors, or the costs of PAE litigation (as opposed to any other sort of litigation).  Additionally, the FTC made certain assumptions about certain PAE litigation settlements that ignored reasonable alternative explanations for the behavior that was observed.  Accordingly, the reasonable observer would conclude from this that the agency was (to say the least) in no position to make any sort of policy recommendations, given the absence of any hard evidence of PAE abuses or excessive waste from litigation.

Unfortunately, the reasonable observer would be mistaken.  The FTC recommended reforms to: (1) address discovery burden and “cost asymmetries” (the notion that PAEs are less subject to costly counterclaims because they are not producers) in PAE litigation; (2) provide the courts and defendants with more information about the plaintiffs that have filed infringement lawsuits; (3) streamline multiple cases brought against defendants on the same theories of infringement; and (4) provide sufficient notice of these infringement theories as courts continue to develop heightened pleading requirements for patent cases.

Without getting into the merits of these individual suggestions (and without in any way denigrating the hard work and dedication of the highly talented FTC staffers who drafted the PAE Study), it is sufficient to note that they bear no logical relationship to the factual findings of the report.  The recommendations, which closely echo certain elements of various “patent reform” legislative proposals that have been floated in recent years, could have been advanced before any data had been gathered – with savings to the companies that had to respond.  In short, the recommendations are classic pre-baked “solutions” to problems that have long been hypothesized.  Advancing such recommendations based on discrete information regarding a small, skewed sample of PAEs – without obtaining crucial information on the direct costs and benefits of the PAE transactions being observed, or the incentive effects of PAE activity – is at odds with the FTC’s proud tradition of empirical research.  Unfortunately, Devin Hartline of the Antonin Scalia Law School proved prescient when commenting last April on the possible problems with the PAE Report, based on what was known about it prior to its release (and based on the preliminary thoughts of noted economists and law professors):

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general.  The study is simply not designed to do this.  It instead is a fact-finding mission, the results of which could guide future missions.  Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected.  And it’s crucial not to draw policy conclusions from it.  Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.

To the extent patent reform is warranted, it should be considered carefully in a measured fashion, with full consideration given to the costs, benefits, and potential unintended consequences of suggested changes to the patent system and to litigation procedures.  As John Malcolm and I explained in a 2015 Heritage Foundation Legal Backgrounder which explored the relative merits of individual proposed reforms:

Before deciding to take action, Congress should weigh the particular merits of individual reform proposals carefully and meticulously, taking into account their possible harmful effects as well as their intended benefits. Precipitous, unreflective action on legislation is unwarranted, and caution should be the byword, especially since the effects of 2011 legislative changes and recent Supreme Court decisions have not yet been fully absorbed. Taking time is key to avoiding the serious and costly errors that too often are the fruit of omnibus legislative efforts.

Notably, this Legal Backgrounder also noted potential beneficial aspects of PAE activity that were not reflected in the PAE Study:

[E]ven entities whose business model relies on purchasing patents and licensing them or suing those who refuse to enter into licensing agreements and infringe those patents can serve a useful—even a vital—purpose. Some infringers may be large companies that infringe the patents of smaller companies or individual inventors, banking on the fact that such a small-time inventor will be less likely to file a lawsuit against a well-financed entity. Patent aggregators, often backed by well-heeled investors, help to level the playing field and can prevent such abuses.

More important, patent aggregators facilitate an efficient division of labor between inventors and those who wish to use those inventions for the betterment of their fellow man, allowing inventors to spend their time doing what they do best: inventing. Patent aggregators can expand access to patent pools that allow third parties to deal with one vendor instead of many, provide much-needed capital to inventors, and lead to a variety of licensing and sublicensing agreements that create and reflect a valuable and vibrant marketplace for patent holders and provide the kinds of incentives that spur innovation. They can also aggregate patents for litigation purposes, purchasing patents and licensing them in bundles.

This has at least two advantages: It can reduce the transaction costs for licensing multiple patents, and it can help to outsource and centralize patent litigation for multiple patent holders, thereby decreasing the costs associated with such litigation. In the copyright space, the American Society of Composers, Authors, and Publishers (ASCAP) plays a similar role.

All of this is to say that there can be good patent assertion entities that seek licensing agreements and file claims to enforce legitimate patents and bad patent assertion entities that purchase broad and vague patents and make absurd demands to extort license payments or settlements. The proper way to address patent trolls, therefore, is by using the same means and methods that would likely work against ambulance chasers or other bad actors who exist in other areas of the law, such as medical malpractice, securities fraud, and product liability—individuals who gin up or grossly exaggerate alleged injuries and then make unreasonable demands to extort settlements up to and including filing frivolous lawsuits.

In conclusion, the FTC would be well advised to avoid putting forth patent reform recommendations based on the findings of the PAE Study.  At the very least, it should explicitly weigh the implications of other research, which explores PAE-related efficiencies and considers all the ramifications of procedural and patent law changes, before seeking to advance any “PAE reform” recommendations.

On August 6, the Global Antitrust Institute (the GAI, a division of the Antonin Scalia Law School at George Mason University) submitted a filing (GAI filing or filing) in response to the Japan Fair Trade Commission’s (JFTC’s) consultation on reforms to the Japanese system of administrative surcharges assessed for competition law violations (see here for a link to the GAI’s filing).  The GAI’s outstanding filing was authored by GAI Director Koren Wong-Ervin and Professors Douglas Ginsburg, Joshua Wright, and Bruce Kobayashi of the Scalia Law School.

The GAI filing’s three sets of major recommendations are as follows:

(1)   Due Process

While the filing recognizes that process may vary depending on the jurisdiction, it strongly urges the JFTC to adopt the core features of a fair and transparent process, including:

(a) Legal representation for parties under investigation, allowing the participation of local and foreign counsel of the parties’ choosing;

(b) Notifying the parties of the legal and factual bases of an investigation and sharing the evidence on which the agency relies, including any exculpatory evidence and excluding only confidential business information;

(c) Direct and meaningful engagement between the parties and the agency’s investigative staff and decision-makers;

(d) Allowing the parties to present their defense to the ultimate decision-makers; and

(e) Ensuring checks and balances on agency decision-making, including meaningful access to independent courts.

(2)   Calculation of Surcharges

The filing agrees with the JFTC that Japan’s current inflexible system of surcharges is unlikely to accurately reflect the degree of economic harm caused by anticompetitive practices.  As a general matter, the filing recommends that under Japan’s new surcharge system, surcharges imposed should rely upon economic analysis, rather than using sales volume as a proxy, to determine the harm caused by violations of Japan’s Antimonopoly Act.   

In that light, the filing more specifically recommends that the JFTC limit punitive surcharges to matters in which:

(a) the antitrust violation is clear (i.e., considering the conduct at the time it was undertaken, and based on existing laws, rules, and regulations, a reasonable party should have expected the conduct at issue to be illegal) and is without any plausible efficiency justification;

(b) it is feasible to articulate and calculate the harm caused by the violation;

(c) the measure of harm calculated is the basis for any fines or penalties imposed; and

(d) there are no alternative remedies that would adequately deter future violations of the law.

In the alternative, and at the very least, the filing urges the JFTC to expand the circumstances under which it will not seek punitive surcharges to include two types of conduct that are widely recognized as having efficiency justifications:

  • unilateral conduct, such as refusals to deal and discriminatory dealing; and
  • vertical restraints, such as exclusive dealing, tying and bundling, and resale price maintenance.

(3)   Settlement Process

The filing recommends that the JFTC consider incorporating safeguards that prevent settlement provisions unrelated to the violation and limit the use of extended monitoring programs.  The filing notes that consent decrees and commitments extracted to settle a case too often end up imposing abusive remedies that undermine the welfare-enhancing goals of competition policy.  An agency’s ability to obtain in terrorem concessions reflects a party’s weighing of the costs and benefits of litigating versus the costs and benefits of acquiescing in the terms sought by the agency.  When firms settle merely to avoid the high relative costs of litigation and regulatory procedures, an agency may be able to extract more restrictive terms on firm behavior by entering into an agreement than by litigating its accusations in a court.  In addition, while settlements may be a more efficient use of scarce agency resources, the savings may come at the cost of potentially stunting the development of the common law arising through adjudication.

In sum, the latest filing maintains the GAI’s practice of employing law and economics analysis to recommend reforms in the imposition of competition law remedies (see here, here, and here for summaries of prior GAI filings that are in the same vein).  The GAI’s dispassionate analysis highlights principles of universal application – principles that may someday point the way toward greater economically sensible convergence among national antitrust remedial systems.

Background

In addition to reforming substantive antitrust doctrine, the Supreme Court in recent decades succeeded in curbing the unwarranted costs of antitrust litigation by erecting new procedural barriers to highly questionable antitrust suits.  It did this principally through three key “gatekeeper” decisions, Monsanto (1984), Matsushita (1986), and Twombly (2007).

Prior to those holdings, bare allegations in a complaint typically were sufficient to avoid dismissal.  Furthermore, summary judgment was very hard to obtain, given the Supreme Court’s pronouncement in Poller v. CBS (1962) that “summary procedures should be used sparingly in complex antitrust litigation.”  Thus, plaintiffs had a strong incentive to file dubious (if not meritless) antitrust suits, in the hope of coercing unwarranted settlements from defendants faced with the prospect of burdensome, extended antitrust litigation – litigation that could impose serious business reputational costs over time, in addition to direct and indirect litigation costs.

This all changed starting in 1984.  Monsanto required that a plaintiff show a “conscious commitment to a common scheme designed to achieve an unlawful objective” to support a Sherman Act Section 1 (Section 1) antitrust conspiracy allegation.  Building on Monsanto, Matsushita held that “conduct as consistent with permissible competition as with illegal conspiracy does not, standing alone, support an inference of antitrust conspiracy.”  In Twombly, the Supreme Court made it easier to succeed on a motion to dismiss a Section 1 complaint, holding that mere evidence of parallel conduct does not establish a conspiracy.  Rather, under Twombly, a plaintiff seeking relief under Section 1 must allege, at a minimum, the general contours of when an agreement was made and must support those allegations with a context that tends to make such an agreement plausible.  (The Twombly Court’s approval of motions to dismiss as a tool to rein in excessive antitrust litigation costs was implicit in its admonition not to “forget that proceeding to antitrust discovery can be expensive.”)

In sum, as Professor Herbert Hovenkamp has put it, “[t]he effects of Twombly and Matsushita has [sic] been a far-reaching shift in the way antitrust cases proceed, and today a likely majority are dismissed on the pleadings or summary judgment before going to trial.”

Visa v. Osborn

So far, so good.  Trial lawyers never rest, however, and old lessons sometimes need to be relearned, as demonstrated by the D.C. Circuit’s strange opinion in Visa v. Osborn (2015).

Visa v. Osborn is a putative class action filed against Visa, MasterCard, and three banks, resting on a bare-bones complaint alleging that similar automatic teller machine pricing rules imposed by Visa and MasterCard were part of a price-fixing conspiracy among the banks and the credit card companies.  As I explained in my recent Competition Policy International article discussing this case, plaintiffs neither alleged any facts indicating communications among defendants nor suggested anything to undermine the very real possibility that the credit card firms separately adopted the rules as being in their independent self-interest.  In short, nothing in the complaint indicates that allegations of an anticompetitive agreement are plausible, and, as such, Twombly dictates that the complaint must be dismissed.  Amazingly, however, a D.C. Circuit panel held that the mere allegation “that the member banks used the bankcard associations to adopt and enforce” the purportedly anticompetitive access fee rule was “enough to satisfy the plausibility standard” required to survive a motion to dismiss.

Fortunately, the D.C. Circuit’s Osborn holding (which, in addition to being ill-reasoned, is inconsistent with Third, Fourth, and Ninth Circuit precedents) attracted the eye of the Supreme Court, which granted certiorari on June 28.  Specifically, the Supreme Court agreed to resolve the question “[w]hether allegations that members of a business association agreed to adhere to the association’s rules and possess governance rights in the association, without more, are sufficient to plead the element of conspiracy in violation of Section 1 of the Sherman Act, . . . or are insufficient, as the Third, Fourth, and Ninth Circuits have held.”

Conclusion

As I concluded in my Competition Policy International article:

Business associations bestow economic benefits on society through association rules that enable efficient cooperative activities.  Subjecting association members to potential antitrust liability merely for signing on to such rules and participating in association governance would substantially chill participation in associations and undermine the development of new and efficient forms of collaboration among businesses.  Such a development would reduce economic dynamism and harm both producers and consumers.  By decisively overruling the D.C. Circuit’s flawed decision in Osborn, the Supreme Court would preclude a harmful form of antitrust risk and establish an environment in which fruitful business association decision-making is granted greater freedom, to the benefit of the business community, consumers, and the overall economy.  

In addition, and more generally, the Court may wish to remind litigants that the antitrust litigation gatekeeper function laid out in Monsanto, Matsushita, and Twombly remains as strong and as vital as ever.  In so doing, the Court would reaffirm that motions to dismiss and summary judgment motions remain critically important tools needed to curb socially costly abusive antitrust litigation.

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks. Drugs subject to a REMS restricted distribution program are therefore difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. The imprecision of these regulatory requirements creates an opportunity for branded drug manufacturers to inappropriately limit generic manufacturers’ access.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, however, it’s a problem of regulatory failure. Companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. It’s no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel but efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter into pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. In certain cases, of course, a brand manufacturer may be justified in refusing to distribute samples of its product; some would-be generic manufacturers may not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is difficult to argue that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend a duty to deal to situations where an existing, voluntary economic relationship wasn’t terminated. By definition this is unlikely to be the case here where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty to deal cases to those rare circumstances where it reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation, that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows; if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it has implemented that test in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullins’ SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product-design suit against Amazon, the court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

Yesterday a federal district court in Washington state granted the FTC’s motion for summary judgment against Amazon in FTC v. Amazon — the case alleging unfair trade practices in Amazon’s design of the in-app purchases interface for apps available in its mobile app store. The headlines score the decision as a loss for Amazon, and the FTC, of course, claims victory. But the court also granted Amazon’s motion for partial summary judgment on a significant aspect of the case, and the Commission’s win may be decidedly pyrrhic.

While the district court (very wrongly, in my view) essentially followed the FTC in deciding that a well-designed user experience doesn’t count as a consumer benefit for assessing substantial harm under the FTC Act, it rejected the Commission’s request for a permanent injunction against Amazon. It also called into question the FTC’s calculation of monetary damages. These last two may be huge. 

The FTC may have “won” the case, but it’s becoming increasingly apparent why it doesn’t want to take these cases to trial. First in Wyndham, and now in Amazon, courts have begun to chip away at the FTC’s expansive Section 5 discretion, even while handing the agency nominal victories.

The Good News

The FTC largely escapes judicial oversight in cases like these because its targets almost always settle (Amazon is a rare exception). These settlements — consent orders — typically impose detailed 20-year injunctions and give the FTC ongoing oversight of the companies’ conduct for the same period. The agency has wielded the threat of these consent orders as a powerful tool to micromanage tech companies, and it currently has at least one consent order in place with Twitter, Google, Apple, Facebook and several others.

As I wrote in a WSJ op-ed on these troubling consent orders:

The FTC prefers consent orders because they extend the commission’s authority with little judicial oversight, but they are too blunt an instrument for regulating a technology company. For the next 20 years, if the FTC decides that Google’s product design or billing practices don’t provide “express, informed consent,” the FTC could declare Google in violation of the new consent decree. The FTC could then impose huge penalties—tens or even hundreds of millions of dollars—without establishing that any consumer had actually been harmed.

Yesterday’s decision makes that outcome less likely. Companies will be much less willing to succumb to the FTC’s 20-year oversight demands if they know that courts may refuse the FTC’s injunction request and accept companies’ own, independent and market-driven efforts to address consumer concerns — without any special regulatory micromanagement.

In the same vein, while the court did find that Amazon was liable for repayment of unauthorized charges made without “express, informed authorization,” it also found the FTC’s monetary damages calculation questionable and asked for further briefing on the appropriate amount. If, as seems likely, it ultimately refuses to simply accept the FTC’s damages claims, that, too, will take some of the wind out of the FTC’s sails. Other companies have settled with the FTC and agreed to 20-year consent decrees in part, presumably, because of the threat of excessive damages if they litigate. That, too, is now less likely to happen.

Collectively, these holdings should help to force the FTC to better target its complaints to cases of still-ongoing and truly harmful practices — the things the FTC Act was really meant to address, like actual fraud. Tech companies trying to navigate ever-changing competitive waters by carefully constructing their user interfaces and payment mechanisms (among other things) shouldn’t be treated the same way as fraudulent phishing scams.

The Bad News

The court’s other key holding is problematic, however. In essence, the court, like the FTC, seems to believe that regulators are better than companies’ product managers, designers and engineers at designing app-store user interfaces:

[A] clear and conspicuous disclaimer regarding in-app purchases and request for authorization on the front-end of a customer’s process could actually prove to… be more seamless than the somewhat unpredictable password prompt formulas rolled out by Amazon.

Never mind that Amazon has undoubtedly spent tremendous resources researching and designing the user experience in its app store. And never mind that — as Amazon is certainly aware — a consumer’s experience of a product is make-or-break in the cut-throat world of online commerce, advertising and search (just ask Jet).

Instead, for the court (and the FTC), the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible.

As I’ve written previously:

Amazon has built its entire business around the “1-click” concept — which consumers love — and implemented a host of notification and security processes hewing as much as possible to that design choice, but nevertheless taking account of the sorts of issues raised by in-app purchases. Moreover — and perhaps most significantly — it has implemented an innovative and comprehensive parental control regime (including the ability to turn off all in-app purchases) — Kindle Free Time — that arguably goes well beyond anything the FTC required in its Apple consent order.

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges.

Amazon began offering Kindle Free Time in 2012 as an innovative solution to a problem — children’s access to apps and in-app purchases — that affects only a small subset of Amazon’s customers. To dismiss that effort without considering that Amazon might have made a perfectly reasonable judgment that balanced consumer protection and product design disregards the cost-benefit balancing required by Section 5 of the FTC Act.

Moreover, the FTC Act imposes liability only for harms that are not “reasonably avoidable.” Kindle Free Time is an outstanding example of an innovative mechanism that allows consumers at risk of unauthorized purchases by children to “reasonably avoid” harm. The court’s and the FTC’s disregard for it is inconsistent with the statute.

Conclusion

The court’s willingness to reinforce the FTC’s blackboard design “expertise” (such as it is) to second guess user-interface and other design decisions made by firms competing in real markets is unfortunate. But there’s a significant silver lining. By reining in the FTC’s discretion to go after these companies as if they were common fraudsters, the court has given consumers an important victory. After all, it is consumers who otherwise bear the costs (both directly and as a result of reduced risk-taking and innovation) of the FTC’s largely unchecked ability to extract excessive concessions from its enforcement targets.


I have small children and, like any reasonably competent parent, I take an interest in monitoring their Internet usage. In particular, I am sensitive to what ad content they are being served and which sites they visit that might try to misuse their information. My son even uses Chromebooks at his elementary school, which underscores this concern for me, as I can’t always be present to watch what he does online. However, also like any other reasonably competent parent, I trust his school and his teacher to make good choices about what he is allowed to do online when I am not there to watch him. And so it is that I am both interested in and rather perplexed by what has EFF so worked up in its FTC complaint alleging privacy “violations” in the “Google for Education” program.

EFF alleges three “unfair or deceptive” acts that would subject Google to remedies under Section 5 of the FTCA: (1) Students logged into “Google for Education” accounts have their non-educational behavior individually tracked (e.g. performing general web searches, browsing YouTube, etc.); (2) the Chromebooks distributed as part of the “Google for Education” program have the “Chrome Sync” feature turned on by default (ostensibly in a terribly diabolical effort to give students a seamless experience between using the Chromebooks at home and at school); and (3) the school administrators running particular instances of “Google for Education” have the ability to share student geolocation information with third-party websites. Each of these acts, claims EFF, violates the K-12 School Service Provider Pledge to Safeguard Student Privacy (“Pledge”) that was authored by the Future of Privacy Forum and Software & Information Industry Association, and to which Google is a signatory. According to EFF, Google included references to its signature in its “Google for Education” marketing materials, thereby creating the expectation in parents that it would adhere to the principles, failed to do so, and thus should be punished.

The TL;DR version: EFF appears to be making some simple interpretational errors — it believes that the scope of the Pledge covers any student activity and data generated while a student is logged into a Google account. As the rest of this post will (hopefully) make clear, however, the Pledge, though ambiguous, is more reasonably read as limiting Google’s obligations to instances where a student is using  Google for Education apps, and does not apply to instances where the student is using non-Education apps — whether she is logged on using her Education account or not.

The key problem, as EFF sees it, is that Google “use[d] and share[d] … student personal information beyond what is needed for education.” So nice of them to settle complex business and educational decisions for the world! Who knew it was so easy to determine exactly what is needed for educational purposes!

Case in point: EFF feels that Google’s use of anonymous and aggregated student data in order to improve its education apps is not an educational purpose. Seriously? How can that not be useful for educational purposes — to improve its educational apps!?

And, according to EFF, the fact that Chrome Sync is ‘on’ by default in the Chromebooks only amplifies the harm caused by the non-Education data tracking because, when the students log in outside of school, their behavior can be correlated with their in-school behavior. Of course, this ignores the fact that the same limitations apply to the tracking — it happens only on non-Education apps. Thus, the Chrome Sync objection is somehow vaguely based on geography. The fact that Google can correlate an individual student’s viewing of a Neil DeGrasse Tyson video in a computer lab at school with her later finishing that video at home is somehow really bad (or so EFF claims).

EFF also takes issue with the fact that school administrators are allowed to turn on a setting enabling third parties to access the geolocation data of Google education apps users.

The complaint is fairly sparse on this issue — and the claim is essentially limited to the assertion that “[s]haring a student’s physical location with third parties is unquestionably sharing personal information beyond what is needed for educational purposes[.]”  While it’s possible that third parties could misuse student data, a presumption that it is per se outside of any educational use for third parties to have geolocation access at all strikes me as unreasonable.

Geolocation data, particularly on mobile devices, could allow for any number of positive and negative uses, and without more it’s hard to take EFF’s premature concern all that seriously. Did they conduct a study demonstrating that geolocation data can serve no educational purpose or that the feature is frequently abused? Sadly, it seems doubtful. Instead, they appear to be relying upon the rather loose definition of likely harm that we have seen in FTC actions in other contexts (more on this problem here).

Who decides what ambiguous terms mean?

The bigger issue, however, is the ambiguity latent in the Pledge and how that ambiguity is being exploited to criticize Google. The complaint barely conceals EFF’s eagerness, and gives one the distinct feeling that the Pledge and this complaint are part of a long game. Everyone knows that Google’s entire existence revolves around the clever and innovative employment of large data sets. When Google announced that it was interested in working with schools to provide technology to students, I can only imagine how the anti-big-data-for-any-commercial-purpose crowd sat up and took notice, just waiting to pounce as soon as an opportunity, no matter how tenuous, presented itself.

EFF notes that “[u]nlike Microsoft and numerous other developers of digital curriculum and classroom management software, Google did not initially sign onto the Student Privacy Pledge with the first round of signatories when it was announced in the fall of 2014.” Apparently, it is an indictment of Google that it hesitated to adopt an external statement of privacy principles that was authored by a group that had no involvement with Google’s internal operations or business realities. EFF goes on to note that it was only after “sustained criticism” that Google “reluctantly” signed the pledge. So the company is badgered into signing a pledge that it was reluctant to sign in the first place (almost certainly for exactly these sorts of reasons), and is now being skewered by the proponents of the pledge that it was reluctant to sign. Somehow I can’t help but get the sense that this FTC complaint was drafted even before Google signed the Pledge.

According to the Pledge, Google promised to:

  1. “Not collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student.”
  2. “Not build a personal profile of a student other than for supporting authorized educational/school purposes or as authorized by the parent/student.”
  3. “Not knowingly retain student personal information beyond the time period required to support the authorized educational/school purposes, or as authorized by the parent/student.”

EFF interprets “educational purpose” as anything a student does while logged into her education account and, by extension, treats even non-educational activity as “student personal information.” I think that a fair reading of the Pledge undermines this position, however, and that the correct interpretation of the Pledge is that “educational purpose” and “student personal information” are more tightly coupled, such that Google’s ability to collect student data is circumscribed only when the student is actually using the Google for Education apps.

So what counts as “student personal information” in the pledge? “Student personal information” is “personally identifiable information as well as other information when it is both collected and maintained on an individual level and is linked to personally identifiable information.”  Although this is fairly broad, it is limited by the definition of “Educational/School purposes” which are “services or functions that customarily take place at the direction of the educational institution/agency or their teacher/employee, for which the institutions or agency would otherwise use its own employees, and that aid in the administration or improvement of educational and school activities.” (emphasis added).

This limitation in the Pledge essentially sinks EFF’s complaint. A major part of EFF’s gripe is that when students interact with non-Education services, Google tracks them. However, the Pledge limits the collection of information only in contexts where “the institutions or agency would otherwise use its own employees” — a definition that clearly does not extend to general Internet usage. This definition would reasonably cover activities like administering classes, tests, and lessons; it would not cover activity such as general searches, watching videos on YouTube, and the like. Key to EFF’s error is that the Pledge operates not on accounts but on activity — in particular, educational activity “for which the institutions or agency would otherwise use its own employees.”

To interpret Google’s activity in the way that EFF does is to treat the Pledge as a promise never to do anything, ever, with the data of a student logged into an education account, whether generated as part of Education apps or otherwise. That just can’t be right. Thinking through the implications of EFF’s complaint, the ultimate end has to be that Google needs to obtain a permission slip from parents before offering access to Google for Education accounts. Administrators and Google are just not allowed to provision any services otherwise.

And here is where the long game comes in. EFF and its peers induced Google to sign the Pledge all the while understanding that their interpretation would necessarily require a re-write of Google’s business model.  But not only is this sneaky, it’s also ridiculous. By way of analogy, this would be similar to allowing parents an individual say over what textbooks or other curricular materials their children are allowed to access. This would either allow for a total veto by a single parent, or else would require certain students to be frozen out of participating in homework and other activities being performed with a Google for Education app. That may work for Yale students hiding from microaggressions, but it makes no sense to read such a contentious and questionable educational model into Google’s widely-offered apps.

I think a more reasonable interpretation should prevail. The privacy pledge is meant to govern the use of student data while that student is acting as a student — which in the case of Google for Education apps would mean while using said apps. Plenty of other Google apps could be used for educational purposes, but Google is intentionally delineating a sensible dividing line in order to avoid exactly this sort of problem (as well as problems that could arise under other laws directed at student activity, like COPPA, most notably). It is entirely unreasonable to presume that Google, by virtue of its socially desirable behavior of enabling students to have ready access to technology, is thereby prevented from tracking individuals’ behavior on non-Education apps as it chooses to define them.

What is the Harm?

According to EFF, there are two primary problems with Google’s gathering and use of student data: gathering and using individual data in non-Education apps, and gathering and using anonymized and aggregated data in the Education apps. So what is the evil end to which Google uses this non-Education gathered data?

“Google not only collects and stores the vast array of student data described above, but uses it for its own purposes such as improving Google products and serving targeted advertising (within non-Education Google services)”

The horrors! Google wants to use student behavior to improve its services! And yes, I get it, everyone hates ads — I hate ads too — but at some point you need to learn to accept that the wealth of nominally free apps available to every user is underwritten by the ad-sphere. So if Google is using the non-Education behavior of students to gain valuable insights that it can monetize and thereby subsidize its services, so what? This is life in the twenty-first century, and until everyone collectively decides that we prefer to pay for services up front, we had better get used to being tracked and monetized by advertisers.

But as noted above, whether you think Google should or shouldn’t be gathering this data, it seems clear that the data generated from use of non-Education apps doesn’t fall under the Pledge’s purview. Thus, perhaps sensing the problems in its non-Education use argument, EFF also half-heartedly attempts to demonize certain data practices that Google employs in the Education context. In short, Google aggregates and anonymizes the usage data of the Google for Education apps, and, according to EFF, this is a violation of the Pledge:

“Aggregating and anonymizing students’ browsing history does not change the intensely private nature of the data … such that Google should be free to use it[.]”

Again the “harm” is that Google actually wants to improve the Educational apps:  “Google has acknowledged that it collects, maintains, and uses student information via Chrome Sync (in aggregated and anonymized form) for the purpose of improving Google products”

This of course doesn’t violate the Pledge. After all, signatories to the Pledge promise only that they will “[n]ot collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes.” It’s eminently reasonable to include the improvement of the provisioned services as part of an “authorized educational … purpose[.]” And by ensuring that the data is anonymized and aggregated, Google is clearly acknowledging that some limits are appropriate in the education context — that it doesn’t need to collect individual and identifiable personal information for education purposes — but that improving its education products the same way it improves all its products is an educational purpose.

How are the harms enhanced by Chrome Sync? Honestly, it’s not really clear from EFF’s complaint. I believe that the core of EFF’s gripe (at least here) has to do with how the two data-gathering activities may be correlated. Google has Chrome Sync enabled by default, so when students sign on at different locations, their Education apps usage is recorded and grouped (still anonymously) for service improvement alongside non-Education use. And the presence of these two data sets being generated side by side creates the potential to track students in their educational capacity by correlating with information generated in their non-educational capacity.

Maybe there are potential flaws in the manner in which the data is anonymized. Obviously EFF thinks anonymized data won’t stay anonymized. That is a contentious view, to say the least, but regardless, it is in no way compelled by the Pledge. But more to the point, merely having both data sets does not do anything that clearly violates the Pledge.

The End Game

So what do groups like EFF actually want? It’s important to consider the effects on social welfare that this approach to privacy takes, and its context. First, the Pledge was overwhelmingly designed for and signed by pure education companies, and not large organizations like Google, Apple, or Microsoft — thus the nature of the Pledge itself is more or less ill-fitted to a multi-faceted business model. If we follow the logical conclusions of this complaint, a company like Google would face an undesirable choice: On the one hand, it can provide hardware to schools at zero cost or heavily subsidized prices, and also provide a suite of useful educational applications. However, as part of this socially desirable donation, it must also place a virtual invisibility shield around students once they’ve signed into their accounts. From that point on, regardless of what service they use — even non-educational ones — Google is prevented from using any data students generate. At this point, one has to question Google’s incentive to remove huge swaths of the population from its ability to gather data. If Google did nothing but provide the hardware, it could simply leave its free services online as-is, and let schools adopt or not adopt them as they wish (subject of course to extant legislation such as COPPA) — thereby allowing itself to possibly collect even more data on the same students.

On the other hand, if not Google, then surely many other companies would think twice before wading into this quagmire, or, when they do, they might offer severely limited services. For instance, one way of complying with EFF’s view of how the Pledge works would be to shut off access to all non-Education services. So, students logged into an education account could only access the word processing and email services, but would be prevented from accessing YouTube, web search and other services — and consequently suffer from a limitation of potentially novel educational options.

EFF goes on to cite numerous FTC enforcement actions and settlements from recent years. But all of the cited examples have one thing in common that the current complaint does not: each involved a violation of § 5 based on explicit statements or representations made by a company to consumers. EFF’s complaint, on the other hand, rests on a particular interpretation of an ambiguous document that was drafted in general terms, without reference to the complicated business practice at issue. What counts as “student information” when a user employs a general-purpose machine for both educational and non-educational purposes?  The Pledge — at least the sections that EFF relies upon in its complaint — is far from clear and doesn’t cover Google’s behavior in an obvious manner.

Of course, the whole complaint presumes that the nature of Google’s services was somehow unfair or deceptive to parents — thus implying that there was at least some material reliance on the Pledge in parental decision making. However, this misses a crucial detail: it is the school administrators who contract with Google for the Chromebooks and Google for Education services, and not the parents or the students.  Then again, maybe EFF doesn’t care and it is, as I suggest above, just interested in a long game whereby it can shoehorn Google’s services into some new sort of privacy regime. This isn’t all that unusual, as we have seen even the White House in other contexts willing to rewrite business practices wholly apart from the realities of privacy “harms.”

But in the end, this approach to privacy is just a very efficient way to discover the lowest common denominator in charity. If it even decides to brave the possible privacy suits, Google and other similarly situated companies will provide the barest access to the most limited services in order to avoid extensive liability from ambiguous pledges. And, perhaps even worse for overall social welfare, using the law to force compliance with voluntarily enacted, ambiguous codes of conduct is a sure-fire way to make sure that there are fewer and more limited codes of conduct in the future.

On October 7, 2015, the Senate Judiciary Committee held a hearing on the “Standard Merger and Acquisition Reviews Through Equal Rules” (SMARTER) Act of 2015.  As former Antitrust Modernization Commission Chair (and former Acting Assistant Attorney General for Antitrust) Deborah Garza explained in her testimony, “[t]he premise of the SMARTER Act is simple:  A merger should not be treated differently depending on which antitrust enforcement agency – DOJ or the FTC – happens to review it.  Regulatory outcomes should not be determined by a flip of the merger agency coin.”

Ms. Garza is clearly correct.  Both the U.S. Justice Department (DOJ) and the U.S. Federal Trade Commission (FTC) enforce the federal antitrust merger review provision, Section 7 of the Clayton Act, and employ a common set of substantive guidelines (last revised in 2010) to evaluate merger proposals.  Neutral “rule of law” principles indicate that private parties should expect to have their proposed mergers subject to the same methods of assessment and an identical standard of judicial review, regardless of which agency reviews a particular transaction.  (The two agencies decide by mutual agreement which agency will review any given merger proposal.)

Unfortunately, however, that is not the case today.  The FTC’s independent ability to challenge mergers administratively, combined with the difference in statutory injunctive standards that apply to FTC and DOJ merger reviews, means that a particular merger application may face more formidable hurdles if reviewed by the FTC, rather than DOJ.  These two differences commendably would be eliminated by the SMARTER Act, which would subject the FTC to current DOJ standards.  The SMARTER Act would not deal with a third difference – the fact that DOJ merger consent decrees, but not FTC merger consent decrees, must be filed with a federal court for “public interest” review.  This commentary briefly addresses those three issues.  The first and second ones present significant “rule of law” problems, in that they involve differences in statutory language applied to the same conduct.  The third issue, the question of judicial review of settlements, is of a different nature, but nevertheless raises substantial policy concerns.

  1. FTC Administrative Authority

The first rule of law problem stems from the broader statutory authority the FTC possesses to challenge mergers.  In merger cases, while DOJ typically consolidates actions for a preliminary and permanent injunction in district court, the FTC merely seeks a preliminary injunction (which is easier to obtain than a permanent injunction) and “holds in its back pocket” the ability to challenge a merger in an FTC administrative proceeding – a power DOJ does not possess.  In short, the FTC subjects proposed mergers to a different and more onerous method of assessment than DOJ.  In Ms. Garza’s words (footnotes deleted):

“Despite the FTC’s legal ability to seek permanent relief from the district court, it prefers to seek a preliminary injunction only, to preserve the status quo while it proceeds with its administrative litigation.

This approach has great strategic significance. First, the standard for obtaining a preliminary injunction in government merger challenges is lower than the standard for obtaining a permanent injunction. That is, it is easier to get a preliminary injunction.

Second, as a practical matter, the grant of a preliminary injunction is typically sufficient to end the matter. In nearly every case, the parties will abandon their transaction rather than incur the heavy cost and uncertainty of trying to hold the merger together through further proceedings—which is why merging parties typically seek to consolidate proceedings for preliminary and permanent relief under Rule 65(a)(2). Time is of the essence. As one witness testified before the [Antitrust Modernization Commission], “it is a rare seller whose business can withstand the destabilizing effect of a year or more of uncertainty” after the issuance of a preliminary injunction.

Third, even if the court denies the FTC its preliminary injunction and the parties close their merger, the FTC can still continue to pursue an administrative challenge with an eye to undoing or restructuring the transaction. This is the “heads I win, tails you lose” aspect of the situation today. It is very difficult for the parties to get to the point of a full hearing in court given the effect of time on transactions, even with the FTC’s expedited administrative procedures adopted in about 2008. . . . 

[Moreover,] [while] [u]nder its new procedures, parties can move to dismiss an administrative proceeding if the FTC has lost a motion for preliminary injunction and the FTC will consider whether to proceed on a case-by-case basis[,] . . . th[is] [FTC] policy could just as easily change again, unless Congress speaks.”

Time typically is of the essence in proposed mergers, so substantial delays occasioned by extended reviews of those transactions may prevent many transactions from being consummated, even if they eventually would have passed antitrust muster.  Ms. Garza’s testimony, together with testimony by former Deputy Assistant Attorney General for Antitrust Abbott (Tad) Lipsky, documents cases of substantial delay in FTC administrative reviews of merger proposals.  (As Mr. Lipsky explained, “[a]ntitrust practitioners have long perceived that the possibility of continued administrative litigation by the FTC following a court decision constitutes a significant disincentive for parties to invest resources in transaction planning and execution.”)  Congress should weigh these delay-specific costs, as well as the direct costs of any additional burdens occasioned by FTC administrative procedures, in deciding whether to require the FTC (like DOJ) to rely solely on federal court proceedings.

  2. Differences Between FTC and DOJ Injunctive Standards

The second rule of law problem arises from the lighter burden the FTC must satisfy to obtain injunctive relief in federal court.  Under Section 13(b) of the FTC Act, an injunction shall be granted the FTC “[u]pon a proper showing that, weighing the equities and considering the Commission’s likelihood of success, such action would be in the public interest.”  The D.C. Circuit (in FTC v. H.J. Heinz Co. and in FTC v. Whole Foods Market, Inc.) has stated that, to meet this burden, the FTC need merely have raised questions “so serious, substantial, difficult and doubtful as to make them fair ground for further investigation.”  By contrast, as Ms. Garza’s testimony points out, “under Section 15 of the Clayton Act, courts generally apply a traditional equities test requiring DOJ to show a reasonable likelihood of success on the merits—not merely that there is ‘fair ground for further investigation.’”  In a similar vein, Mr. Lipsky’s testimony stated that “[t]he cumulative effect of several recent contested merger decisions has been to allow the FTC to argue that it needn’t show likelihood of success in order to win a preliminary injunction; specifically these decisions suggest that the Commission need only show ‘serious, substantial, difficult and doubtful’ questions regarding the merits.”  Although some commentators have contended that, in reality, the two standards generally will be interpreted in a similar fashion (“whatever theoretical difference might exist between the FTC and DOJ standards has no practical significance”), there is no doubt that the language of the two standards is different – and basic principles of statutory construction indicate that differences in statutory language should be given meaning and not ignored.  Accordingly, merging parties face the real prospect that they might fare worse under federal court review of an FTC challenge to their merger proposal than they would have fared had DOJ challenged the same transaction.  
Such an outcome, even if it is rare, would be at odds with neutral application of the rule of law.

  3. The Tunney Act

Finally, helpful as it is, the SMARTER Act does not entirely eliminate the disparate treatment of proposed mergers by DOJ and the FTC.  The Tunney Act, 15 U.S.C. § 16, enacted in 1974, applies to DOJ but not to the FTC.  It requires that DOJ submit all proposed consent judgments under the antitrust laws (including those resolving challenges under Section 7 of the Clayton Act) to a federal district court for a 60-day public comment period before they may be entered.

a.  Economic Costs (and Potential Benefits) of the Tunney Act

The Tunney Act potentially injects uncertainty into the nature of the “deal” struck between merging parties and DOJ in merger cases.  It does this by subjecting proposed DOJ merger settlements (and other DOJ non-merger civil antitrust settlements) to a 60-day public review period, requiring federal judges to determine whether a proposed settlement is “in the public interest” before entering it, and instructing the court to consider the impact of the entry of judgment “upon competition and upon the public generally.”  Leading antitrust practitioners have noted that this uncertainty “could affect shareholders, customers, or even employees. Moreover, the merged company must devote some measure of resources to dealing with the Tunney Act review—resources that instead could be devoted to further integration of the two companies or generation of any planned efficiencies or synergies.”  More specifically:

“[W]hile Tunney Act proceedings are pending, a merged company may have to consider how its post-close actions and integration could be perceived by the court, and may feel the need to compete somewhat less aggressively, lest its more muscular competitive actions be taken by the court, amici, or the public at large to be the actions of a merged company exercising enhanced market power. Such a distortion in conduct probably was not contemplated by the Tunney Act’s drafters, but merger partners will need to be cognizant of how their post-close actions may be perceived during Tunney Act review.”

Although the Tunney Act has been justified on traditional “public interest” grounds, even a scholarly supporter (a DOJ antitrust attorney), in praising its purported benefits, has acknowledged its potential for abuse:

“Properly interpreted and applied, the Tunney Act serves a number of related, useful functions. The disclosure provisions and judicial approval requirement for decrees can help identify, and more importantly deter, “influence peddling” and other abuses. The notice-and-comment procedures force the DOJ to explain its rationale for the settlement and provide its answers to objections, thus providing transparency. They also provide a mechanism for third-party input, and, thus, a way to identify and correct potentially unnoticed problems in a decree. Finally, the court’s public interest review not only helps ensure that the decree benefits the public, it also allows the court to protect itself against ambiguous provisions and enforcement problems and against an objectionable or pointless employment of judicial power. Improperly applied, the Tunney Act does more harm than good. When a district court takes it upon itself to investigate allegations not contained in a complaint, or attempts to “re-settle” a case to provide what it views as stronger, better relief, or permits lengthy, unfocused proceedings, the Act is turned from a useful check to an unpredictable, costly burden.”

The justifications presented by the author are open to serious question.  Whether “influence peddling” can be detected merely from the filing of proposed decree terms is doubtful – corrupt deals to settle a matter presumably would be done “behind the scenes” in a manner not available to public scrutiny.  The economic expertise and detailed factual knowledge that informs a DOJ merger settlement cannot be fully absorbed by a judge (who may fall prey to his or her personal predilections as to what constitutes good policy) during a brief review period.  “Transparency” that facilitates “third-party input” can too easily be manipulated by rent-seeking competitors who will “trump up” justifications for blocking an efficient merger.  Moreover, third parties who are opposed to mergers in general may also be expected to file objections to efficient arrangements.  In short, the “sunshine” justification for Tunney Act filings is more likely to cloud the evaluation of DOJ policy calls than to provide clarity.

b.  Constitutional Issues Raised by the Tunney Act

In addition to potential economic inefficiencies, the judicial review feature of the Tunney Act raises serious separation of powers issues, as emphasized by the DOJ Office of Legal Counsel (OLC, which advises the Attorney General and the President on questions of constitutional interpretation) in a 1989 opinion regarding qui tam provisions of the False Claims Act:

“There are very serious doubts as to the constitutionality . . . of the Tunney Act:  it intrudes into the Executive power and requires the courts to decide upon the public interest – that is, to exercise a policy discretion normally reserved to the political branches.  Three Justices of the Supreme Court questioned the constitutionality of the Tunney Act in Maryland v. United States, 460 U.S. 1001 (1983) (Rehnquist, J., joined by Burger, C.J., and White, J., dissenting).”

Notably, this DOJ critique of the Tunney Act was written before the 2004 amendments to that statute, which specifically empower courts to consider the impact of proposed settlements “upon competition and upon the public generally” – language that significantly trenches upon Executive Branch prerogatives.  Admittedly, the Tunney Act has withstood judicial scrutiny – no court has ruled it unconstitutional.  Moreover, a federal judge may only accept or reject a Tunney Act settlement, not rewrite it, which somewhat ameliorates its affront to the separation of powers.  In short, even though it may not be subject to serious constitutional challenge in the courts, the Tunney Act is problematic as a matter of sound constitutional policy.

c.  Congressional Reexamination of the Tunney Act

These economic and constitutional policy concerns suggest that Congress may wish to carefully reexamine the merits of the Tunney Act.  Any such reexamination, however, should be independent of, and not delay expedited consideration of, the SMARTER Act.  The Tunney Act, although of undoubted significance, is only a tangential aspect of the divergent legal standards that apply to FTC and DOJ merger reviews.  It is beyond the scope of current legislative proposals, but it merits being taken up at an appropriate time – perhaps in the next Congress.  When Congress turns to the Tunney Act, it may wish to consider four options: (1) repealing the Act in its entirety; (2) retaining the Act as is; (3) repealing it only with respect to merger reviews; or (4) applying it in full force to the FTC as well.  A detailed evaluation of those options is beyond the scope of this commentary.

Conclusion

In sum, in order to eliminate inconsistencies between FTC and DOJ standards for reviewing proposed mergers, Congress should give serious consideration to enacting the SMARTER Act, which would both eliminate FTC administrative review of merger proposals and subject the FTC to the same injunctive standard as DOJ in judicial review of those proposals.  Moreover, if the SMARTER Act is enacted, Congress should also consider going further and amending the Tunney Act to apply to FTC as well as DOJ merger settlements – or, alternatively, to apply to no merger settlements at all (a result that would better respect the constitutional separation of powers and reduce a potential source of economic inefficiency).