Archives for: FTC

FTC Commissioner Josh Wright pens an incredibly important dissent in the FTC’s recent Ardagh/Saint-Gobain merger review.

At issue is how pro-competitive efficiencies should be considered by the agency under the Merger Guidelines.

As Josh notes, the core problem is the burden of proof:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other.

In the summer of 1995, I spent a few weeks at the FTC. It was the end of the summer and nearly the entire office was on vacation, so I was left dealing with the most arduous tasks. In addition to fielding calls from Joe Sims prodding the agency to finish the Turner/Time Warner merger consent, I also worked on early drafting of the efficiencies defense, which was eventually incorporated into the 1997 Merger Guidelines revision.

The efficiencies defense was added to the Guidelines specifically to correct a defect of the pre-1997 Guidelines era in which

It is unlikely that efficiencies were recognized as an antitrust defense…. Even if efficiencies were thought to have a significant impact on the outcome of the case, the 1984 Guidelines stated that the defense should be based on “clear and convincing” evidence. Appeals Court Judge and former Assistant Attorney General for Antitrust Ginsburg has recently called reaching this standard “well-nigh impossible.” Further, even if defendants can meet this level of proof, only efficiencies in the relevant anticompetitive market may count.

The clear intention was to improve outcomes by ensuring that net pro-competitive mergers wouldn’t be thwarted. But even under the 1997 (and still under the 2010) Guidelines,

the merging firms must substantiate efficiency claims so that the Agency can verify by reasonable means the likelihood and magnitude of each asserted efficiency, how and when each would be achieved (and any costs of doing so), how each would enhance the merged firm’s ability and incentive to compete, and why each would be merger-specific. Efficiency claims will not be considered if they are vague or speculative or otherwise cannot be verified by reasonable means.

The 2006 Guidelines Commentary further supports the notion that the parties bear a substantial burden of demonstrating efficiencies.

As Josh notes, however:

Efficiencies, like anticompetitive effects, cannot and should not be presumed into existence. However, symmetrical treatment in both theory and practice of evidence proffered to discharge the respective burdens of proof facing the agencies and merging parties is necessary for consumer‐welfare based merger policy

There is no economic basis for demanding more proof of claimed efficiencies than of claimed anticompetitive harms. And the Guidelines since 1997 were (ostensibly) drafted in part precisely to ensure that efficiencies were appropriately considered by the agencies (and the courts) in their enforcement decisions.

But as Josh notes, this has not really been the case, much to the detriment of consumer-welfare-enhancing merger review:

To the extent the Merger Guidelines are interpreted or applied to impose asymmetric burdens upon the agencies and parties to establish anticompetitive effects and efficiencies, respectively, such interpretations do not make economic sense and are inconsistent with a merger policy designed to promote consumer welfare. Application of a more symmetric standard is unlikely to allow, as the Commission alludes to, the efficiencies defense to “swallow the whole of Section 7 of the Clayton Act.” A cursory read of the cases is sufficient to put to rest any concerns that the efficiencies defense is a mortal threat to agency activity under the Clayton Act. The much more pressing concern at present is whether application of asymmetric burdens of proof in merger review will swallow the efficiencies defense.

It benefits consumers to permit mergers that offer efficiencies that offset presumed anticompetitive effects. To the extent that the agencies, as in the Ardagh/Saint-Gobain merger, discount efficiencies evidence relative to their treatment of anticompetitive effects evidence, consumers will be harmed and the agencies will fail to fulfill their mandate.

This is an enormously significant issue, and Josh should be widely commended for raising it in this case. With luck it will spur a broader discussion and, someday, a more appropriate treatment in the Guidelines and by the agencies of merger efficiencies.

 

Last month the Wall Street Journal raised the specter of an antitrust challenge to the proposed Jos. A. Bank/Men’s Wearhouse merger.

Whether a challenge is forthcoming appears to turn, of course, on market definition:

An important question in the FTC’s review will be whether it believes the two companies compete in a market that is more specialized than the broad men’s apparel market. If the commission concludes the companies do compete in a different space than retailers like Macy’s, Kohl’s and J.C. Penney, then the merger partners could face a more-difficult government review.

You’ll be excused for recalling that the last time you bought a suit you shopped at Jos. A. Bank and Macy’s before making your purchase at Nordstrom Rack, and for thinking that the idea of a relevant market comprising Jos. A. Bank and Men’s Wearhouse to the exclusion of the others is absurd. Because, you see, as the article notes (quoting Darren Tucker),

“The FTC sometimes segments markets in ways that can appear counterintuitive to the public.”

“Ah,” you say to yourself. “In other words, if the FTC’s rigorous econometric analysis shows that prices at Macy’s don’t actually affect pricing decisions at Men’s Wearhouse, then I’d be surprised, but so be it.”

But that’s not what he means by “counterintuitive.” Rather,

The commission’s analysis, he said, will largely turn on how the companies have viewed the market in their own ordinary-course business documents.

According to this logic, even if Macy’s does exert pricing pressure on Jos. A. Bank, if Jos. A. Bank’s business documents talk about Men’s Wearhouse as its only real competition, or suggest that the two companies “dominate” the “mid-range men’s apparel market,” then the FTC may decide to challenge the deal.

I don’t mean to single out Darren here; he just happens to be the one the article quotes, and this kind of thinking is de rigueur.

But it’s just wrong. Or, I should say, it may be descriptively accurate — it may be that the FTC will make its enforcement decision (and the court would make its ruling) on the basis of business documents — but it’s just wrong as a matter of economics, common sense, logic and the protection of consumer welfare.

One can’t help but think of the Whole Foods/Wild Oats merger and the FTC’s ridiculous “premium, natural and organic supermarkets” market. As I said of that market definition:

In other words, there is a serious risk of conflating a “market” for business purposes with an actual antitrust-relevant market. Whole Foods and Wild Oats may view themselves as operating in a different world than Wal-Mart. But their self-characterization is largely irrelevant. What matters is whether customers who shop at Whole Foods would shop elsewhere for substitute products if Whole Food’s prices rose too much. The implicit notion that the availability of organic foods at Wal-Mart (to say nothing of pretty much every other grocery store in the US today!) exerts little or no competitive pressure on prices at Whole Foods seems facially silly.

I don’t know for certain what an econometric analysis would show, but I would indeed be shocked if a legitimate economic analysis suggested that Jos. A. Bank and Men’s Wearhouse occupied all or most of any relevant market. For the most part — and certainly for the marginal consumer — there is no meaningful difference between a basic, grey worsted wool suit bought at a big department store in the mall and a similar suit bought at a small retailer in the same mall or a “warehouse” store across the street. And the barriers to entry in such a market, if it existed, would be insignificant. Again, what I said of Whole Foods/Wild Oats is surely true here, too:

But because economically-relevant market definition turns on demand elasticity among consumers who are often free to purchase products from multiple distribution channels, a myopic focus on a single channel of distribution to the exclusion of others is dangerous.

Let’s hope the FTC gets it right this time.

As it begins its hundredth year, the FTC is increasingly becoming the Federal Technology Commission. The agency’s role in regulating data security, privacy, the Internet of Things, high-tech antitrust and patents, among other things, has once again brought to the forefront the question of the agency’s discretion and the sources of the limits on its power.

Please join us this Monday, December 16th, for a half-day conference launching the year-long “FTC: Technology & Reform Project,” which will assess both process and substance at the FTC and recommend concrete reforms to help ensure that the FTC continues to make consumers better off.

FTC Commissioner Josh Wright will give a keynote luncheon address titled, “The Need for Limits on Agency Discretion and the Case for Section 5 UMC Guidelines.” Project members will discuss the themes raised in our inaugural report and how they might inform some of the most pressing issues of FTC process and substance confronting the FTC, Congress and the courts. The afternoon will conclude with a Fireside Chat with former FTC Chairmen Tim Muris and Bill Kovacic, followed by a cocktail reception.

Full Agenda:

  • Lunch and Keynote Address (12:00-1:00)
    • FTC Commissioner Joshua Wright
  • Introduction to the Project and the “Questions & Frameworks” Report (1:00-1:15)
    • Gus Hurwitz, Geoffrey Manne and Berin Szoka
  • Panel 1: Limits on FTC Discretion: Institutional Structure & Economics (1:15-2:30)
    • Jeffrey Eisenach (AEI | Former Economist, BE)
    • Todd Zywicki (GMU Law | Former Director, OPP)
    • Tad Lipsky (Latham & Watkins)
    • Geoffrey Manne (ICLE) (moderator)
  • Panel 2: Section 5 and the Future of the FTC (2:45-4:00)
    • Paul Rubin (Emory University Law and Economics | Former Director of Advertising Economics, BE)
    • James Cooper (GMU Law | Former Acting Director, OPP)
    • Gus Hurwitz (University of Nebraska Law)
    • Berin Szoka (TechFreedom) (moderator)
  • A Fireside Chat with Former FTC Chairmen (4:15-5:30)
    • Tim Muris (Former FTC Chairman | George Mason University) & Bill Kovacic (Former FTC Chairman | George Washington University)
  • Reception (5:30-6:30)
Our conference is a “widely-attended event.” Registration is $75 but free for nonprofit, media and government attendees. Space is limited, so RSVP today!

Working Group Members:
Howard Beales
Terry Calvani
James Cooper
Jeffrey Eisenach
Gus Hurwitz
Thom Lambert
Tad Lipsky
Geoffrey Manne
Timothy Muris
Paul Rubin
Joanna Shepherd-Bailey
Joe Sims
Berin Szoka
Sasha Volokh
Todd Zywicki


Please join us at the Willard Hotel in Washington, DC on December 16th for a conference launching the year-long project, “FTC: Technology and Reform.” With complex technological issues increasingly on the FTC’s docket, we will consider what it means that the FTC is fast becoming the Federal Technology Commission.

The FTC: Technology & Reform Project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology.

For many, new technologies represent “challenges” to the agency, a continuous stream of complex threats to consumers that can be mitigated only by ongoing regulatory vigilance. We view technology differently, as an overwhelmingly positive force for consumers. To us, the FTC’s role is to promote the consumer benefits of new technology — not to “tame the beast” but to intervene only with caution, when the likely consumer benefits of regulation outweigh the risk of regulatory error. This conference is the start of a year-long project that will recommend concrete reforms to ensure that the FTC’s treatment of technology works to make consumers better off.

Below is the text of my oral testimony to the Senate Commerce, Science and Transportation Committee, the Consumer Protection, Product Safety, and Insurance Subcommittee, at its November 7, 2013 hearing on “Demand Letters and Consumer Protection: Examining Deceptive Practices by Patent Assertion Entities.” Information on the hearing is here, including an archived webcast of the hearing. My much longer and more in-depth written testimony is here.

Please note that I am incorrectly identified on the hearing website as speaking on behalf of the Center for the Protection of Intellectual Property (CPIP). In fact, I was invited to testify solely in my personal capacity as a Professor of Law at George Mason University School of Law, given my academic research into the history of the patent system and the role of licensing and commercialization in the distribution of patented innovation. I spoke for neither George Mason University nor CPIP, and thus I am solely responsible for the content of my research and remarks.

Chairman McCaskill, Ranking Member Heller, and Members of the Subcommittee:

Thank you for this opportunity to speak with you today.

There certainly are bad actors, deceptive demand letters, and frivolous litigation in the patent system. The important question, though, is whether there is a systemic problem requiring further systemic revisions to the patent system. The answer is no, and this is the case for three reasons.

Harm to Innovation

First, the calls to rush to enact systemic revisions to the patent system are being made without established evidence that there is in fact systemic harm to innovation, let alone any harm to the consumers that Section 5 authorizes the FTC to protect. As the Government Accountability Office found in its August 2013 report on patent litigation, the frequently cited studies claiming harms are actually “nonrandom and nongeneralizable,” which means they are unscientific and unreliable.

These anecdotal reports and unreliable studies do not prove there is a systemic problem requiring a systemic revision to patent licensing practices.

Of even greater concern is that the many changes to the patent system Congress is considering, including extending the FTC’s authority over demand letters, would impose serious costs on real innovators and thus do actual harm to America’s innovation economy and job growth.

From Charles Goodyear and Thomas Edison in the nineteenth century to IBM and Microsoft today, patent licensing has been essential in bringing patented innovation to the marketplace, creating economic growth and a flourishing society. But expanding FTC authority to regulate requests for licensing royalties under vague evidentiary and legal standards only weakens patents and creates costly uncertainty.

This will hamper America’s innovation economy—causing reduced economic growth, lost jobs, and reduced standards of living for everyone, including the consumers the FTC is charged to protect.

Existing Tools

Second, the Patent and Trademark Office (PTO) and courts have long had the legal tools to weed out bad patents and punish bad actors, and these tools were massively expanded just two years ago with the enactment of the America Invents Act.

This is important because the real concern with demand letters is that the underlying patents are invalid.

No one denies that owners of valid patents have the right to license their property or to sue infringers, or that patent owners can even make patent licensing their sole business model, as did Charles Goodyear and Elias Howe in the mid-nineteenth century.

There are too many of these tools to discuss in my brief remarks, but to name just a few: recipients of demand letters can sue patent owners in courts through declaratory judgment actions and invalidate bad patents. And the PTO now has four separate programs dedicated solely to weeding out bad patents.

For those who lack the knowledge or resources to access these legal tools, there are now numerous legal clinics, law firms and policy organizations that actively offer assistance.

Again, further systemic changes to the patent system are unwarranted because there are existing legal tools with established legal standards to address the bad actors and their bad patents.

If Congress enacts a law this year, then it should secure full funding for the PTO. Weakening patents and creating more uncertainties in the licensing process is not the solution.

Rhetoric

Lastly, Congress is being driven to revise the patent system on the basis of rhetoric and anecdote instead of objective evidence and reasoned explanations. While there are bad actors in the patent system, terms like PAE or patent troll constantly shift in meaning. These terms have been used to cover anyone who licenses patents, including universities, startups, companies that engage in R&D, and many others.

Classic American innovators in the nineteenth century like Thomas Edison, Charles Goodyear, and Elias Howe would be called PAEs or patent trolls today. In fact, they and other patent owners made royalty demands against thousands of end users.

Congress should exercise restraint when it is being asked to enact systemic legislative or regulatory changes on the basis of pejorative labels that would lead us to condemn or discriminate against classic innovators like Edison who have contributed immensely to America’s innovation economy.

Conclusion

In conclusion, the benefits and costs of patent licensing to the innovation economy present an important empirical and policy question, but systemic changes to the patent system should not be based on rhetoric, anecdotes, invalid studies, and incorrect claims about the historical and economic significance of patent licensing.

As former PTO Director David Kappos stated last week in his testimony before the House Judiciary Committee: “we are reworking the greatest innovation engine the world has ever known, almost instantly after it has just been significantly overhauled. If there were ever a case where caution is called for, this is it.”

Thank you.

Commissioner Wright makes a powerful and important case in dissenting from the FTC’s 2-1 decision imposing conditions on Nielsen’s acquisition of Arbitron (Commissioner Ohlhausen was recused from the matter).

Essential to Josh’s dissent is the absence of any actual existing market supporting the Commission’s challenge:

Nielsen and Arbitron do not currently compete in the sale of national syndicated cross-platform audience measurement services. In fact, there is no commercially available national syndicated cross-platform audience measurement service today. The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

* * *

[W]e…do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

* * *

To be clear, I do not base my disagreement with the Commission today on the possibility that the potential efficiencies arising from the transaction would offset any anticompetitive effect. As discussed above, I find no reason to believe the transaction is likely to substantially lessen competition because the evidence does not support the conclusion that it is likely to generate anticompetitive effects in the alleged relevant market.

This is the kind of theory that seriously threatens innovation. Regulators in Washington are singularly ill-positioned to predict the course of technological evolution — that’s why they’re regulators and not billionaire innovators. To impose antitrust-based constraints on economic activity that hasn’t even yet occurred is the height of folly. As Virginia Postrel discusses in The Future and Its Enemies, this is the technocratic mindset, in all its stasist glory:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation.

* * *

By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

* * *

For technocrats, a kaleidoscope of trial-and-error innovation is not enough; decentralized experiments lack coherence. “Today, we have an opportunity to shape technology,” wrote [Newt] Gingrich in classic technocratic style. His message was that computer technology is too important to be left to hackers, hobbyists, entrepreneurs, venture capitalists, and computer buyers. “We” must shape it into a “coherent picture.” That is the technocratic notion of progress: Decide on the one best way, make a plan, and stick to it.

It should go without saying that this is the antithesis of the environment most conducive to economic advance. Whatever antitrust’s role in regulating technology markets, it must be evidence-based, grounded in economics and aware of its own limitations.

As Josh notes:

A future market case, such as the one alleged by the Commission today, presents a number of unique challenges not confronted in a typical merger review or even in “actual potential competition” cases. For instance, it is inherently more difficult in future market cases to define properly the relevant product market, to identify likely buyers and sellers, to estimate cross-elasticities of demand or understand on a more qualitative level potential product substitutability, and to ascertain the set of potential entrants and their likely incentives. Although all merger review necessarily is forward looking, it is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer these basic questions upon which proper merger analysis is based.

* * *

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Josh’s dissent also contains an important, related criticism of the FTC’s problematic reliance on consent agreements. It’s so good, in fact, I will quote it almost in its entirety:

Whether parties to a transaction are willing to enter into a consent agreement will often have little to do with whether the agreed upon remedy actually promotes consumer welfare. The Commission’s ability to obtain concessions instead reflects the weighing by the parties of the private costs and private benefits of delaying the transaction and potentially litigating the merger against the private costs and private benefits of acquiescing to the proposed terms. Indeed, one can imagine that where, as here, the alleged relevant product market is small relative to the overall deal size, the parties would be happy to agree to concessions that cost very little and finally permit the deal to close. Put simply, where there is no reason to believe a transaction violates the antitrust laws, a sincerely held view that a consent decree will improve upon the post-merger competitive outcome or have other beneficial effects does not justify imposing those conditions. Instead, entering into such agreements subtly, and in my view harmfully, shifts the Commission’s mission from that of antitrust enforcer to a much broader mandate of “fixing” a variety of perceived economic welfare-reducing arrangements.

Consents can and do play an important and productive role in the Commission’s competition enforcement mission. Consents can efficiently address competitive concerns arising from a merger by allowing the Commission to reach a resolution more quickly and at less expense than would be possible through litigation. However, consents potentially also can have a detrimental impact upon consumers. The Commission’s consents serve as important guidance and inform practitioners and the business community about how the agency is likely to view and remedy certain mergers. Where the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare. Because there is no judicial approval of Commission settlements, it is especially important that the Commission take care to ensure its consents are in the public interest.

The significance of the FTC’s tendency to, in effect, legislate by consent decree is an issue of great importance, particularly in its Section 5 practice (as we discuss in our amicus brief in the Wyndham case).

As the FTC begins its 100th year next week, we need more voices like those of Commissioners Wright and Ohlhausen challenging the FTC’s harmful, technocratic mindset.

[Cross posted at The Center for the Protection of Intellectual Property]

In a prior blog posting, I explained how reports of a so-called “patent litigation explosion” today are just wrong. As I detailed in another blog posting, not only are patent litigation rates today consistent with historical patent litigation rates in the nineteenth century, there is actually less litigation today than during some decades in the early nineteenth century. Between 1840 and 1849, for instance, the patent litigation rate was 3.6% — more than twice the patent litigation rate today.

(As an aside, we have to control for the number of issued patents in computing litigation percentage rates, because more patents are issued now per year than twice the total population of New York City (NYC) in 1820 — 253,315 patents issued in 2012 compared to 123,706 residents in NYC in 1820. Yet before someone says that this just means we have too many patents today, as Judge Posner blithely asserts without any empirical evidence, one must also recognize that the NYC population in 2013 is 8.3 million, which is far beyond merely double its 1820 population — NYC’s population has grown by a factor of 67! A simple comparison to population growth, especially taking into account the explosive growth in the innovation industries in the past several decades, could as easily justify the claim that we don’t have enough patents issuing today.)
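To make the aside’s arithmetic concrete, here is a minimal, purely illustrative sketch in Python that uses only the figures cited above (the variable names are mine, chosen for illustration):

# Illustrative only: the numbers are the ones cited in the paragraph above.
patents_issued_2012 = 253315      # patents issued in 2012
nyc_population_1820 = 123706      # NYC residents in 1820
nyc_population_2013 = 8300000     # NYC residents in 2013 (approximate)

# Annual patent issuance today is roughly double NYC's 1820 population...
patents_vs_1820_pop = patents_issued_2012 / nyc_population_1820   # ~2.0

# ...while NYC's population itself has grown by a factor of roughly 67.
nyc_growth_factor = nyc_population_2013 / nyc_population_1820     # ~67.1

print(round(patents_vs_1820_pop, 1), round(nyc_growth_factor, 1))

In other words, normalized against population growth alone, patent issuance has grown far more slowly than the raw counts suggest, which is the point of the comparison.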

Unfortunately, the mythical claims about a “patent litigation explosion” have shifted in recent months (perhaps because the original assertion was untenable).  Now the assertion is that there has been an “explosion” in lawsuits brought by patent licensing companies.  I’ll note for the record here that patent licensing companies are often referred to today by the undefined and nonobjective rhetorical epithet of “patent troll.”  In a recent study of patent licensing companies that exposes many of the unsound and unproven claims about these much-maligned companies – such as that patents owned by these companies are of lower quality than those owned by manufacturing entities – Stephen Moore first explained that the “troll” slur is used today by academics, commentators and the public alike “without a universally accepted definition.” So, let’s dispense with nonobjective rhetoric and simply identify these companies factually by their business models: patent licensing.

As with all discriminatory slurs, it’s unsurprising that this new claim about an alleged “explosion” in so-called “patent troll” lawsuits is unproven rubbish. Similar to the myth about patent litigation generally, this is just another example of overwrought and empirically unsound rhetoric being used to push a policy agenda in Congress and regulatory agencies. (Six bills have been introduced on the Hill so far this year, and FTC Chairwoman Edith Ramirez has announced that the FTC intends to begin a formal § 6(b) investigation of patent licensing companies.)

How do we know that patent licensing companies are not the sole driver of any increases in patent litigation? Contrary to the much-hyped claim today that patent licensing companies are the primary cause of most patent lawsuits in district courts in 2012, other serious and more careful reviews of the litigation data have shown that the primary culprit is not patent licensing companies, but rather the America Invents Act of 2011 (“AIA”). The AIA created numerous new administrative proceedings for invalidating patents at the Patent & Trademark Office, which created additional incentives to file lawsuits in certain contexts. Moreover, the AIA expressly prohibited joinder of multiple defendants in single lawsuits. Both of these significant changes to the patent system have produced the entirely logical and expected result of more lawsuits being filed after the AIA’s statutory provisions went into effect in 2011 and 2012. In basic statistics terms, the effect of these statutory provisions in any study of patent litigation rates that does not take them into account is referred to as a “confounding variable.”

Even more important, when the data used in one of the most-referenced studies asserting a patent litigation explosion by patent licensing companies was tested by a highly respected scholar who specializes in statistical and empirical analyses of the patent system, he reported that he found no statistically significant results. (See Dave Schwartz’s testimony at the DOJ-FTC Workshop (Dec. 10, 2012), starting at approximately 1:58 at this video. Transcript available here.) At least the scholars of this disputed study made their data available for confirmation, in accordance with basic scientific norms. Other prominently cited studies on patent licensing companies have relied on secret data from companies like RPX, Patent Freedom, and other firms that have a very large dog in the litigation and policy fight, and thus this data has all of the trappings of being unreliable and biased (see here and here).

The important role that the AIA is playing in increasing patent lawsuits by patent licensing companies is ironic if only because the people misreporting the patent litigation data are the same people who were big proponents of the AIA (some of them even attended the AIA’s signing ceremony with President Obama in September 2011). Among non-patent scholars, this is called trying to have your cake and eat it, too. Such efforts usually fail, as any child who tries to get away with this logical fallacy discovers. It shows the depths to which the patent policy debates have sunk that the press, Congress, the President and many others don’t seem to care about this one bit and instead are pushing ahead and repeating – and even drafting legislation based upon – bad “statistics” with serious methodological problems and compiled from secret, unreliable data.

With Congress rushing headlong to enact legislation that discriminates against patent licensing companies, it’s time to step back and start asking serious questions before the legal system that makes the innovation industries possible is changed and we discover too late that it’s for the worse. It’s time to set aside rhetoric and made-up “statistics” based on secret data and to ask whether there really is a systemic problem. It’s also time to start asking serious questions about why these myths were created in the first place, what the raw data actually says, who is providing the data and funding these “troll” studies, and who is pushing this rhetoric into the public policy debates to the point that it has become a deafening roar that makes all reasonable and sensible discussion impossible.

[NOTE: minor grammatical and style changes were made after the initial posting]

 

Joshua Wright is a Commissioner at the Federal Trade Commission.

I’d like to thank Geoff and Thom for organizing this symposium and creating a forum for an open and frank exchange of ideas about the FTC’s unfair methods of competition authority under Section 5.  In offering my own views in a concrete proposed Policy Statement and speech earlier this summer, I hoped to encourage just such a discussion about how the Commission can define its authority to prosecute unfair methods of competition in a way that both strengthens the agency’s ability to target anticompetitive conduct and provides much needed guidance to the business community.  During the course of this symposium, I have enjoyed reading the many thoughtful posts providing feedback on my specific proposal, as well as offering other views on how guidance and limits can be imposed on the Commission’s unfair methods of competition authority.  Through this marketplace of ideas, I believe the Commission can develop a consensus position and finally accomplish the long overdue task of articulating its views on the application of the agency’s signature competition statute.  As this symposium comes to a close, I’d like to make a couple quick observations and respond to a few specific comments about my proposal.

There Exists a Vast Area of Agreement on Section 5

Although conventional wisdom may suggest it will be impossible to reach any meaningful consensus with respect to Section 5, this symposium demonstrates that there actually already exists a vast area of agreement on the subject.  In fact, it appears safe to draw at least two broad conclusions from the contributions that have been offered as part of this symposium.

First, an overwhelming majority of commentators believe that we need guidance on the scope of the FTC’s unfair methods of competition authority.  This is not surprising.  The absence of meaningful limiting principles distinguishing lawful conduct from unlawful conduct under Section 5 and the breadth of the Commission’s authority to prosecute unfair methods of competition creates significant uncertainty among the business community.  Moreover, without a coherent framework for applying Section 5, the Commission cannot possibly hope to fulfill Congress’s vision that Section 5 would play a key role in helping the FTC leverage its unique research and reporting functions to develop evidence-based competition policy.

Second, there is near unanimity that the FTC should challenge conduct as an unfair method of competition only if it results in “harm to competition” as the phrase is understood under the traditional federal antitrust laws. Harm to competition is a concept that is readily understandable and has been deeply embedded in antitrust jurisprudence. Incorporating this concept would require that any conduct challenged under Section 5 must both harm the competitive process and harm consumers. Under this approach, the FTC should not consider non-economic factors, such as whether the practice harms small business or whether it violates public morals, in deciding whether to prosecute conduct as an unfair method of competition. This is a simple commitment, but one that is not currently enshrined in the law. By tethering the definition of unfair methods of competition to modern economics and to the understanding of competitive harm articulated in contemporary antitrust jurisprudence, we would ensure Section 5 enforcement focuses upon conduct that actually is anticompetitive.

While it is not surprising that commentators offering a diverse set of perspectives on the appropriate scope of the FTC’s unfair methods of competition authority would agree on these two points, I think it is important to note that this consensus covers much of the Section 5 debate while leaving some room for debate on the margins as to how the FTC can best use its unfair methods of competition authority to complement its mission of protecting competition.

Some Clarifications Regarding My Proposed Policy Statement

In the spirit of furthering the debate along those margins, I also briefly would like to correct the record, or at least provide some clarification, on a few aspects of my proposed Policy Statement.

First, contrary to David Balto’s suggestion, my proposed Policy Statement acknowledges the fact that Congress envisioned Section 5 to be an incipiency statute.  Indeed, the first element of my proposed definition of unfair methods of competition requires the FTC to show that the act or practice in question “harms or is likely to harm competition significantly.”  In fact, it is by prosecuting practices that have not yet resulted in harm to competition, but are likely to result in anticompetitive effects if allowed to continue, that my definition reaches “invitations to collude.”  Paul Denis raises an interesting question about how the FTC should assess the likelihood of harm to competition, and suggests doing so using an expected value test.  My proposed policy statement does just that by requiring the FTC to assess both the magnitude and probability of the competitive harm when determining whether a practice that has not yet harmed competition, but potentially is likely to, is an unfair method of competition under Section 5.  Where the probability of competitive harm is smaller, the Commission should not find an unfair method of competition without reason to believe the conduct poses a substantial harm.  Moreover, by requiring the FTC to show that the conduct in question results in “harm to competition” as that phrase is understood under the traditional federal antitrust laws, my proposal also incorporates all the temporal elements of harm discussed in the antitrust case law and therefore puts the Commission on the same footing as the courts.

Second, both Dan Crane and Marina Lao have suggested that the efficiencies screen I have proposed results in a null (or very small) set of cases because there is virtually no conduct for which some efficiencies cannot be claimed.  This suggestion stems from an apparent misunderstanding of the efficiencies screen.  What these comments fail to recognize is that the efficiencies screen I offer intentionally leverages the Commission’s considerable expertise in identifying the presence of cognizable efficiencies in the merger context and explicitly ties the analysis to the well-developed framework offered in the Horizontal Merger Guidelines.  As any antitrust practitioner can attest, the Commission does not credit “cognizable efficiencies” lightly and requires a rigorous showing that the claimed efficiencies are merger-specific, verifiable, and not derived from an anticompetitive reduction in output or service.  Fears that the efficiencies screen in the Section 5 context would immunize patently anticompetitive conduct because a firm nakedly asserts cost savings arising from the conduct without evidence supporting its claim are unwarranted.  Under this strict standard, the FTC would almost certainly have no trouble demonstrating no cognizable efficiencies exist in Dan’s “blowing up of the competitor’s factory” example because the very act of sabotage amounts to an anticompetitive reduction in output.

Third, Marina Lao further argues that permitting the FTC to challenge conduct as an unfair method of competition only when there are no cognizable efficiencies is too strict a standard and that it would be better to allow the agency to balance the harms against the efficiencies.  The current formulation of the Commission’s unfair methods of competition enforcement has proven unworkable in large part because it lacks clear boundaries and is both malleable and ambiguous.  In my view, in order to make Section 5 a meaningful statute, and one that can contribute productively to the Commission’s competition enforcement mission as envisioned by Congress, the Commission must first confine its unfair methods of competition authority to those areas where it can leverage its unique institutional capabilities to target the conduct most harmful to consumers.  This in no way requires the Commission to let anticompetitive conduct run rampant.  Where the FTC identifies and wants to challenge conduct with both harms and benefits, it is fully capable of doing so successfully in federal court under the traditional antitrust laws.

I cannot think of a contribution the Commission can make to the FTC’s competition mission that is more important than issuing a Policy Statement articulating the appropriate application of Section 5.  I look forward to continuing to exchange ideas with those both inside and outside the agency regarding how the Commission can provide guidance about its unfair methods of competition authority.  Thank you once again to Truth on the Market for organizing and hosting this symposium and to the many participants for their thoughtful contributions.

*The views expressed here are my own and do not reflect those of the Commission or any other Commissioner.

Tad Lipsky is a partner in the law firm of Latham & Watkins LLP.

The FTC’s struggle to provide guidance for its enforcement of Section 5’s Unfair Methods of Competition (UMC) clause (or not – some oppose the provision of forward guidance by the agency, much as one occasionally heard opposition to the concept of merger guidelines in 1968 and again in 1982) could evoke a much broader long-run issue: is a federal law regulating single-firm conduct worth the trouble?  Antitrust law has its hard spots and its soft spots: I imagine that most antitrust lawyers think they can define “naked” price-fixing and other hard-core cartel conduct, and they would defend having a law that prohibits it.  Similarly with a law that prohibits anticompetitive mergers.  Monopolization perhaps not so much: 123 years of Section 2 enforcement and the best our Supreme Court can do is the Grinnell standard, defining monopolization as the “willful acquisition or maintenance of [monopoly] power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident.”  Is this Grinnell definition that much better than “unfair methods of competition”?

The Court has created a few specific conduct categories within the Grinnell rubric: sham petitioning (objectively and subjectively baseless appeals for government action), predatory pricing (pricing below cost with a reasonable prospect of recoupment through the exercise of power obtained by achieving monopoly or disciplining competitors), and unlawful tying (using market power over one product to force the purchase of a distinct product – you probably know the rest). These categories are neither perfectly clear (what measure of cost indicates a predatory price?) nor guaranteed to last (the presumption that a patent bestows market power within the meaning of the tying rule was abandoned in 2006). At least the more specific categories give some guidance to lower courts, prosecutors, litigants and – most important of all – compliance-inclined businesses. They provide more useful guidance than Grinnell.

The scope for differences of opinion regarding the definition of monopolization is at an historical zenith.  Some of the least civilized disagreements between the FTC and the Antitrust Division – the Justice Department’s visible contempt for the FTC’s ReaLemon decision in the early 1980’s, or the three-Commissioner vilification of the Justice Department’s 2008 report on unilateral conduct – concern these differences.  The 2009 Justice Department theatrically withdrew the 2008 Justice Department’s report, claiming (against clear objective evidence to the contrary) that the issue was settled in its favor by Lorain Journal, Aspen Skiing, and the D.C. Circuit decision in the main case involving Microsoft.

Although less noted in the copious scholarly output concerning UMC, disputes about the meaning of Section 5 are encouraged by the lack of definitive guidance on monopolization.  For every clarification provided by the Supreme Court, the FTC’s room for maneuver under UMC is reduced.  The FTC could not define sham litigation inconsistently with Professional Real Estate Investors v. Columbia Pictures Industries; it could not read recoupment out of the Brooke Group v. Brown & Williamson Tobacco Co. definition of predatory pricing.

The fact remains that there has been less-than-satisfactory clarification of single-firm conduct standards under either statute. Grinnell remains the only “guideline” for the vast territory of Section 2 enforcement (aside from the specific categories mentioned above), especially since the Supreme Court has shown no enthusiasm for either of the two main appellate-court approaches to a general test for unlawful unilateral conduct under Section 2, the “intent test” and the “essential facilities doctrine.” (It has not rejected them, either.) The current differences of opinion – even within the Commission itself, to say nothing of the appellate courts – are emblematic of a similar failure with regard to UMC. Failure to clarify rules of such universal applicability has obvious costs and adverse impacts: creative and competitively benign business conduct is deterred (with corresponding losses in innovation, productivity and welfare), and the costs, delays, disruption and other burdens of litigation are amplified. Are these costs worth bearing?

Years ago I heard it said that a certain old-line law firm had tightened its standards of partner performance: whereas formerly the firm would expel a partner who remained drunk for ten years, the new rule was that a partner could remain drunk only for five years.  The antitrust standards for unilateral conduct have vacillated for over a century.  For a time (as exemplified by United States v. United Shoe Machinery Corp.) any act of self-preservation by a monopolist – even if “honestly industrial” – was presumptively unlawful if not compelled by outside circumstances.  Even Grinnell looks good compared to that, but Grinnell still fails to provide much help in most Section 2 cases; and the debate over UMC says the same about Section 5.  I do not advocate the repeal of either statute, but shouldn’t we expect that someone might want to tighten our standards?  Maybe we can allow a statute a hundred years to be clarified through common-law application.  Section 2 passed that milepost twenty-three years ago, and Section 5 reaches that point next year.  We shouldn’t be surprised if someone wants to pull the plug beyond that point.

Paul Denis is a partner at Dechert LLP and Deputy Chair of the Firm’s Global Litigation Practice.  His views do not necessarily reflect those of his firm or its clients.

Deterrence ought to be an important objective of enforcement policy.  Some might argue it should be THE objective.  But it is difficult to know what is being deterred by a law if the agency enforcing the law cannot or will not explain its boundaries.  Commissioner Wright’s call for a policy statement on the scope of Section 5 enforcement is a welcome step toward Section 5 achieving meaningful deterrence of competitively harmful conduct.

The draft policy statement has considerable breadth. I will limit myself to three concepts that I see as important to its application: the temporal dimension (applicable to both harm and efficiencies), the concept of harm to competition, and the concept of cognizable efficiencies.

Temporal Dimension

Commissioner Wright offers a compelling framework, but it is missing an important element — the temporal dimension. Over what time period must likely harm to competition be felt in order to be actionable? Similarly, over what time period must efficiencies be realized in order to be cognizable? On page 8 of the draft policy statement he notes that the Commission may challenge “practices that have not yet resulted in harm to competition but are likely to result in anticompetitive effects if allowed to continue.” When must those effects be felt? How good is the Commission’s crystal ball for predicting harm to competition when the claim is that the challenged conduct precluded some future competition from coming to market? Doesn’t that crystal ball get a bit murky when you are looking further into the future? Doesn’t it get particularly murky when the future effect depends on one or more other things happening between now and the time of feared anticompetitive effects?

We often hear from the Commission that arguments about future entry are too remote in time (although the bright line test of 2 years for entry to have an effect was pulled from the Horizontal Merger Guidelines).  Shouldn’t similar considerations be applied to claims of harm to competition?  The Commission has engaged in considerable innovation to try to get around the potential competition doctrine developed by the courts and the Commission under Section 7 of the Clayton Act.  The policy statement should consider whether there can be some temporal limit to Section 5 claims.  Perhaps the concept of likely harm to competition could be interpreted in an expected value sense, considering both probability of harm and timing of harm, but it is not obvious to me how that interpretation, whatever its theoretical appeal, could be made operational.  Bright line tests or presumptive time periods may be crude but may also be more easily applied.
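For what it is worth, here is one purely illustrative way the expected-value reading might be made operational: a minimal sketch in Python in which every input (the probabilities, magnitudes, and discount rate) is a hypothetical assumption, not anything drawn from the draft policy statement. It simply weights a predicted harm by its probability and discounts it for how far in the future it would be felt.

# Hypothetical illustration only: probability-weight a predicted competitive harm
# and discount it for remoteness in time. None of these numbers comes from the
# draft policy statement; they are assumptions chosen for the example.

def expected_discounted_harm(probability, magnitude, years_until_harm, annual_discount_rate=0.10):
    # Weight the harm by its likelihood, then discount it back to the present.
    discount_factor = 1 / (1 + annual_discount_rate) ** years_until_harm
    return probability * magnitude * discount_factor

# A likely near-term harm versus a speculative, remote one (hypothetical figures):
near_term = expected_discounted_harm(probability=0.6, magnitude=100.0, years_until_harm=1)   # ~54.5
remote = expected_discounted_harm(probability=0.2, magnitude=100.0, years_until_harm=8)      # ~9.3

print(round(near_term, 1), round(remote, 1))

The mechanics are simple; the hard part, as noted above, is choosing and defending those inputs, which is why presumptive time periods or bright-line tests, though crude, may be easier to apply.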

Harm to Competition

On the “harm to competition” element, I was left unclear whether this is a unified concept or whether there are two subparts to it. Commissioner Wright paraphrases Chicago Board of Trade and concludes that “Conduct challenged under Section 5 must harm competition and cause an anticompetitive effect.” (emphasis supplied). He then quotes Microsoft for the proposition that conduct “must harm the competitive process and thereby harm consumers.” (emphasis supplied). The indicators referenced at the bottom of page 18 of his speech strike me as indicators of harm to consumers rather than indicators of harm to the competitive process. Is there anything more to “harm to competition” than “harm to consumers”? If so, what is it? I think there probably should be something more than harm to consumers. If I develop a new product that drives all rivals from the market, the effect may be to increase prices and reduce output. But absent some bad act – some harm to the competitive process – my development of the new product should not expose me to a Section 5 claim or even the obligation to argue cognizable efficiencies.

On the subject of indicators, the draft policy statement notes that perhaps most relevant are price or output effects.  But Commissioner Wright’s speech goes on to note that increased prices, reduced output, diminished quality, or weakened incentives to innovate are also indicators (Speech at 19).  Shouldn’t this list be limited to output (or quality-adjusted output)?  If price goes up but output rises, isn’t that evidence that consumers have been benefitted?  Why should I have to defend myself by arguing that there are obvious efficiencies (as evidenced by the increased output)?  The reference to innovation is particularly confusing. I don’t believe there is a well developed theoretical or empirical basis for assessing innovation. The structural inferences that we make about price (often dubious themselves) don’t apply to innovation.  We don’t really know what leads to more or less innovation.  How is it that the Commission can see this indicator?  What is it that they are observing?

Cognizable Efficiencies

On cognizable efficiencies, there is a benefit in that the draft policy statement ties this element to the analogous concept used in merger enforcement. But there is a disadvantage in that the FTC staff usually finds that efficiencies advanced by the parties in mergers are not cognizable for one reason or another. Perhaps most of what the parties in mergers advance is not cognizable. But it strikes me as implausible that, after so many years of applying this concept, the Commission still rarely sees an efficiencies argument that is cognizable. Are merging parties and their counsel really that dense? Or is it that the goal posts keep moving to ensure that no one ever scores? Based on the history in mergers, I’m not sure this second element will amount to much. The respondent will assert cognizable efficiencies, the staff will reject them, and we will be back in the litigation morass that the draft policy statement was trying to avoid, limited only by the Commission’s need to show harm to competition.