
I posted this originally on my own blog, but decided to cross-post here since Thom and I have been blogging on this topic.

“The U.S. stock market is having another solid year. You wouldn’t know it by looking at the shares of companies that manage money.”

That’s the lead from Charles Stein on Bloomberg’s Markets’ page today. Stein goes on to offer three possible explanations: 1) a weary bull market, 2) a move toward more active stock-picking by individual investors, and 3) increasing pressure on fees.

So what has any of that to do with the common ownership issue? A few things.

First, it shows that large institutional investors must not be very good at harvesting the benefits of the non-competitive behavior they encourage among the firms they invest in–if you believe they actually do that in the first place. In other words, if you believe common ownership is a problem because CEOs are enriching institutional investors by softening competition, you must admit they’re doing a pretty lousy job of capturing that value.

Second, and more importantly–as well as more relevantly–the pressure on fees has led money managers to emphasize low-cost passive index funds. Indeed, among the firms doing well, according to the article, is BlackRock, whose index-tracking iShares exchange-traded fund business “won $20 billion.” In an aggressive move, Fidelity has introduced a total of four zero-fee index funds as a way to draw fee-conscious investors. These index-tracking funds are exactly the type of inter-industry diversified funds that negate any incentive for competition softening in any one industry.

Finally, this also illustrates the cost to the investing public of the limits on common ownership proposed by the likes of Einer Elhauge, Eric Posner, and Glen Weyl. Were these types of proposals in place, investment managers could not offer diversified index funds that include more than one firm’s stock from any industry with even a moderate level of market concentration. Given that competitive forces are pushing investment companies to increase their offerings of such low-cost index funds, any regulatory proposal that precludes those possibilities is sure to harm the investing public.

Just one more piece of real evidence that common ownership is not only not a problem, but that the proposed “fixes” are.

At the heart of the common ownership issue in the current antitrust debate is an empirical measure, the Modified Herfindahl-Hirschman Index (MHHI), that researchers have used to correlate patterns of common ownership with measures of firm behavior and performance. In an accompanying post, Thom Lambert provides a great summary of just what the MHHI, and more specifically the MHHIΔ, is and how it can be calculated. I’m going to free-ride off Thom’s effort, so if you’re not very familiar with the measure, I suggest you start here and here.

There are multiple problems with the common ownership story and with the empirical evidence proponents of stricter antitrust enforcement point to in order to justify their calls to action. Thom and I address a number of those problems in our recent paper on “The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.” However, one problem we don’t take on in that paper is the nature of the MHHIΔ itself. More specifically, what is one to make of it and how should it be interpreted, especially from a policy perspective?

The Policy Benchmark

The benchmark for discussion is the original Herfindahl-Hirschman Index (HHI), which has been part of antitrust for decades. The HHI is calculated by summing the squared market share of each firm. If shares are expressed as decimal fractions rather than as percentages, the sum is multiplied by 10,000. For instance, for two firms that split the market evenly, the HHI could be calculated either as:

HHI = 50² + 50² = 5,000, or
HHI = (0.50² + 0.50²) × 10,000 = 5,000

It’s a pretty simple exercise to see that one of the useful properties of HHI is that it is naturally bounded between 0 and 10,000. In the case of a pure monopoly that commands the entire market, the value of HHI is 10,000 (100²). As the number of firms increases and market shares approach very small fractions, the value of HHI asymptotically approaches 0. For a market with 10 firms that evenly share the market, for instance, HHI is 1,000; for 100 identical firms, HHI is 100; for 1,000 identical firms, HHI is 10. As a result, we know that when HHI is close to 10,000, the industry is highly concentrated in one firm; and when the HHI is close to zero, there is no meaningful concentration at all. Indeed, the Department of Justice’s Horizontal Merger Guidelines make use of this property of the HHI:

Based on their experience, the Agencies generally classify markets into three types:

  • Unconcentrated Markets: HHI below 1500
  • Moderately Concentrated Markets: HHI between 1500 and 2500
  • Highly Concentrated Markets: HHI above 2500

The Agencies employ the following general standards for the relevant markets they have defined:

  • Small Change in Concentration: Mergers involving an increase in the HHI of less than 100 points are unlikely to have adverse competitive effects and ordinarily require no further analysis.
  • Unconcentrated Markets: Mergers resulting in unconcentrated markets are unlikely to have adverse competitive effects and ordinarily require no further analysis.
  • Moderately Concentrated Markets: Mergers resulting in moderately concentrated markets that involve an increase in the HHI of more than 100 points potentially raise significant competitive concerns and often warrant scrutiny.
  • Highly Concentrated Markets: Mergers resulting in highly concentrated markets that involve an increase in the HHI of between 100 points and 200 points potentially raise significant competitive concerns and often warrant scrutiny. Mergers resulting in highly concentrated markets that involve an increase in the HHI of more than 200 points will be presumed to be likely to enhance market power. The presumption may be rebutted by persuasive evidence showing that the merger is unlikely to enhance market power.

Just by way of reference, an HHI of 2500 could reflect four firms sharing the market equally (i.e., 25% each), or it could be one firm with roughly 49.5% of the market and 51 identical small firms sharing the rest evenly.
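For readers who want to check those benchmark figures, here is a minimal Python sketch (our own illustration, with market shares expressed in percent):

def hhi(shares):
    # Sum of squared market shares; shares are in percent and sum to 100.
    return sum(s ** 2 for s in shares)

print(hhi([50, 50]))                   # duopoly: 5000
print(hhi([25, 25, 25, 25]))           # four equal firms: 2500
print(hhi([49.5] + [50.5 / 51] * 51))  # one ~49.5% firm plus 51 tiny equals: ~2500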

Injecting MHHIΔ Into the Mix

MHHI is intended to account for both the product market concentration among firms captured by the HHI, and the common ownership concentration across firms in the market measured by the MHHIΔ. In short, MHHI = HHI + MHHIΔ.

As Thom explains in great detail, MHHIΔ attempts to measure the combined effects of the relative influence of shareholders that own positions across competing firms on management’s strategic decision-making and the combined market shares of the commonly-owned firms. MHHIΔ is the measure used in the various empirical studies allegedly demonstrating a causal relationship between common ownership (higher MHHIΔs) and the supposed anti-competitive behavior of choice.
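To make the measure concrete, here is a minimal Python sketch of the MHHIΔ calculation under the standard simplifying assumption of proportional control (each investor’s influence over a firm is proportional to its ownership stake); the function name is our own, and the sketch deliberately ignores the control-weight refinements Thom discusses:

def mhhi_delta(shares, holdings):
    # shares: each firm's market share, in percent.
    # holdings: one list per investor giving that investor's fractional
    # stake in each firm (only large, non-diffuse holders are counted).
    # For each ordered pair of firms (j, k != j), add
    #   s_j * s_k * (sum_i b_ij * b_ik) / (sum_i b_ij ** 2).
    n = len(shares)
    total = 0.0
    for j in range(n):
        denom = sum(h[j] ** 2 for h in holdings)
        for k in range(n):
            if k != j:
                numer = sum(h[j] * h[k] for h in holdings)
                total += shares[j] * shares[k] * numer / denom
    return total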

Some common ownership critics, such as Einer Elhauge, have taken those results and suggested modifying antitrust rules to incorporate the MHHIΔ into the HHI guidelines above. For instance, Elhauge writes (p. 1303):

Accordingly, the federal agencies can and should challenge any stock acquisitions that have produced, or are likely to produce, anti-competitive horizontal shareholdings. Given their own guidelines and the empirical results summarized in Part I, they should investigate any horizontal stock acquisitions that have created, or would create, a ΔMHHI of over 200 in a market with an MHHI over 2500, in order to determine whether those horizontal stock acquisitions raised prices or are likely to do so.

Elhauge, like many others, couches the discussion of MHHI and MHHIΔ in the context of HHI values, as though the additive nature of MHHI means such a context makes sense. And if the examples are carefully chosen, the numbers even seem to make sense. For instance, even in our paper (page 30), we give a few examples to illustrate some of the endogeneity problems with MHHIΔ:

For example, suppose again that five institutional investors hold equal stakes (say, 3%) of each airline servicing a market and that the airlines have no other significant shareholders.  If there are two airlines servicing the market and their market shares are equivalent, HHI will be 5000, MHHI∆ will be 5000, and MHHI (HHI + MHHI∆) will be 10000.  If a third airline enters and grows so that the three airlines have equal market shares, HHI will drop to 3333, MHHI∆ will rise to 6667, and MHHI will remain constant at 10000.  If a fourth airline enters and the airlines split the market evenly, HHI will fall to 2500, MHHI∆ will rise further to 7500, and MHHI will again total 10000.
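Plugging that example into the sketches above reproduces the paper’s numbers:

for n in (2, 3, 4):
    shares = [100 / n] * n                     # airlines split the market evenly
    holdings = [[0.03] * n for _ in range(5)]  # five investors, 3% of each airline
    print(n, round(hhi(shares)), round(mhhi_delta(shares, holdings)))
# 2 airlines: HHI 5000, MHHI-delta 5000
# 3 airlines: HHI 3333, MHHI-delta 6667
# 4 airlines: HHI 2500, MHHI-delta 7500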

But do MHHI and MHHI∆ really fit so neatly into the HHI framework? Sadly–and worryingly–no, not at all.

The Policy Problem

There is a significant problem with simply importing MHHIΔ into the HHI framework. Unlike HHI, from which we can infer something about the market based on the nominal value of the measure, MHHIΔ has no established intuitive or theoretical grounding. In fact, MHHIΔ has no intuitively meaningful mathematical bounds from which to draw inferences about “how big is big?”, a fundamental problem for antitrust policy.

This is especially true within the range of cross-shareholding values we’re talking about in the common ownership debate. To illustrate just how big a problem this is, consider a constrained optimization of MHHI based on parameters that are not at all unreasonable relative to hypothetical examples cited in the literature:

  • Four competing firms in the market, each constrained to a market share of at least 5%, with the four shares summing to 1 (or 100%).
  • Five institutional investors, each of which can own no more than 5% of the outstanding shares of any individual firm, with no restrictions on holdings across firms.
  • The remaining outstanding shares are assumed to be diffusely owned (i.e., no other large shareholder in any firm).

With only these modest restrictions on market share and common ownership, what’s the maximum potential value of MHHI? A mere 26,864,516,491, with an MHHI∆ of 26,864,513,774 and HHI of 2,717.

That’s right, over 26.8 billion. To reach such an astronomical number, what are the parameter values? The four firms split the market with 33, 31.7, 18.3, and 17% shares, respectively. Investor 1 owns 2.6% of the largest firm (by market share) while Investors 2-5 each own between 4.5 and 5% of the largest firm. Investors 1 and 2 own 5% of the smallest firm, while Investors 3 and 4 own 3.9% and Investor 5 owns a minuscule (0.0006%) share. Investor 2 is the only investor with any holdings (a tiny 0.0000004% each) in the two middling firms. These are not unreasonable numbers by any means, but the MHHI∆ surely is–especially from a policy perspective.
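The mechanism behind that astronomical number is easy to isolate with the sketch above in a deliberately stylized two-firm version (an illustration of the mechanism, not a reproduction of the four-firm optimum): the denominator of MHHIΔ’s cross-ownership ratio is the sum of squared stakes in the firm being influenced, so a firm whose only counted shareholder holds a minuscule stake sends the ratio through the roof.

shares = [50, 50]
holdings = [[0.05, 0.00000001]]  # one investor: 5% of firm A, a 0.000001% sliver of firm B
print(mhhi_delta(shares, holdings))  # roughly 12.5 billion, driven by that one tiny stake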

So if MHHI∆ can range from near zero to as much as 26.8 billion within reasonable ranges of market shares and shareholdings, what should we make of Elhauge’s proposal that mergers be scrutinized for increasing MHHI∆ by 200 points if the MHHI is 2,500 or more? We argue that such an arbitrary policy model is not only unfounded empirically, but is completely devoid of substantive reason or relevance.

The DOJ’s Horizontal Merger Guidelines above indicate that the antitrust agencies adopted the HHI benchmarks for review “[b]ased on their experience”.  In the 1982 and 1984 Guidelines, the agencies adopted HHI standards of 1,000 and 1,800, compared to the current 1,500 and 2,500 levels, in determining whether an industry is concentrated and a merger deserves additional scrutiny. These changes reflect decades of case reviews relating market structure to likely competitive behavior and consumer harm.

We simply do not know enough yet empirically about the relation between MHHI∆ and benchmarks of competitive behavior and consumer welfare to make any intelligent policies based on that metric–even if the underlying argument had any substantive theoretical basis, which we doubt. This is just one more reason we believe the best response to the common ownership problem is to do nothing, at least until we have a theoretically, and empirically, sound basis on which to make intelligent and informed policy decisions and frameworks.

The Federal Trade Commission will soon hold hearings on Competition and Consumer Protection in the 21st Century.  The topics to be considered include:

  1. The state of antitrust and consumer protection law and enforcement, and their development, since the [1995] Pitofsky hearings;
  2. Competition and consumer protection issues in communication, information and media technology networks;
  3. The identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring “platform” businesses;
  4. The intersection between privacy, big data, and competition;
  5. The Commission’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters;
  6. Evaluating the competitive effects of corporate acquisitions and mergers;
  7. Evidence and analysis of monopsony power, including but not limited to, in labor markets;
  8. The role of intellectual property and competition policy in promoting innovation;
  9. The consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics;
  10. The interpretation and harmonization of state and federal statutes and regulations that prohibit unfair and deceptive acts and practices; and
  11. The agency’s investigation, enforcement and remedial processes.

The Commission has solicited comments on each of these main topics and a number of subtopics.  Initial comments are due today, but comments will also be accepted at two other times.  First, before each scheduled hearing on a topic, the Commission will accept comments on that particular matter.  In addition, the Commission will accept comments at the end of all the hearings.

Over the weekend, Mike Sykuta and I submitted a comment on topic 6, “evaluating the competitive effects of corporate acquisitions and mergers.”  We addressed one of the subtopics the FTC will consider: “the analysis of acquisitions and holding of a non-controlling ownership interest in competing companies.”

Here’s our comment, with a link to our working paper on the topic of common ownership by institutional investors:

To Whom It May Concern:

We are grateful for the opportunity to respond to the U.S. Federal Trade Commission’s request for comment on its upcoming hearings on Competition and Consumer Protection in the 21st Century. We are professors of law (Lambert) and economics (Sykuta) at the University of Missouri. We wish to comment on Topic 6, “evaluating the competitive effects of corporate acquisitions and mergers” and specifically on Subtopic 6(c), “the analysis of acquisitions and holding of a non-controlling ownership interest in competing companies.”

Recent empirical research purports to demonstrate that institutional investors’ “common ownership” of small stakes in competing firms causes those firms to compete less aggressively, injuring consumers. A number of prominent antitrust scholars have cited this research as grounds for limiting the degree to which institutional investors may hold stakes in multiple firms that compete in any concentrated market. In our recent working paper, The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms, which we submit along with these comments, we contend that the purported competitive problem is overblown and that the proposed solutions would reduce overall social welfare.

With respect to the purported problem, our paper shows that the theory of anticompetitive harm from institutional investors’ common ownership is implausible and that the empirical studies supporting the theory are methodologically unsound. The theory fails to account for the fact that intra-industry diversified institutional investors are also inter-industry diversified, and it rests upon unrealistic assumptions about managerial decision-making. The empirical studies purporting to demonstrate anticompetitive harm from common ownership are deficient because they inaccurately assess institutional investors’ economic interests and employ an endogenous measure that precludes causal inferences.

Even if institutional investors’ common ownership of competing firms did soften market competition somewhat, the proposed policy solutions would themselves create welfare losses that would overwhelm any social benefits they secured. The proposed policy solutions would create tremendous new decision costs for business planners and adjudicators and would raise error costs by eliminating welfare-enhancing investment options and/or exacerbating corporate agency costs.

In light of these problems with the purported problem and shortcomings of the proposed solutions, the optimal regulatory approach—at least, on the current empirical record—is to do nothing about institutional investors’ common ownership of small stakes in competing firms.

Thank you for considering these comments and our attached paper. We would be happy to answer any questions you may have.

Sincerely,

Thomas A. Lambert, Wall Family Chair in Corporate Law and Governance, University of Missouri Law School;
Michael E. Sykuta, Associate Professor, Division of Applied Social Sciences, University of Missouri; Director, Contracting and Organizations Research Institute (CORI)

Kudos to the Commission for holding this important set of hearings.

This has been a big year for business in the courts. A U.S. district court approved the AT&T-Time Warner merger, the Supreme Court upheld Amex’s agreements with merchants, and a circuit court pushed back on the Federal Trade Commission’s vague and heavy-handed policing of companies’ consumer data safeguards.

These three decisions mark a new era in the intersection of law and economics.

AT&T-Time Warner

AT&T-Time Warner is a vertical merger, a combination of firms with a buyer-seller relationship. Time Warner creates and broadcasts content via outlets such as HBO, CNN, and TNT. AT&T distributes content via services such as DirecTV.

Economists see little risk to competition from vertical mergers, although there are some idiosyncratic circumstances in which competition could be harmed. Nevertheless, the U.S. Department of Justice went to court to block the merger.

The last time the government sued to block a vertical merger was more than 40 years ago, and the government lost. Since then, the government has relied on the threat of litigation to extract settlements from merging parties. For example, in the 1996 merger between Time Warner and Turner, the FTC required limits on how the new company could bundle HBO with less desirable channels and eliminated agreements that allowed TCI (a cable company that partially owned Turner) to carry Turner channels at preferential rates.

With AT&T-Time Warner, the government took a big risk, and lost. It was a big risk because (1) it’s a vertical merger, and (2) the case against the merger was weak. The government’s expert argued consumers would face an extra 45 cents a month on their cable bills if the merger went through, but under cross-examination, conceded it might be as little as 13 cents a month. That’s a big difference and raised big questions about the reliability of the expert’s model.

Judge Richard J. Leon’s 170+ page ruling agreed that the government’s case was weak and its expert was not credible. While it’s easy to cheer a victory of big business over big government, the real victory was the judge’s heavy reliance on facts, data, and analysis rather than speculation over the potential for consumer harm. That’s a big deal and may pave the way for more vertical mergers.

Ohio v. American Express

The Supreme Court’s ruling in Amex may seem obscure. The court backed American Express Co.’s policy of preventing retailers from offering customers incentives to pay with cheaper cards.

Amex charges higher fees to merchants than do other cards, such as Visa, MasterCard, and Discover. Amex cardholders also have higher incomes and tend to spend more at stores than cardholders on other networks. And Amex offers its cardholders better benefits, services, and rewards than the other cards do. Merchants don’t like Amex because of the higher fees; customers prefer Amex because of the card’s perks.

Amex, and other card companies, operate in what is known as a two-sided market. Put simply, they have two sets of customers: merchants who pay swipe fees, and consumers who pay fees and interest.

Part of Amex’s agreement with merchants is an “anti-steering” provision that bars merchants from offering discounts for using non-Amex cards. The U.S. Justice Department and a group of states sued the company, alleging the Amex rules limited merchants’ ability to reduce their costs from accepting credit cards, which meant higher retail prices. Amex argued that the higher prices charged to merchants were kicked back to its cardholders in the form of more and better perks.

The Supreme Court found that the Justice Department and the states focused exclusively on one side (merchant fees) of the two-sided market. The Court said the government can’t meet its burden by showing some effect on some part of the market. Instead, it must demonstrate an “increased cost of credit card transactions … reduced number of credit card transactions, or otherwise stifled competition.” The government could not prove any of those things.

We live in a world of two-sided markets. Amazon may be the biggest two-sided market in the history of the world, linking buyers and sellers. Smartphones such as iPhones and Android devices are two-sided markets, linking consumers with app developers. The Supreme Court’s ruling in Amex sets a standard for how antitrust law should treat the economics of two-sided markets.

LabMD

LabMD is another matter that seems obscure, but could have big impacts on the administrative state.

Since the early 2000s, the FTC has brought charges against more than 150 companies alleging they had bad security or privacy practices. LabMD was one of them; its computer system was compromised by professional hackers in 2008. The FTC claimed that LabMD’s failure to adequately protect customer data was an “unfair” business practice.

Challenging the FTC can get very expensive, and the agency used the threat of litigation to secure settlements from dozens of companies. It then used those settlements to convince everyone else that they constituted binding law and enforceable security standards.

Because no one ever forced the FTC to defend what it was doing in court, the FTC’s assertion of legal authority became a self-fulfilling prophecy. LabMD, however, chose to challenge the FTC. The fight drove LabMD out of business, but the public interest law firm Cause of Action and lawyers at Ropes & Gray took the case on a pro bono basis.

The 11th Circuit Court of Appeals ruled the FTC’s approach to developing security standards violates basic principles of due process. The court said the FTC’s basic approach—in which the FTC tries to improve general security practices by suing companies that experience security breaches—violates the basic legal principle that the government can’t punish someone for conduct that the government hasn’t previously explained is problematic.

My colleague at ICLE observes that the lesson to learn from LabMD isn’t about the illegitimacy of the FTC’s approach to internet privacy and security. Instead, it is that the legitimacy of the administrative state is premised on courts placing a check on abusive regulators.

The lessons learned from these three recent cases reflect a profound shift in thinking about the laws governing economic activity:

  • AT&T-Time Warner indicates that facts matter. Mere speculation of potential harms will not satisfy the court.
  • Amex highlights the growing role two-sided markets play in our economy and provides a framework for evaluating competition in these markets.
  • LabMD is a small step in reining in the administrative state. Regulations must be scrutinized before they are imposed and enforced.

In some ways none of these decisions are revolutionary. Instead, they reflect an evolution toward greater transparency in how the law is to be applied and greater scrutiny over how the regulations are imposed.


The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke of the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from its being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to its being about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.”  This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseum), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’ authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) also targets conduct that is “likely to cause” harm.

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like something driven by public policy, and not so much like the mere enforcement of existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case does not well describe how to make that distinction. The opinion is more than a bit muddled and difficult to clearly interpret. Nonetheless, reading the court’s dicta in Part II is instructive. It’s clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered – that is, those well-established legal standards can be changed – through adjudication. It is difficult to square those two propositions. Regardless, this is the guidance the court has given us.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies’ developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, they can hardly provide guidance for private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.  

The principles set out in Chenery II, of course, do not answer the data-security law question whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court sidestepping this question, and asking whether the FTC sufficiently defined what it was doing in the first place.  

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates the vagueness that the Court in Chenery II rejected and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally-binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard than that faced in district court, and undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

One of the hottest antitrust topics of late has been institutional investors’ “common ownership” of minority stakes in competing firms.  Writing in the Harvard Law Review, Einer Elhauge proclaimed that “[a]n economic blockbuster has recently been exposed”—namely, “[a] small group of institutions has acquired large shareholdings in horizontal competitors throughout our economy, causing them to compete less vigorously with each other.”  In the Antitrust Law Journal, Eric Posner, Fiona Scott Morton, and Glen Weyl contended that “the concentration of markets through large institutional investors is the major new antitrust challenge of our time.”  Those same authors took to the pages of the New York Times to argue that “[t]he great, but mostly unknown, antitrust story of our time is the astonishing rise of the institutional investor … and the challenge that it poses to market competition.”

Not surprisingly, these scholars have gone beyond just identifying a potential problem; they have also advocated policy solutions.  Elhauge has called for allowing government enforcers and private parties to use Section 7 of the Clayton Act, the provision primarily used to prevent anticompetitive mergers, to police institutional investors’ ownership of minority positions in competing firms.  Posner et al., concerned “that private litigation or unguided public litigation could cause problems because of the interactive nature of institutional holdings on competition,” have proposed that federal antitrust enforcers adopt an enforcement policy that would encourage institutional investors either to avoid common ownership of firms in concentrated industries or to limit their influence over such firms by refraining from voting their shares.

The position of these scholars is thus (1) that common ownership by institutional investors significantly diminishes competition in concentrated industries, and (2) that additional antitrust intervention—beyond generally applicable rules on, say, hub-and-spoke conspiracies and anticompetitive information exchanges—is appropriate to prevent competitive harm.

Mike Sykuta and I have recently posted a paper taking issue with this two-pronged view.  With respect to the first prong, we contend that there are serious problems with both the theory of competitive harm stemming from institutional investors’ common ownership and the empirical evidence that has been marshalled in support of that theory.  With respect to the second, we argue that even if competition were softened by institutional investors’ common ownership of small minority interests in competing firms, the unintended negative consequences of an antitrust fix would outweigh any benefits from such intervention.

Over the next few days, we plan to unpack some of the key arguments in our paper, The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.  In the meantime, we encourage readers to download the paper and send us any comments.


Following is the (slightly expanded and edited) text of my remarks from the panel, Antitrust and the Tech Industry: What Is at Stake?, hosted last Thursday by CCIA. Bruce Hoffman (keynote), Bill Kovacic, Nicolas Petit, and Cristina Caffarra also spoke. If we’re lucky Bruce will post his remarks on the FTC website; they were very good.

(NB: Some of these comments were adapted (or lifted outright) from a forthcoming Cato Policy Report cover story co-authored with Gus Hurwitz, so Gus shares some of the credit/blame.)


The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult for some to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of industry — and the tech industry in particular — are framed in antitrust terms. Take Senator Elizabeth Warren, for example:

[T]oday, in America, competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.

And she is not alone. A growing chorus of advocates is now calling for invasive, “public-utility-style” regulation or even the dissolution of some of the world’s most innovative companies, essentially because they are “too big.”

According to critics, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of democracy — because of their size. What is needed, they say, is industrial policy that shackles large companies or effectively mandates smaller firms in order to keep their economic and political power in check.

But consider the relationship between firm size and political power and democracy.

Say you’re successful in reducing the size of today’s largest tech firms and in deterring the creation of new, very-large firms: What effect might we expect this to have on their political power and influence?

For the critics, the effect is obvious: A re-balancing of wealth and thus the reduction of political influence away from Silicon Valley oligarchs and toward the middle class — the “rudder that steers American democracy on an even keel.”

But consider a few (and this is by no means all) countervailing points:

To begin, at the margin, if you limit firm growth as a means of competing with rivals, you make competition through political influence correspondingly more important. Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions, and rent-seeking by smaller firms could both be more prevalent and, paradoxically, ultimately lead to increased concentration.

Next, by imbuing antitrust with an ill-defined set of vague political objectives, you also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so.

And finally, if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? All of a sudden the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might even find, again, that we end up with even more concentration because the exceptions could subsume the rules.

All of which of course highlights the fundamental, underlying problem: If you make antitrust more political, you’ll get less democratic, more politically determined, results — precisely the opposite of what proponents claim to want.

Then there’s democracy, and calls to break up tech in order to save it. Calls to do so are often made with reference to the original intent of the Sherman Act and Louis Brandeis and his “curse of bigness.” But intentional or not, these are rallying cries for the assertion, not the restraint, of political power.

The Sherman Act’s origin was ambivalent: although it was intended to proscribe business practices that harmed consumers, it was also intended to allow politically-preferred firms to maintain high prices in the face of competition from politically-disfavored businesses.

The years leading up to the adoption of the Sherman Act in 1890 were characterized by dramatic growth in the efficiency-enhancing, high-tech industries of the day. For many, the purpose of the Sherman Act was to stem this growth: to prevent low prices — and, yes, large firms — from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of Trans-Missouri Freight, one of the early Supreme Court decisions applying the Act.

Left to the courts, however, the Sherman Act didn’t quite do the trick. By 1911 (in Standard Oil and American Tobacco) — and reflecting consumers’ preferences for low prices over smaller firms — only “unreasonable” conduct was actionable under the Act. As one of the prime intellectual engineers behind the Clayton Antitrust Act and the Federal Trade Commission in 1914, Brandeis played a significant role in the (partial) legislative and administrative overriding of the judiciary’s excessive support for economic efficiency.

Brandeis was motivated by the belief that firms could become large only by illegitimate means and by deceiving consumers. But Brandeis was no advocate for consumer sovereignty. In fact, consumers, in Brandeis’ view, needed to be saved from themselves because they were, at root, “servile, self-indulgent, indolent, ignorant.”

There’s a lot that today we (many of us, at least) would find anti-democratic in the underpinnings of progressivism in US history: anti-consumerism; racism; elitism; a belief in centrally planned, technocratic oversight of the economy; promotion of social engineering, including through eugenics; etc. The aim of limiting economic power was manifestly about stemming the threat it posed to powerful people’s conception of what political power could do: to mold and shape the country in their image — what economist Thomas Sowell calls “the vision of the anointed.”

That may sound great when it’s your vision being implemented, but today’s populist antitrust resurgence comes while Trump is in the White House. It’s baffling to me that so many would expand and then hand over the means to design the economy and society in their image to antitrust enforcers in the executive branch and presidentially appointed technocrats.

Throughout US history, it is the courts that have often been the bulwark against excessive politicization of the economy, and it was the courts that shepherded the evolution of antitrust away from its politicized roots toward rigorous, economically grounded policy. And it was progressives like Brandeis who worked to take antitrust away from the courts. Now, with efforts like Senator Klobuchar’s merger bill, the “New Brandeisians” want to rein in the courts again — to get them out of the way of efforts to implement their “big is bad” vision.

But the evidence that big is actually bad, least of all on those non-economic dimensions, is thin and contested.

While Zuckerberg is grilled in Congress over perceived, endemic privacy problems, politician after politician and news article after news article rushes to assert that the real problem is Facebook’s size. Yet there is no convincing analysis (maybe no analysis of any sort) that connects its size with the problem, or that evaluates whether the asserted problem would actually be cured by breaking up Facebook.

Barry Lynn claims that the origins of antitrust are in the checks and balances of the Constitution, extended to economic power. But if that’s right, then the consumer welfare standard and the courts are the only things actually restraining the disruption of that order. If there may be gains to be had from tweaking the minutiae of the process of antitrust enforcement and adjudication, by all means we should have a careful, lengthy discussion about those tweaks.

But throwing the whole apparatus under the bus for the sake of an unsubstantiated, neo-Brandeisian conception of what the economy should look like is a terrible idea.

The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect, without authorization, information on 50 million Facebook users. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.

That is all fine and well, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.

I Have Seen the Enemy and the Enemy is Us

Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a bit deeper to understand the mechanics of what actually happened. Once one does, the story becomes both less remarkable and more interesting.

(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)

The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profiles about their friends.

This general practice is reasonably well known. Any time a quiz or game like this has its moment on Facebook, it is also accompanied by discussion of how the quiz or game is likely being used to harvest data about users. The terms of use of these quizzes and games almost always disclose that such information is being collected. More telling, any time a user posts a link to one of these quizzes or games, some friend will invariably leave a comment warning about those terms of service and those data harvesting practices.

There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts have preceded Cambridge Analytica; a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that, by all accounts, the data collected proved to be of limited use in elections generally, or that when Cambridge Analytica started working for the Trump campaign it was tasked with more mundane work that didn’t make use of this data.

More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.

There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.

Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.

The answer is the same in the online world as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchantly, it raises questions about users’ ability to delineate or allocate their privacy interests differently than the platforms allow, particularly where a given platform forecloses the delineation or allocation of rights that users would prefer.

The Badness of the Open Graph Idea

One of the standard responses to concerns about how platforms delineate, and allow users to allocate, their privacy interests is that competition among platforms would promote desirable outcomes – and that the relatively limited, monopolistic competition we see among firms like Facebook is one of the reasons consumers today have relatively poor control over their information.

The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. For instance, the network effects inherent in these markets suggest that promoting competition may not, in fact, improve consumer outcomes. Competition could push firms toward less consumer-friendly privacy positions if doing so allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.

But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms, or to use the features of different platforms, through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook either to make users’ data exportable in a standardized form, so that users could easily migrate it to other platforms, or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.
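
To see concretely what such proposals entail, here is a minimal sketch of a standardized export format. Everything in it – the PortableProfile schema, the export function, the field names – is a hypothetical illustration for this post, not any platform’s actual API.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class PortableProfile:
    """A hypothetical standardized, portable slice of the social graph."""
    user_id: str
    display_name: str
    posts: list = field(default_factory=list)       # the user's own content
    friend_ids: list = field(default_factory=list)  # edges into other people's data

def export_user_graph(profile: PortableProfile) -> str:
    """Serialize a user's slice of the graph into a bulk, machine-readable
    form that any rival platform (or any holder of an export token) could ingest."""
    return json.dumps(asdict(profile))

# The friend_ids field is the crux: a "portable" profile necessarily carries
# information about people who never asked to have their data exported.
blob = export_user_graph(PortableProfile(
    user_id="u123",
    display_name="Alice",
    posts=["attended the rally"],
    friend_ids=["u456", "u789"],
))
print(blob)
```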

In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.

If there is one lesson from the past decade more trenchant than that delineating privacy rights is difficult, it is that data security is even harder.

These last two points do not sum together well. The easier Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to obtain user information anonymously. Indeed, even if the API only allows trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to it.
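
A back-of-the-envelope calculation, with purely illustrative numbers, shows how quickly that multiplication bites: even if each authorized party is individually quite reliable, the odds that the data stays secure everywhere fall geometrically with the number of custodians.

```python
# If each authorized third party independently keeps exported data secure
# with probability p over some period, the chance that the data is never
# exfiltrated anywhere is p**n for n parties. (Illustrative numbers only.)
p = 0.99  # assumed per-party security rate, not an empirical estimate
for n in (1, 10, 50, 100):
    print(f"{n:>3} parties: P(no exfiltration) = {p ** n:.3f}")
# Prints roughly 0.990, 0.904, 0.605, and 0.366 respectively.
```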

The U.S. Federal Trade Commission’s (FTC) well-recognized expertise in assessing unfair or deceptive acts or practices can play a vital role in policing abusive broadband practices.  Unfortunately, however, because Section 5(a)(2) of the FTC Act exempts common carriers from the FTC’s jurisdiction, serious questions have been raised about the FTC’s authority to deal with unfair or deceptive practices in cyberspace that are carried out by common carriers, but involve non-common-carrier activity (in contrast, common carrier services have highly regulated terms and must be made available to all potential customers).

Commendably, the Ninth Circuit held on February 26, in FTC v. AT&T Mobility, that harmful broadband data throttling practices by a common carrier were subject to the FTC’s unfair acts or practices jurisdiction, because the common carrier exception is “activity-based,” and the practices in question did not involve common carrier services.  Key excerpts from the summary of the Ninth Circuit’s opinion follow:

The en banc court affirmed the district court’s denial of AT&T Mobility’s motion to dismiss an action brought by the Federal Trade Commission (“FTC”) under Section 5 of the FTC Act, alleging that AT&T’s data-throttling plan was unfair and deceptive. AT&T Mobility’s data-throttling is a practice by which the company reduced customers’ broadband data speed without regard to actual network congestion. Section 5 of the FTC Act gives the agency enforcement authority over “unfair or deceptive acts or practices,” but exempts “common carriers subject to the Acts to regulate commerce.” 15 U.S.C. § 45(a)(1), (2). AT&T moved to dismiss the action, arguing that it was exempt from FTC regulation under Section 5. . . .

The en banc court held that the FTC Act’s common carrier exemption was activity-based, and therefore the phrase “common carriers subject to the Acts to regulate commerce” provided immunity from FTC regulation only to the extent that a common carrier was engaging in common carrier services. In reaching this conclusion, the en banc court looked to the FTC Act’s text, the meaning of “common carrier” according to the courts around the time the statute was passed in 1914, decades of judicial interpretation, the expertise of the FTC and Federal Communications Commission (“FCC”), and legislative history.

Addressing the FCC’s order, issued on March 12, 2015, reclassifying mobile data service from a non-common carriage service to a common carriage service, the en banc court held that the prospective reclassification order did not rob the FTC of its jurisdiction or authority over conduct occurring before the order. Accordingly, the en banc court affirmed the district court’s denial of AT&T’s motion to dismiss.

A key introductory paragraph in the Ninth Circuit’s opinion underscores the importance of the court’s holding for sound regulatory policy:

This statutory interpretation [that the common carrier exception is activity-based] also accords with common sense. The FTC is the leading federal consumer protection agency and, for many decades, has been the chief federal agency on privacy policy and enforcement. Permitting the FTC to oversee unfair and deceptive non-common-carriage practices of telecommunications companies has practical ramifications. New technologies have spawned new regulatory challenges. A phone company is no longer just a phone company. The transformation of information services and the ubiquity of digital technology mean that telecommunications operators have expanded into website operation, video distribution, news and entertainment production, interactive entertainment services and devices, home security and more. Reaffirming FTC jurisdiction over activities that fall outside of common-carrier services avoids regulatory gaps and provides consistency and predictability in regulatory enforcement.

But what can the FTC do about unfair or deceptive practices affecting broadband services, offered by common carriers, subsequent to the FCC’s 2015 reclassification of mobile data service as a common carriage service?  The FTC will be able to act, assuming that the Federal Communications Commission’s December 2017 rulemaking, reclassifying mobile broadband Internet access service as not involving a common carrier service, passes legal muster (as it should).  In order to avoid any legal uncertainty, however, Congress could take the simple step of eliminating the FTC Act’s common carrier exception – an outdated relic that threatens to generate disparate enforcement outcomes toward the same abusive broadband practice, based merely upon whether the parent company is deemed a “common carrier.”

On January 23rd, the Heritage Foundation convened its Fourth Annual Antitrust Conference, “Trump Antitrust Policy after One Year.”  The entire Conference can be viewed online (here).  The Conference featured a keynote speech, followed by three separate panels that addressed developments at the Federal Trade Commission (FTC), at the Justice Department’s Antitrust Division (DOJ), and in the international arena – developments that can have a serious effect on the country’s economic growth and the expansion of its business and industrial sectors.

  1. Professor Bill Kovacic’s Keynote Speech

The conference started with a bang, featuring a stellar keynote speech (complemented by excellent PowerPoint slides) by GW Professor and former FTC Chairman Bill Kovacic, who also serves as a Member of the Board of the UK Government’s Competition and Markets Authority.  Kovacic began by noting the claim by senior foreign officials that “nothing is happening” in U.S. antitrust enforcement.  Although this perception may be inaccurate, Kovacic argued that it colors foreign officials’ dealings with the U.S., and continues a preexisting trend of diminishing U.S. influence on foreign governments’ antitrust enforcement systems.  (It is widely believed that the European antitrust model is dominant internationally.)

In order to enhance the perceived effectiveness (and prestige) of American antitrust on the global plane, American antitrust enforcers should, according to Kovacic, adopt a positive agenda citing specific priorities for action (as opposed to a “negative approach” focused on what actions will not be taken) – an orientation which former FTC Chairman Muris employed successfully in the last Bush Administration.  The positive engagement themes should be communicated powerfully to the public here and abroad through active public engagement by agency officials.  Agency strengths, such as FTC market studies and economic expertise, should be highlighted.

In addition, the FTC and Justice Department should act more like an “antitrust policy joint venture” at home and abroad, extending cooperation beyond guidelines to economic research, studies, and other aspects of their missions.  This would showcase the outstanding capabilities of the U.S. public antitrust enterprise.

  2. FTC Panel

A panel on FTC developments (moderated by Dr. Jeff Eisenach, Managing Director of NERA Economic Consulting and former Chief of Staff to FTC Chairman James Miller) followed Kovacic’s presentation.

Acting Bureau of Competition Chief Bruce Hoffman began by stressing that FTC antitrust enforcers are busier than ever, with a number of important cases in litigation and resources stretched to the limit.  Thus, FTC enforcement is neither weak nor timid – to the contrary, it is quite vigorous.  Hoffman was surprised by recent political attacks on the 40-year bipartisan consensus regarding the economics-centered consumer welfare standard that has set the direction of U.S. antitrust enforcement.  According to Hoffman, noted economist Carl Shapiro has debunked the notion that supposed increases in industry concentration, even at the national level, are meaningful.  In short, there is no empirical basis to dethrone the consumer welfare standard and replace it with something else.

Other former senior FTC officials engaged in a discussion following Hoffman’s remarks.  Orrick Partner Alex Okuliar, a former Attorney-Advisor to FTC Acting Chairman Maureen Ohlhausen, noted Ohlhausen’s emphasis on “regulatory humility” (recognizing the inherent limitations of regulation and acting in accordance with those limits) and on the work of the FTC’s Economic Liberty Task Force, which centers on removing unnecessary regulatory restraints on competition (such as excessive occupational licensing requirements).

Wilson Sonsini Partner Susan Creighton, a former Director of the FTC’s Bureau of Competition, discussed the importance of economics-based “technocratic antitrust” (applied by sophisticated judges) for a sound and manageable antitrust system – something still not well understood by many foreign antitrust agencies.  Creighton had three reform suggestions for the Trump Administration:

(1) the DOJ and the FTC should stress the central role of economics in the institutional arrangements of antitrust (DOJ’s “economics structure” is a bit different than the FTC’s);

(2) both agencies should send relatively more economists to represent the United States at antitrust meetings abroad, thereby enabling the agencies to place a greater stress on the importance of economic rigor in antitrust enforcement; and

(3) the FTC and the DOJ should establish a task force to jointly carry out economics research and hone a consistent economic policy message.

Sidley Austin Partner Bill Blumenthal, a former FTC General Counsel, noted the problems of defining Trump FTC policy in the absence of new Trump FTC Commissioners.  Blumenthal noted that signs of a populist uprising against current antitrust norms extend beyond antitrust, and that the agencies may have to look to new unilateral conduct cases to show that they are “doing something.”  He added that the populist rejection of current economics-based antitrust analysis is intellectually incoherent: there is a tension, for example, between protecting consumers and protecting labor, since anti-consumer cartels may be beneficial to labor union interests.

In a follow-up roundtable discussion, Hoffman noted that theoretical “existence theorems” of anticompetitive harm that lack empirical support in particular cases are not administrable.  Creighton opined that, as an independent agency, the FTC may be a bit more susceptible to congressional pressure than DOJ.  Blumenthal stated that congressional interest may be able to trigger particular investigations, but it does not dictate outcomes.

  3. DOJ Panel

Following lunch, a panel of antitrust experts (moderated by Morgan Lewis Partner Hill Wellford, a former Chief of Staff to the Assistant Attorney General) addressed DOJ developments.

The current Principal Deputy Assistant Attorney General for Antitrust, Andrew Finch, began by stating that the three major Antitrust Division initiatives involve (1) intellectual property (IP), (2) remedies, and (3) criminal enforcement.  Assistant Attorney General Makan Delrahim’s November 2017 speech explained that antitrust should not undermine legitimate incentives of patent holders to maximize returns to their IP through licensing.  DOJ is looking into buyer and seller cartel behavior (including in standard setting) that could harm IP rights.  DOJ will work to streamline and improve consent decrees and other remedies, and make it easier to go after decree violations.  In criminal enforcement, DOJ will continue to go after “no employee poaching” employer agreements as criminal violations.

Former Assistant Attorney General Tom Barnett, a Covington & Burling Partner, noted that more national agencies are willing to intervene in international matters, leading to inconsistencies in results.  The International Competition Network is important, but major differences in rhetoric have created a sense that there is very little agreement among enforcers, although the reality may be otherwise.  Muted U.S. agency voices on the international plane and limited resources have proven unfortunate – the FTC needs to engage better in international discussions and needs new Commissioners.

Former Counsel to the Assistant Attorney General Eric Grannon, a White & Case Partner, made three specific comments:

(1) DOJ should look outside the career criminal enforcement bureaucracy and consider selecting someone with significant private sector experience as Deputy Assistant Attorney General for Criminal Enforcement;

(2) DOJ needs to go beyond merely focusing on metrics that show increased aggregate fines and jail time year-by-year (something is wrong if cartel activities and penalties keep rising despite the growing emphasis on inculcating an “anti-cartel culture” within firms); and

(3) DOJ needs to reassess its “amnesty plus” program, in which an amnesty applicant benefits by highlighting the existence of a second cartel in which it participates (non-culpable firms allegedly in the second cartel may be fingered, leading to unjustified potential treble damages liability for them in private lawsuits).

Grannon urged that DOJ hold a public workshop on the amnesty plus program in the coming year.  Grannon also argued against the classification of antitrust offenses as crimes of “moral turpitude” (moral turpitude offenses allow perpetrators to be excluded from the U.S. for 20 years).  Finally, as a good government measure, Grannon recommended that the Antitrust Division should post all briefs on its website, including those of opposing parties and third parties.

Baker Botts Partner Stephen Weissman, a former Deputy Director of the FTC’s Bureau of Competition, found a great deal of continuity in DOJ civil enforcement.  Nevertheless, he expressed surprise at Assistant Attorney General Delrahim’s recent remarks suggesting that DOJ might consider asking the Supreme Court to overturn the Illinois Brick ban on indirect purchaser suits under federal antitrust law.  Weissman noted the increased DOJ focus on the rights of IP holders, not implementers, and the beneficial emphasis on the importance of DOJ’s amicus program.

The following discussion among the panelists elicited agreement (Weissman and Barnett) that the business community needs more clear-cut guidance on vertical mergers (and perhaps on other mergers as well) and affirmative statements on DOJ’s plans.  DOJ was characterized as too heavy-handed in setting timing agreements in mergers.  The panelists were in accord that enforcers should continue to emphasize the American consumer welfare model of antitrust.  The panelists believed the U.S. gets it right in stressing jail time for cartelists and in detrebling for amnesty applicants.  DOJ should, however, apply a proper dose of skepticism in assessing the factual content of proffers made by amnesty applicants.  Former enforcers saw no need to automatically grant markers to those applicants.  Andrew Finch returned to the topic of Illinois Brick, explaining that the Antitrust Modernization Commission had suggested reexamining that case’s bar on federal indirect purchaser suits.  In response to an audience question as to which agency should do internet oversight, Finch stressed that relevant agency experience and resources are assessed on a matter-specific basis.

  4. International Panel

The last panel of the afternoon, which focused on international developments, was moderated by Cadwalader Counsel (and former Attorney-Advisor to FTC Chairman Tim Muris) Bilal Sayyed.

Deputy Assistant Attorney General for International Matters, Roger Alford, began with an overview of trade and antitrust considerations.  Alford explained that DOJ adds a consumer welfare and economics perspective to Trump Administration trade policy discussions.  On the international plane, DOJ supports principles of non-discrimination, strong antitrust enforcement, and opposition to national champions, plus the addition of a new competition chapter in “NAFTA 2.0” negotiations.  The revised 2017 DOJ International Antitrust Guidelines dealt with economic efficiency and the consideration of comity.  DOJ and the Executive Branch will take into account the degree of conflict with other jurisdictions’ laws (fleshing out comity analysis) and will push case coordination as well as policy coordination.  DOJ is considering new ideas for dealing with due process internationally, in addition to working within the International Competition Network to develop best practices.  Better international coordination is also needed on the cartel leniency program.

Next, Koren Wong-Ervin, Qualcomm Director of IP and Competition Policy (and former Director of the Scalia Law School’s Global Antitrust Institute), stated that the Korea Fair Trade Commission had ignored comity and guidance from U.S. expert officials in imposing global licensing remedies and penalties on Qualcomm.  The U.S. Government is moving toward a sounder approach on the evaluation of standard essential patents, as is Europe, with a move away from required component-specific patent licensing royalty determinations.  More generally, a return to an economic effects-based approach to IP licensing is important.  Comprehensive revisions to China’s Anti-Monopoly Law, now under consideration, will have enormous public policy importance.  Balanced IP licensing rules, with courts as gatekeepers, are important.  Chinese law still contains overly broad essential facilities and deception provisions, and IP price regulation proposals are very troublesome.  New FTC Commissioners are needed, accompanied by robust budget support for international work.

Latham & Watkins’ Washington, D.C. Managing Partner Michael Egge focused on the substantial divergence in merger enforcement practice around the world.  The cost of compliance imposed by European Commission pre-notification filing requirements is overly high; this pre-notification practice is not written down and has escaped needed public attention.  Chinese merger filing practice (“China is struggling to cope”) features a costly 1-3 month pre-filing acceptance period, and merger filing requirements in India are particularly onerous.

Jim Rill, former Assistant Attorney General for Antitrust and former ABA Antitrust Section Chair, stressed that due process improvements can help promote substantive antitrust convergence around the globe.  Rill stated that U.S. Government officials, with the assistance of private sector stakeholders, need a mechanism (a “report card”) to measure foreign agencies’ implementation of OECD antitrust recommendations.  U.S. Government officials should consider participating in foreign proceedings where the denial of due process is blatant, and where foreign governments indirectly dictate a particular harmful policy result.  Multilateral review of international agreements is valuable as well.  The comity principles found in the 1991 EU-U.S. Antitrust Cooperation Agreement are quite useful.  Trade remedies in antitrust agreements are not a competition solution, and are not helpful.  More and better training programs for foreign officials are called for; International Chamber of Commerce, American Bar Association, and U.S. Chamber of Commerce principles are generally sound.  Some consideration should be given to old ICPAC recommendations, such as (perhaps) the development of a common merger notification form for use around the world.

Douglas Ginsburg, Senior Judge (and former Chief Judge) of the U.S. Court of Appeals for the D.C. Circuit, and former Assistant Attorney General for Antitrust, spoke last, focusing on the European Court of Justice’s Intel decision, which laid bare the deficiencies in the European Commission’s finding of a competition law violation in that matter.

In a brief closing roundtable discussion, Roger Alford suggested possible greater involvement by business community stakeholders in training foreign antitrust officials.

  5. Conclusion

Heritage Foundation host Alden Abbott closed the proceedings with a brief capsule summary of panel highlights.  As in prior years, the Fourth Annual Heritage Antitrust Conference generated spirited discussion among the brightest lights in the American antitrust firmament on recent developments and likely trends in antitrust enforcement and policy development, here and abroad.