
Since the LabMD decision, in which the Eleventh Circuit Court of Appeals told the FTC that its orders were unconstitutionally vague, the FTC has been put on notice that it needs to reconsider how it develops and substantiates its claims in data security enforcement actions brought under Section 5. 

Thus, on January 6, the FTC announced on its blog that it now has “New and improved FTC data security orders: Better guidance for companies, better protection for consumers.” However, the changes the Commission highlights address only a small part of what we have previously criticized about its “common law” of data security (see here and here).

While the new orders do list more specific requirements to help explain what the FTC believes is a “comprehensive data security program”, there is still no legal analysis in either the orders or the complaints that would give companies fair notice of what the law requires. Furthermore, nothing about the underlying FTC process has changed, which means there is still enormous pressure for companies to settle rather than litigate the contours of what “reasonable” data security practices look like. Thus, despite the Commission’s optimism, the recent orders and complaints do little to nothing to remedy the problems that plague the Commission’s data security enforcement program.

The changes

In his blog post, the director of the FTC’s Bureau of Consumer Protection describes how the new orders in data security enforcement actions are more specific, with one of the main goals being to give better guidance to businesses trying to follow the law.

Since the early 2000s, our data security orders had contained fairly standard language. For example, these orders typically required a company to implement a comprehensive information security program subject to a biennial outside assessment. As part of the FTC’s Hearings on Competition and Consumer Protection in the 21st Century, we held a hearing in December 2018 that specifically considered how we might improve our data security orders. We were also mindful of the 11th Circuit’s 2018 LabMD decision, which struck down an FTC data security order as unenforceably vague.

Based on this learning, in 2019 the FTC made significant improvements to its data security orders. These improvements are reflected in seven orders announced this year against an array of diverse companies: ClixSense (pay-to-click survey company), i-Dressup (online games for kids), DealerBuilt (car dealer software provider), D-Link (Internet-connected routers and cameras), Equifax (credit bureau), Retina-X (monitoring app), and Infotrax (service provider for multilevel marketers)…

[T]he orders are more specific. They continue to require that the company implement a comprehensive, process-based data security program, and they require the company to implement specific safeguards to address the problems alleged in the complaint. Examples have included yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption. These requirements not only make the FTC’s expectations clearer to companies, but also improve order enforceability.

Why the FTC’s data security enforcement regime fails to provide fair notice or develop law (and is not like the common law)

While these changes are long overdue, they are just one step toward the much-needed reform of how the FTC prosecutes cases under its unfairness authority, particularly in the realm of data security. To see why the changes the Commission is undertaking are insufficient, it helps to understand exactly why the historical failures of the FTC’s process are problematic.

For instance, Geoffrey Manne and I previously highlighted the various ways the FTC’s data security consent order regime fails in comparison with the common law:

In Lord Mansfield’s characterization, “the common law ‘does not consist of particular cases, but of general principles, which are illustrated and explained by those cases.’” Further, the common law is evolutionary in nature, with the outcome of each particular case depending substantially on the precedent laid down in previous cases. The common law thus emerges through the accretion of marginal glosses on general rules, dictated by new circumstances. 

The common law arguably leads to legal rules with at least two substantial benefits—efficiency and predictability or certainty. The repeated adjudication of inefficient or otherwise suboptimal rules results in a system that generally offers marginal improvements to the law. The incentives of parties bringing cases generally means “hard cases,” and thus judicial decisions that have to define both what facts and circumstances violate the law and what facts and circumstances don’t. Thus, a benefit of a “real” common law evolution is that it produces a body of law and analysis that actors can use to determine what conduct they can undertake without risk of liability and what they cannot. 

In the abstract, of course, the FTC’s data security process is neither evolutionary in nature nor does it produce such well-defined rules. Rather, it is a succession of wholly independent cases, without any precedent, narrow in scope, and binding only on the parties to each particular case. Moreover it is generally devoid of analysis of the causal link between conduct and liability and entirely devoid of analysis of which facts do not lead to liability. Like all regulation it tends to be static; the FTC is, after all, an enforcement agency, charged with enforcing the strictures of specific and little-changing pieces of legislation and regulation. For better or worse, much of the FTC’s data security adjudication adheres unerringly to the terms of the regulations it enforces with vanishingly little in the way of gloss or evolution. As such (and, we believe, for worse), the FTC’s process in data security cases tends to reject the ever-evolving “local knowledge” of individual actors and substitutes instead the inherently limited legislative and regulatory pronouncements of the past. 

By contrast, real common law, as a result of its case-by-case, bottom-up process, adapts to changing attributes of society over time, largely absent the knowledge and rent-seeking problems of legislatures or administrative agencies. The mechanism of constant litigation of inefficient rules allows the common law to retain a generally efficient character unmatched by legislation, regulation, or even administrative enforcement. 

Because the common law process depends on the issues selected for litigation and the effects of the decisions resulting from that litigation, both the process by which disputes come to the decision-makers’ attention, as well as (to a lesser extent, because errors will be corrected over time) the incentives and ability of the decision-maker to render welfare-enhancing decisions, determine the value of the common law process. These are decidedly problematic at the FTC.

In our analysis, we found the FTC’s process wanting compared to the institution of the common law. The incentives of the administrative complaint process put relatively greater pressure on companies to settle data security actions brought by the FTC than they would face from private litigants. This is because the FTC, as a public enforcer, can use its investigatory powers to bypass the normal discovery process to which private litigants are subject, and over which independent judges have authority.

In a private court action, plaintiffs can’t engage in discovery unless their complaint survives a motion to dismiss from the defendant. Discovery costs remain a major driver of settlements, so this important judicial review is necessary to make sure there is actually a harm present before putting those costs on defendants. 

Furthermore, the FTC can also bring cases in a Part III adjudicatory process, which starts in front of an administrative law judge (ALJ) but is then appealable to the FTC itself. Former Commissioner Joshua Wright noted in 2013 that “in the past nearly twenty years… after the administrative decision was appealed to the Commission, the Commission ruled in favor of FTC staff. In other words, in 100 percent of cases where the ALJ ruled in favor of the FTC, the Commission affirmed; and in 100 percent of the cases in which the ALJ ruled against the FTC, the Commission reversed.” That is, the FTC always rules in favor of itself on appeal when the ALJ finds there is no case, as the ALJ did in LabMD. The combination of investigation costs incurred before any complaint is even filed and the high likelihood of losing through several stages of litigation makes simply agreeing to a consent decree the intelligent business decision.

The results of this asymmetrical process show the FTC has not really been building a common law. In all but two cases (Wyndham and LabMD), the companies targeted for investigation by the FTC on data security enforcement have settled. We also noted how the FTC’s data security orders tended to be nearly identical from case to case, reflecting the standards of the FTC’s Safeguards Rule. Since the orders imposed nearly identical—and, as LabMD found, vague—remedies in each case, it cannot be said that a common law was developing over time.

What LabMD addressed and what it didn’t

In its decision, the Eleventh Circuit sidestepped the fundamental substantive problems with the FTC’s data security practice regarding notice and substantial injury (problems we have raised in both our scholarship and our LabMD amicus brief). Instead, the court decided to assume the FTC had proven its case and focused exclusively on the remedy.

We will assume arguendo that the Commission is correct and that LabMD’s negligent failure to design and maintain a reasonable data-security program invaded consumers’ right of privacy and thus constituted an unfair act or practice.

What the Eleventh Circuit did address, though, was that the remedies the FTC had been routinely applying to businesses through its data security enforcement actions lacked the specificity necessary to be enforceable through injunctions or cease and desist orders.

In the case at hand, the cease and desist order contains no prohibitions. It does not instruct LabMD to stop committing a specific act or practice. Rather, it commands LabMD to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness. This command is unenforceable. Its unenforceability is made clear if we imagine what would take place if the Commission sought the order’s enforcement…

The Commission moves the district court for an order requiring LabMD to show cause why it should not be held in contempt for violating the following injunctive provision:

[T]he respondent shall … establish and implement, and thereafter maintain, a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers…. Such program… shall contain administrative, technical, and physical safeguards appropriate to respondent’s size and complexity, the nature and scope of respondent’s activities, and the sensitivity of the personal information collected from or about consumers….

The Commission’s motion alleges that LabMD’s program failed to implement “x” and is therefore not “reasonably designed.” The court concludes that the Commission’s alleged failure is within the provision’s language and orders LabMD to show cause why it should not be held in contempt.

At the show cause hearing, LabMD calls an expert who testifies that the data-security program LabMD implemented complies with the injunctive provision at issue. The expert testifies that “x” is not a necessary component of a reasonably designed data-security program. The Commission, in response, calls an expert who disagrees. At this point, the district court undertakes to determine which of the two equally qualified experts correctly read the injunctive provision. Nothing in the provision, however, indicates which expert is correct. The provision contains no mention of “x” and is devoid of any meaningful standard informing the court of what constitutes a “reasonably designed” data-security program. The court therefore has no choice but to conclude that the Commission has not proven — and indeed cannot prove — LabMD’s alleged violation by clear and convincing evidence.

In other words, the Eleventh Circuit found that an order requiring a reasonable data security program is not specific enough to make it enforceable. This leaves questions as to whether the FTC’s requirement of a “reasonable data security program” is specific enough to survive a motion to dismiss and/or a fair notice challenge going forward.

Under the Federal Rules of Civil Procedure, a plaintiff must provide “a short and plain statement . . . showing that the pleader is entitled to relief,” Fed. R. Civ. P. 8(a)(2), including “enough facts to state a claim . . . that is plausible on its face.” Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007). “[T]hreadbare recitals of the elements of a cause of action, supported by mere conclusory statements” will not suffice. Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009). In FTC v. D-Link, for instance, the Northern District of California dismissed the unfairness claims because the FTC did not sufficiently plead injury. 

[T]hey make out a mere possibility of injury at best. The FTC does not identify a single incident where a consumer’s financial, medical or other sensitive personal information has been accessed, exposed or misused in any way, or whose IP camera has been compromised by unauthorized parties, or who has suffered any harm or even simple annoyance and inconvenience from the alleged security flaws in the DLS devices. The absence of any concrete facts makes it just as possible that DLS’s devices are not likely to substantially harm consumers, and the FTC cannot rely on wholly conclusory allegations about potential injury to tilt the balance in its favor. 

The fair notice question wasn’t reached in LabMD, though it was in FTC v. Wyndham. But the Third Circuit did not analyze the FTC’s data security regime under the “ascertainable certainty” standard applied to agency interpretation of a statute.

Wyndham’s position is unmistakable: the FTC has not yet declared that cybersecurity practices can be unfair; there is no relevant FTC rule, adjudication or document that merits deference; and the FTC is asking the federal courts to interpret § 45(a) in the first instance to decide whether it prohibits the alleged conduct here. The implication of this position is similarly clear: if the federal courts are to decide whether Wyndham’s conduct was unfair in the first instance under the statute without deferring to any FTC interpretation, then this case involves ordinary judicial interpretation of a civil statute, and the ascertainable certainty standard does not apply. The relevant question is not whether Wyndham had fair notice of the FTC’s interpretation of the statute, but whether Wyndham had fair notice of what the statute itself requires.

In other words, Wyndham boxed itself into a corner by arguing that it did not have fair notice that the FTC could bring a data security enforcement action against it under Section 5 unfairness. LabMD, on the other hand, argued it did not have fair notice as to how the FTC would enforce its data security standards. Cf. ICLE-Techfreedom Amicus Brief at 19. The Third Circuit even suggested that under an “ascertainable certainty” standard, the FTC failed to provide fair notice: “we agree with Wyndham that the guidebook could not, on its own, provide ‘ascertainable certainty’ of the FTC’s interpretation of what specific cybersecurity practices fail § 45(n).” Wyndham, 799 F.3d at 256 n.21.

Most importantly, the Eleventh Circuit did not reach the issue of whether LabMD actually violated the law on the factual record developed in the case. This means there is still no caselaw (aside from the ALJ decision in this case) that would allow a company to learn what is and is not reasonable data security, or what counts as a substantial injury for the purposes of Section 5 unfairness in data security cases.

How FTC’s changes fundamentally fail to address its failures of process

The FTC’s new approach to its orders is billed as directly responsive to what the Eleventh Circuit did reach in the LabMD decision, but it leaves so much of what makes the process insufficient in place.

First, it is notable that while the FTC highlights changes to its orders, the orders still contain no legal analysis that would allow a company to accurately predict whether its data security practices are sufficient under the law. A listing of what specific companies under consent orders are required to do is helpful. But these consent decrees do not require companies to admit liability, nor do they contain anything close to the reasoning that accompanies court opinions or normal agency guidance on complying with the law.

For instance, the general formulation in these 2019 orders is that the company must “establish, implement, and maintain a comprehensive information/software security program that is designed to protect the security, confidentiality, and integrity of such personal information. To satisfy this requirement, Respondent/Defendant must, at a minimum…” (emphasis added), followed by a list of fairly similar requirements that vary depending on the business. Even if a company implements all of the listed requirements and a breach nonetheless occurs, the FTC is not obligated to find the data security program legally sufficient. There is no safe harbor or presumption of reasonableness even for the business subject to the order, much less for other companies looking for guidance.

While the FTC does now require more specific things, like “yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption,” there is still no analysis of how to meet the standard of reasonableness the FTC relies upon. In other words, it is not clear that this new approach to orders does anything to increase fair notice to companies as to what the FTC requires under Section 5 unfairness.

Second, nothing about the underlying process has really changed. The FTC can still investigate and prosecute cases through its administrative tribunal, with itself as the initial court of appeal. This makes the FTC the police, prosecutor, and judge in its own case. For LabMD, which actually won after many appeals, this process ended in bankruptcy. It is no surprise that since the LabMD decision, each of the FTC’s data security enforcement cases has been settled with a consent order, just as before the Eleventh Circuit’s opinion.

If the FTC really wants its data security process to evolve like the common law, it needs to engage in an actual common law process. Without caselaw on the facts necessary to establish substantial injury, “unreasonable” data security practices, and causation, there will continue to be more questions than answers about what the law requires. And without changes to the process, the FTC will continue to be able to strong-arm companies into consent decrees.

Underpinning many policy disputes is a frequently rehearsed conflict of visions: Should we experiment with policies that are likely to lead to superior, but unknown, solutions, or should we stick to well-worn policies, regardless of how poorly they fit current circumstances?

This conflict is clearly visible in the debate over whether DOJ should continue to enforce its consent decrees with the major music performing rights organizations (“PROs”), ASCAP and BMI—or terminate them. 

As we note in our recently filed comments with the DOJ, summarized below, the world has moved on since the decrees were put in place in 1941. Given the changed circumstances, the DOJ should terminate the consent decrees. This would allow entrepreneurs, armed with modern technology, to facilitate a true market for public performance rights.

The consent decrees

In the early days of radio, it was unclear how composers and publishers could effectively monitor and enforce their copyrights. Thousands of radio stations across the nation were playing the songs that tens of thousands of composers had written. Given the state of technology, there was no readily foreseeable way to enable bargaining between the stations and composers for license fees associated with these plays.

In 1914, a group of rights holders established the American Society of Composers, Authors and Publishers (ASCAP) as a way to overcome these transaction costs by negotiating with radio stations on behalf of all of its members.

Even though ASCAP’s business was clearly aimed at ensuring that rightsholders’ were appropriately compensated for the use of their works, which logically would have incentivized greater output of licensable works, the nonstandard arrangement it embodied was unacceptable to the antitrust enforcers of the era. Not long after it was created, the Department of Justice began investigating ASCAP for potential antitrust violations.

While the agglomeration of rights under a single entity had obvious benefits for licensors and licensees of musical works, a power struggle nevertheless emerged between ASCAP and radio broadcasters over the terms of those licenses. Eventually this struggle led to the formation of a new PRO, the broadcaster-backed BMI, in 1939. The following year, the DOJ challenged the activities of both PROs in dual criminal antitrust proceedings. The eventual result was a set of consent decrees in 1941 that, with relatively minor modifications over the years, still regulate the music industry.

Enter the Internet

The emergence of new ways to distribute music has, perhaps unsurprisingly, resulted in renewed interest from artists in developing alternative ways to license their material. In 2014, BMI and ASCAP asked the DOJ to modify their consent decrees to permit music publishers to partially withdraw from the PROs, which would have enabled those partially-withdrawing publishers to license their works to digital services under separate agreements (and prohibited the PROs from licensing their works to those same services). However, the DOJ rejected this request and insisted that the consent decrees require “full-work” licenses — a result that would have not only entrenched the status quo, but also erased the competitive differences that currently exist between the PROs. (It might also have created other problems, such as limiting collaborations between artists who currently license through different PROs.)

This episode demonstrates a critical flaw in how the consent decrees currently operate. Imposing full-work license obligations on PROs would have short-circuited the limited market that currently exists, to the detriment of creators, competition among PROs, and, ultimately, consumers. Paradoxically, these harms flow directly from a presumption that administrative officials seeking to enforce antitrust law — the ultimate aim of which is to promote competition and consumer welfare — can dictate market terms through top-down regulatory intervention better than participants working together can.

If a PRO wants to offer full-work licenses to its licensee-customers, it should be free to do so (including, e.g., by contracting with other PROs in cases where the PRO in question does not own the work outright). Such licenses could be a great boon to licensees and to the market. But such an innovation would flow from a feedback mechanism in the market, and would remain subject to that same feedback mechanism.

However, for the DOJ as a regulatory overseer to intervene in the market and assert a preference that it deemed superior (but that was clearly not the result of market demand, or subject to market discipline) is fraught with difficulty. And this is the emblematic problem with the consent decrees and the mandated licensing regimes: they allow regulators to imagine that they have both the knowledge and expertise to manage highly complicated markets. But, as Mark Lemley has observed, “[g]one are the days when there was any serious debate about the superiority of a market-based economy over any of its traditional alternatives, from feudalism to communism.”

It is no knock against the DOJ that it patently does not have either the knowledge or expertise to manage these markets: no one does. That’s the entire point of having markets, which facilitate the transmission and effective utilization of vast amounts of disaggregated information, including subjective preferences, that cannot be known to anyone other than the individual who holds them. When regulators can allow this process to work, they should.

Letting the market move forward

Some advocates of the status quo have recommended that the consent orders remain in place, because 

Without robust competition in the music licensing market, consumers could face higher prices, less choice, and an increase in licensing costs that could render many vibrant public spaces silent. In the absence of a truly competitive market in which PROs compete to attract services and other licensees, the consent decrees must remain in place to prevent ASCAP and BMI from abusing their substantial market power.

This gets to the very heart of the problem with the conflict of visions that undergirds policy debates. Advocating for the status quo in this manner is based on a static view of “markets,” one that is, moreover, rooted in an early twentieth-century conception of the relevant industries. The DOJ froze the licensing market in time with the consent decrees — perhaps justifiably in 1941 given the state of technology and the very high transaction costs involved. But technology and business practices have evolved and are now much more capable of handling the complex, distributed set of transactions necessary to make the performance license market a reality.

Believing that the absence of the consent decrees will force the performance licensing market to collapse into an anticompetitive wasteland reflects a failure of imagination and suggests a fundamental distrust in the power of the market to uncover novel solutions—against the overwhelming evidence to the contrary.

Yet those of a dull and pessimistic mindset need not fear unduly the revocation of the consent decrees. For if evidence emerges that market participants (including the PROs and whatever other entities emerge) are engaging in anticompetitive practices to the detriment of consumer welfare, the DOJ can sue those entities. The threat of such actions should in itself be sufficient to deter anticompetitive practices, but if it is not, then the sword of antitrust, including potentially the imposition of consent decrees, can once again be wielded.

Meanwhile, those of us with an optimistic, imaginative mindset look forward to a time in the near future when entrepreneurs devise innovative and cost-effective solutions to the problem of highly distributed music licensing. In some respects their job is made easier by the fact that an increasing proportion of music is streamed via a small number of large companies (Spotify, Pandora, Apple, Amazon, Tencent, YouTube, Tidal, etc.). But it is quite feasible that, in the absence of the consent decrees, new licensing systems will emerge, built on modern database technologies, blockchain, and other distributed ledgers, enabling much more effective usage-based licenses applicable not only to these streaming services but to others as well.
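To give a concrete flavor of what a usage-based license means computationally, here is a minimal sketch of per-play royalty accounting, assuming a simple itemized play log; the work IDs, rates, and record format are all hypothetical, not any existing licensing system’s schema.

```python
# A minimal, hypothetical sketch of usage-based license accounting of
# the kind new entrants might build on modern databases or distributed
# ledgers. Work IDs, rates, and the log format are illustrative
# assumptions.

from collections import defaultdict

# Hypothetical per-play rates negotiated for each musical work.
PER_PLAY_RATE = {
    "work-001": 0.0009,  # dollars per play
    "work-002": 0.0012,
}

# A hypothetical itemized play log reported by a streaming service.
play_log = [
    {"work": "work-001", "service": "ExampleStream"},
    {"work": "work-001", "service": "ExampleStream"},
    {"work": "work-002", "service": "ExampleStream"},
]

def royalties_owed(log):
    """Aggregate per-work royalties from an itemized play log."""
    totals = defaultdict(float)
    for play in log:
        totals[play["work"]] += PER_PLAY_RATE[play["work"]]
    # Round to avoid floating-point noise in the report.
    return {work: round(total, 6) for work, total in totals.items()}

print(royalties_owed(play_log))
# {'work-001': 0.0018, 'work-002': 0.0012}
```

Whatever the storage layer, the arithmetic is the easy part; the entrepreneurial challenge lies in establishing an authoritative, shared record of ownership and plays.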

We hope the DOJ has the foresight to allow such true competition to enter this market and the strength to believe enough in our institutions that it can permit some uncertainty while entrepreneurs experiment with superior methods of facilitating music licensing.

[This post is the fourth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Pallavi Guniganti, editor of Global Competition Review.]

Start with the assumption that there is a problem

The European Commission and Austria’s Federal Competition Authority are investigating Amazon over its use of Marketplace sellers’ data. US senator Elizabeth Warren has said that one reason to require “large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform” is to prevent them from using data they obtain from third parties on the platform to benefit their own participation on the platform.

Amazon tweeted in response to Warren: “We don’t use individual sellers’ data to launch private label products.” However, an Amazon spokeswoman would not answer questions about whether it uses aggregated non-public data about sellers, or data from buyers; and whether any formal firewall prevents Amazon’s retail operation from accessing Marketplace data.

If the problem is solely that Amazon’s own retail operation can access data from the Marketplace, structurally breaking up the company and forbidding it and other platforms from participating on those platforms may be a far more extensive intervention than is needed. A targeted response such as a firewall could remedy the specific competitive harm.

Germany’s Federal Cartel Office implicitly recognised this with its Facebook decision, which did not demand the divestiture of every business beyond the core social network – the “Mark Zuckerberg Production” that began in 2004. Instead, the competition authority prohibited Facebook from conditioning the use of that social network on consent to the collection and combination of data from WhatsApp, Oculus, Masquerade, Instagram and any other sites or apps where Facebook might track them.

The decision does not limit data collection on Facebook itself. “It is taken into account that an advertising-funded social network generally needs to process a large amount of personal data,” the authority said. “However, the Bundeskartellamt holds that the efficiencies in a business model based on personalised advertising do not outweigh the interests of the users when it comes to processing data from sources outside of the social network.”

The Federal Cartel Office thus aims to wall off the data collected on Facebook from data that can be collected anywhere else. It ordered Facebook to present a road map for how it would implement these changes within four months of the February 2019 decision, but the time limit was suspended by the company’s emergency appeal to the Düsseldorf Higher Regional Court.

Federal Cartel Office president Andreas Mundt has described the kind of remedy he had ordered for Facebook as not exactly structural, but going in a “structural direction” that might work for other cases as well. Keeping the data apart is a way to “break up this market power” without literally breaking up the corporation, and the first step to an “internal divestiture”, he said.

Mundt claimed that this kind of remedy gets to “the core of the problem”: big internet companies being able to out-compete new entrants, because the former can obtain and process data even beyond what they collected on a single service that has attracted a large number of users.

He used terms like “silo” rather than “firewall”, but the essential idea is to protect competition by preventing the dissemination of certain information. Antitrust authorities worldwide have considered firewalls, particularly in vertical merger remedies, as a way to prevent the anticompetitive movement of data while still allowing for some efficiencies of business units being under the same corporate umbrella.

Notwithstanding Mundt’s reference to a “structural direction”, competition authorities including his own have traditionally classified firewalls as a behavioural or conduct remedy. They purport to solve a specific problem: the movement of information.

Other aspects of big companies that can give them an advantage – such as the use of profits from one part of a company to invest in another part, perhaps to undercut rivals on prices – would not be addressed by firewalls. They would more likely require dividing up a company at the corporate level.

But if data are the central concern, then the way forward might be found in firewalls.

What do the enforcers say?

Germany

The Federal Cartel Office’s May 2017 guidance on merger remedies disfavours firewalls, stating that such obligations are “not suitable to remedy competitive harm” because they require continuous oversight. Employees of a corporation in almost any sector will commonly exchange information on a daily basis in almost every industry, making it “extremely difficult to identify, stop and prevent non-compliance with the firewall obligations”, the guidance states. In a footnote, it acknowledges that other, unspecified jurisdictions have regarded firewalls “as an effective remedy to remove competition concerns”.

UK

The UK’s Competition and Markets Authority takes a more optimistic view of the ability to keep a firewall in place, at least in the context of a vertical integration to prevent the use of “privileged information generated by competitors’ use of the merged company’s facilities or products”. In addition to setting up the company to restrict information flows, staff interactions and the sharing of services, physical premises and management, the CMA also requires the commitment of “significant resources to educating staff about the requirements of the measures and supporting the measures with disciplinary procedures and independent monitoring”. 

EU

The European Commission’s merger remedies notice is quite short. It does not mention firewalls or Chinese walls by name, simply noting that any non-structural remedy is problematic “due to the absence of effective monitoring of its implementation” by the commission or even other market participants. A 2011 European Commission submission to the Organisation for Economic Co-operation and Development was gloomier: “We have also found that firewalls are virtually impossible to monitor.”

US DOJ

The US antitrust agencies have been inconsistent in their views, and not on a consistent partisan basis. Under George W Bush, the Department of Justice’s antitrust division’s 2004 merger guidance said “a properly designed and enforced firewall” could prevent certain competition harms. But it also would require the DOJ and courts to expend “considerable time and effort” on monitoring, and “may frequently destroy the very efficiency that the merger was designed to generate. For these reasons, the use of firewalls in division decrees is the exception and not the rule.”

Under Barack Obama, the Antitrust Division revised its guidance in 2011 to omit the most sceptical language about firewalls, replacing it with a single sentence about the need for effective monitoring. Under Donald Trump, the Antitrust Division has withdrawn the 2011 guidance, and the 2004 guidance is operative.

US FTC

At the Federal Trade Commission, on the other hand, firewalls had long been relatively uncontroversial among both Republicans and Democrats. For example, the commissioners unanimously agreed to a firewall remedy for PepsiCo’s and Coca-Cola’s separate 2010 acquisitions of bottlers and distributors that also dealt with a rival beverage maker, the Dr Pepper Snapple Group. (The FTC later emphasised the importance in those cases of obtaining industry expert monitors, who “have provided commission staff with invaluable insight and evaluation regarding each company’s compliance with the commission’s orders”.)

In 2017, the two commissioners who remained from the Obama administration both signed off on the Broadcom/Brocade merger based on a firewall – as did the European Commission, which also mandated interoperability commitments. And the Democratic commissioners appointed by President Trump voted with their Republican colleagues in 2018 to clear the Northrop Grumman/Orbital ATK deal subject to a behavioural remedy that included supply commitments and firewalls.

Several months later, however, those Democrats dissented from the FTC’s approval of Staples/Essendant, which the agency conditioned solely on a firewall between Essendant’s wholesale business and the Staples unit that handles corporate sales. While a firewall to prevent Staples from exploiting Essendant’s commercially-sensitive data about Staples’ rivals “will reduce the chance of misuse of data, it does not eliminate it,” Commissioner Rohit Chopra said. He emphasised the difficulty of policing oral communications, and said the FTC instead could have required Essendant to return its customers’ data. Commissioner Rebecca Kelly Slaughter said she shared Chopra’s “concerns about the efficacy of the firewall to remedy the information sharing harm”.

The majority defended firewalls’ effectiveness, noting that the agency had used them to solve competition concerns in past vertical mergers, “and the integrity of those firewalls was robust.” The Republican commissioners cited the FTC’s review of the merger remedies it had imposed from 2006 to 2012, which concluded: “All vertical merger orders were judged successful.”

Republican commissioner Christine Wilson wrote separately about the importance of choosing “a remedy that is narrowly tailored to address the likely competitive harms without doing collateral damage.” Certain behavioural remedies for past vertical mergers had gone too far and even resulted in less competition, she said. “I have substantially fewer qualms about long-standing and less invasive tools, such as the ‘firewalls, fair dealing, and transparency provisions’ the Antitrust Division endorsed in the 2004 edition of its Policy Guide.”

Why firewalls don’t work, especially for big tech

Firewalls are designed to prevent the anticompetitive harm of information exchange, but whether they work depends on whether the companies and their employees behave themselves – and if they do not, on whether antitrust enforcers can know it and prove it. Deputy assistant attorney general Barry Nigro at the Antitrust Division has questioned the effectiveness of firewalls as a remedy for deals where the relevant business units are operationally close. The same problem may arise outside the merger context.

For example, Amazon’s investment fund for products to complement its Alexa virtual assistant could be seen as having the kind of firewall that is undercut by the practicalities of how a business operates. CNBC reported in September 2017 that “Alexa Fund representatives called a handful of its portfolio companies to say a clear ‘firewall’ exists between the Alexa Fund and Amazon’s product development teams.” The chief executive from Nucleus, one of those portfolio companies, had complained that Amazon’s Echo Show was a copycat of Nucleus’s product. While Amazon claimed that the Alexa Fund has “measures” to ensure “appropriate treatment” of confidential information, the companies said the process of obtaining the fund’s investment required them to work closely with Amazon’s product teams.

CNBC contrasted this with Intel Capital – a division of the technology company that manages venture capital and investment – where a former managing director said he and his colleagues “tried to be extra careful not to let trade secrets flow across the firewall into its parent company”.

Firewalls are commonplace to corporate lawyers, who impose temporary blocks on the transmission of information in a variety of situations, such as during due diligence on a deal. This experience may lead such attorneys to put more faith in firewalls than enforcement advocates do.

Diana Moss, the president of the American Antitrust Institute, says that like other behavioral remedies, firewalls “don’t change any incentive to exercise market power”. In contrast, structural remedies eliminate that incentive by removing the part of the business that would make the exercise of market power profitable.

No internal monitoring or compliance ensures the firewall is respected, Moss says, unless a government consent order installs a monitor in a company to make sure the business units aren’t sharing information. This would be unlikely to occur, she says.

Moss’s 2011 white paper on behavioural merger remedies, co-authored with John Kwoka, reviews how well such remedies have worked. It notes that “information firewalls in Google-ITA and Comcast-NBCU clearly impede the joint operation and coordination of business divisions that would otherwise naturally occur.” 

Lina Khan’s 2019 Columbia Law Review article, “The Separation of Platforms and Commerce,” repeatedly cites Moss and Kwoka in the course of arguing that non-separation solutions such as firewalls do not work.

Khan concedes that information firewalls “in theory could help prevent information appropriation by dominant integrated firms.” But regulating the dissemination of information is especially difficult “in multibillion dollar markets built around the intricate collection, combination, and sale of data”, as companies in those markets “will have an even greater incentive to combine different sets of information”.

Why firewalls might work, especially for big tech

Yet neither Khan nor Moss points to an example of a firewall that clearly did not work. Khan writes: “Whether the [Google-ITA] information firewall was successful in preventing Google from accessing rivals’ business information is not publicly known. A year after the remedy expired, Google shut down” the application programming interface, through which ITA had provided its customisable flight search engine.

Even as enforcement advocates throw doubt on firewalls, enforcers keep requiring them. China’s Ministry of Commerce even used them to remedy a horizontal merger, in two stages of its conditions on Western Digital’s acquisition of Hitachi’s hard disk drive business.

If German courts allow Andreas Mundt’s remedy for Facebook to go into effect, it will provide an example of just how effective a firewall can be on a platform. The decision requires Facebook to detail its technical plan to implement the obligation not to share data on users from its subsidiaries and its tracking on independent websites and apps.

A section of the “frequently asked questions” about the Federal Cartel Office’s Facebook case includes: “How can the Bundeskartellamt enforce the implementation of its decision?” The authority can impose fines for known non-compliance, but that assumes it could detect violations of its order. Somewhat tentatively, the agency says it could carry out random monitoring, which is “possible in principle… as the actual flow of data eg from websites to Facebook can be monitored by analysing websites and their components or by recording signals.”
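As a rough illustration of what such random monitoring could involve, here is a minimal sketch that checks whether a publicly fetchable page statically embeds known Facebook components; the endpoint list and function name are illustrative assumptions, not the Bundeskartellamt’s actual tooling.

```python
# A minimal, hypothetical sketch of the "random monitoring" idea:
# fetch a page and check whether its HTML references known Facebook
# endpoints. Endpoint list and function are illustrative assumptions.

import urllib.request

# Common Facebook tracking endpoints (illustrative, not exhaustive).
FACEBOOK_ENDPOINTS = (
    "connect.facebook.net",  # SDK / pixel loader
    "facebook.com/tr",       # pixel tracking endpoint
    "graph.facebook.com",    # Graph API
)

def page_references_facebook(url: str) -> bool:
    """Fetch a page's HTML and report whether it embeds any known
    Facebook component."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return any(endpoint in html for endpoint in FACEBOOK_ENDPOINTS)

# Example (hypothetical URL):
# print(page_references_facebook("https://example.com"))
```

A static check of this kind mirrors the authority’s “analysing websites and their components”; catching dynamically loaded trackers or server-to-server “signals” would require heavier instrumentation.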

As perhaps befits the digital difference between Staples and Facebook, the German authority posits monitoring that would not be able to catch the kind of “oral communications” that Commissioner Chopra worried about when the US FTC cleared Staples’ acquisition of Essendant. But the use of such high-tech monitors could make firewalls even more appropriate as a remedy for platforms – which look to large data flows for a competitive advantage – than for old-economy sales teams that could harm competition with just a few minutes of conversation.

Rather than a human monitor installed in a company to guard against firewall breaches, which Moss said was unlikely, software installed on employee computers and email systems might detect data flows between business units that should be walled off from each other. Breakups and firewalls are both longstanding remedies, but the latter may be more amenable to the kind of solutions that “big tech” itself has provided.
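Here is a minimal sketch of what such automated cross-unit detection might look like, assuming a simplified log of internal transfers; the unit names, walled-off pairs, and record format are all hypothetical, not any real compliance product’s design.

```python
# A minimal, hypothetical sketch of software-based firewall monitoring:
# scan a log of internal data transfers and flag any that cross a
# walled-off boundary. Unit names, pairs, and the record format are
# illustrative assumptions.

from dataclasses import dataclass

# Pairs of business units that an order hypothetically walls off.
WALLED_OFF = {("wholesale_unit", "corporate_sales_unit")}

@dataclass
class Transfer:
    sender_unit: str
    recipient_unit: str
    description: str

def violates_firewall(t: Transfer) -> bool:
    """Flag any transfer that crosses a walled-off boundary,
    in either direction."""
    pair = (t.sender_unit, t.recipient_unit)
    return pair in WALLED_OFF or pair[::-1] in WALLED_OFF

# Example: scan a hypothetical transfer log.
log = [
    Transfer("wholesale_unit", "wholesale_unit", "inventory sync"),
    Transfer("wholesale_unit", "corporate_sales_unit",
             "customer pricing export"),
]

for t in log:
    if violates_firewall(t):
        print(f"ALERT: cross-wall transfer: {t.description}")
```

Real systems would still have to contend with encrypted channels, informal communications, and deliberate evasion — precisely the monitoring difficulty the enforcers quoted above emphasise.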

Big is bad, part 1: Kafka, Coase, and Brandeis walk into a bar … There’s a quip in a well-known textbook that Nobel laureate Ronald Coase said he’d grown weary of antitrust because when prices went up, the judges said it was monopoly; when the prices went down, they said it was predatory pricing; and when they stayed the same, they said it was tacit collusion. ICLE’s Geoffrey Manne and Gus Hurwitz worry that with the rise of the neo-Brandeisians, not much has changed since Coase’s time:

[C]ompetition, on its face, is virtually indistinguishable from anticompetitive behavior. Every firm strives to undercut its rivals, to put its rivals out of business, to increase its rivals’ costs, or to steal its rivals’ customers. The consumer welfare standard provides courts with a concrete mechanism for distinguishing between good and bad conduct, based not on the effect on rival firms but on the effect on consumers. Absent such a standard, any firm could potentially be deemed to violate the antitrust laws for any act it undertakes that could impede its competitors.

Big is bad, part 2. A working paper published by researchers from Denmark and the University of California at Berkeley suggests that companies such as Google, Apple, Facebook, and Nike are taking advantage of so-called “tax havens” to cause billions of dollars of income to go “missing.” There’s a lot of mumbo jumbo in this one, but it’s getting lots of attention.

We show theoretically and empirically that in the current international tax system, tax authorities of high-tax countries do not have incentives to combat profit shifting to tax havens. They instead focus their enforcement effort on relocating profits booked in other high-tax places—in effect stealing revenue from each other.

Big is bad, part 3: Can any country survive with debt-to-GDP of more than 100 percent? Apparently, the answer is “yes.” The U.K. went 80 years, from 1779 to 1858. Then, it went 47 years from 1916 to 1962. Tim Harford has a fascinating story about an effort to clear the country’s debt in that second run.

In 1928, an anonymous donor resolved to clear the UK’s national debt and gave £500,000 with that end in mind. It was a tidy sum — almost £30m at today’s prices — but not nearly enough to pay off the debt. So it sat in trust, accumulating interest, for nearly a century.

How do you make a small fortune? Begin with a big one. A lesson from Johnny Depp.

Will we ever stop debating the Trolley Problem? Apparently the answer is “no.” Also, TIL there’s a field of research that relies on “notions.”

For so long, moral psychology has relied on the notion that you can extrapolate from people’s decisions in hypothetical thought experiments to infer something meaningful about how they would behave morally in the real world. These new findings challenge that core assumption of the field.


The week that was on Truth on the Market

LabMD.

[T]argets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

Google Android.

Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

AT&T-Time Warner. First this:

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

Then this:

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content.


The Eleventh Circuit’s LabMD opinion came out last week and has been something of a rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke of the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from it being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to being about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.” This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseam), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’s authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) also targets conduct that is merely “likely to cause” harm.

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like they are driven by public policy, and not so much like enforcement of existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case does not clearly explain how to draw that distinction. The opinion is more than a bit muddled and difficult to interpret. Nonetheless, reading the court’s dicta in Part II is instructive. It is clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, those well-established legal standards can be changed — through adjudication. It is difficult to square these two propositions. Regardless, this is the guidance the court has given us.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, those rules can hardly provide guidance for private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.  

The principles set out in Chenery II, of course, do not answer the question whether the FTC properly exercised its data security authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court’s decision to sidestep that question and to ask instead whether the FTC sufficiently defined what it was doing in the first place.

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly-worded complaints, creates the vagueness that the Court in Chenery II rejected, and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally-binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard than that faced in district court, and undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

As has been rumored in the press for a few weeks, today Comcast announced it is considering making a renewed bid for a large chunk of Twenty-First Century Fox’s (Fox) assets. Fox is in the process of a significant reorganization, entailing primarily the sale of its international and non-television assets. Fox itself will continue, but with a focus on its US television business.

In December of last year, Fox agreed to sell these assets to Disney, in the process rejecting a bid from Comcast. Comcast’s initial bid was some 16% higher than Disney’s, although there were other differences in the proposed deals, as well.

In April of this year, Disney and Fox filed a proxy statement with the SEC explaining the basis for the board’s decision, including predominantly the assertion that the Comcast bid (NB: Comcast is identified as “Party B” in that document) presented greater regulatory (antitrust) risk.

As noted, today Comcast announced it is in “advanced stages” of preparing another unsolicited bid. This time,

Any offer for Fox would be all-cash and at a premium to the value of the current all-share offer from Disney. The structure and terms of any offer by Comcast, including with respect to both the spin-off of “New Fox” and the regulatory risk provisions and the related termination fee, would be at least as favorable to Fox shareholders as the Disney offer.

As we now know from the April proxy filing, Fox’s board rejected Comcast’s earlier offer largely on the basis of the board’s assessment of the antitrust risk it presented. Because that risk assessment (along with the difference between an all-cash and an all-share offer) would now be the primary distinguishing feature between Comcast’s and Disney’s bids, it is worth evaluating that conclusion as Fox and its shareholders consider Comcast’s new bid.

In short: There is no basis for ascribing a greater antitrust risk to Comcast’s purchase of Fox’s assets than to Disney’s.

Summary of the Proposed Deal

Post-merger, Fox will continue to own Fox News Channel, Fox Business Network, Fox Broadcasting Company, Fox Sports, Fox Television Stations Group, and sports cable networks FS1, FS2, Fox Deportes, and Big Ten Network.

The deal would transfer to Comcast (or Disney) the following:

  • Primarily, international assets, including Fox International (cable channels in Latin America, the EU, and Asia), Star India (the largest cable and broadcast network in India), and Fox’s 39% interest in Sky (Europe’s largest pay TV service).
  • Fox’s film properties, including 20th Century Fox, Fox Searchlight, and Fox Animation. These would bring along with them studios in Sydney and Los Angeles, but would not include the Fox Los Angeles backlot. Like the rest of the US film industry, the majority of Fox’s film revenue is earned overseas.
  • FX cable channels, National Geographic cable channels (of which Fox currently owns 75%), and twenty-two regional sports networks (RSNs). In terms of relative demand for the two cable networks, FX is a popular basic cable channel, but fairly far down the list of most-watched channels, while National Geographic doesn’t even crack the top 50. Among the RSNs, only one geographic overlap exists with Comcast’s current RSNs, and most of the Fox RSNs (at least 14 of the 22) are not in areas where Comcast has a substantial service presence.
  • The deal would also entail a shift in the companies’ ownership interests in Hulu. Hulu is currently owned in equal 30% shares by Disney, Comcast, and Fox, with the remaining, non-voting 10% owned by Time Warner. Either Comcast or Disney would hold a controlling 60% share of Hulu following the deal with Fox (its own existing 30% stake combined with Fox’s 30%).

Analysis of the Antitrust Risk of a Comcast/Fox Merger

According to the joint proxy statement, Fox’s board discounted Comcast’s original $34.36/share offer — but not the $28.00/share offer from Disney — because of “the level of regulatory issues posed and the proposed risk allocation arrangements.” Significantly, on this basis, the Fox board determined Disney’s offer to be superior.

The claim that a merger with Comcast poses sufficiently greater antitrust risk than a purchase by Disney to warrant its rejection out of hand is unsupportable, however. From an antitrust perspective, it is even plausible that a Comcast acquisition of the Fox assets would be on more-solid ground than would be a Disney acquisition.

Vertical Mergers Generally Present Less Antitrust Risk

A merger between Comcast and Fox would be predominantly vertical, while a merger between Disney and Fox, in contrast, would be primarily horizontal. Generally speaking, it is easier to get antitrust approval for vertical mergers than it is for horizontal mergers. As Bruce Hoffman, Director of the FTC’s Bureau of Competition, noted earlier this year:

[V]ertical merger enforcement is still a small part of our merger workload….

There is a strong theoretical basis for horizontal enforcement because economic models predict at least nominal potential for anticompetitive effects due to elimination of horizontal competition between substitutes.

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

On its face, and consistent with the last quarter century of merger enforcement by the DOJ and FTC, the Comcast acquisition would be less likely to trigger antitrust scrutiny, while the Disney acquisition would raise more straightforward antitrust issues.

This is true even in light of the fact that the DOJ decided to challenge the AT&T-Time Warner (AT&T/TWX) merger.

The AT&T/TWX merger is a single data point in a long history of successful vertical mergers that attracted little scrutiny, and no litigation, by antitrust enforcers (although several have been approved subject to consent orders).

Just because the DOJ challenged that one merger does not mean that antitrust enforcers generally, or even the DOJ in particular, have suddenly become more hostile to vertical mergers.

Of particular importance to the conclusion that the AT&T/TWX merger challenge is of minimal relevance to predicting the DOJ’s reception in this case, the theory of harm argued by the DOJ in that case is far from well-accepted, while the potential theory that could underpin a challenge to a Disney/Fox merger is. As Bruce Hoffman further remarks:

I am skeptical of arguments that vertical mergers cause harm due to an increased bargaining skill; this is likely not an anticompetitive effect because it does not flow from a reduction in competition. I would contrast that to the elimination of competition in a horizontal merger that leads to an increase in bargaining leverage that could raise price or reduce output.

The Relatively Lower Risk of a Vertical Merger Challenge Hasn’t Changed Following the DOJ’s AT&T/Time Warner Challenge

Judge Leon is expected to rule on the AT&T/TWX merger in a matter of weeks. The theory underpinning the DOJ’s challenge is problematic (to say the least), and the case it presented was decidedly weak. But no litigated legal outcome is ever certain, and the court could, of course, rule against the merger nevertheless.

Yet even if the court does rule against the AT&T/TWX merger, this hardly suggests that a Comcast/Fox deal would create a greater antitrust risk than would a Disney/Fox merger.

A single successful challenge to a vertical merger — what would be, in fact, the first successful vertical merger challenge in four decades — doesn’t mean that the courts are becoming hostile to vertical mergers any more than the DOJ’s challenge means that vertical mergers suddenly entail heightened enforcement risk. Rather, it would simply mean that, given the specific facts of the case, the DOJ was able to make out its prima facie case, and that the defendants were unable to rebut it.

A ruling for the DOJ in the AT&T/TWX merger challenge would be rooted in a highly fact-specific analysis that could have no direct bearing on future cases.

In the AT&T/TWX case, the court’s decision will turn on its assessment of the DOJ’s argument that the merged firm could raise subscriber prices by a few pennies per subscriber. But as AT&T’s attorney aptly pointed out at trial (echoing the testimony of AT&T’s economist, Dennis Carlton):

The government’s modeled price increase is so negligible that, given the inherent uncertainty in that predictive exercise, it is not meaningfully distinguishable from zero.

Even minor deviations from the facts or the assumptions used in the AT&T/TWX case could completely upend the analysis — and there are important differences between the AT&T/TWX merger and a Comcast/Fox merger. True, both would be largely vertical mergers that would bring together programming and distribution assets in the home video market. But the foreclosure effects touted by the DOJ in the AT&T/TWX merger are seemingly either substantially smaller or entirely non-existent in the proposed Comcast/Fox merger.

Most importantly, the content at issue in AT&T/TWX is at least arguably (and, in fact, argued by the DOJ) “must have” programming — Time Warner’s premium HBO channels and its CNN news programming, in particular, were central to the DOJ’s foreclosure argument. By contrast, the programming that Comcast would pick up as a result of the proposed merger with Fox — FX (a popular, but non-essential, basic cable channel) and National Geographic channels (which attract a tiny fraction of cable viewing) — would be extremely unlikely to merit that designation.

Moreover, the DOJ made much of the fact that AT&T, through DirecTV, has a national distribution footprint. As a result, its analysis depended on the company’s potential ability, in every market in the country, to attract new subscribers decamping from competing providers from whom it withholds access to Time Warner content. Comcast, on the other hand, provides cable service in only about 35% of the country. This significantly limits its ability to credibly threaten competitors, because its ability to recoup lost licensing fees by picking up new subscribers is so much more limited.
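
A rough way to see why footprint matters to the foreclosure story (a back-of-the-envelope illustration of my own, not a model drawn from the case):

$$\text{Expected recoupment} \;\approx\; f \times s \times m$$

where $f$ is the fraction of the country in which the distributor can actually capture decamping subscribers ($f \approx 1$ for DirecTV’s national satellite footprint versus $f \approx 0.35$ for Comcast), $s$ is the number of subscribers who would switch in response to withheld content, and $m$ is the margin earned on each captured subscriber. Holding $s$ and $m$ fixed, Comcast’s expected recoupment from a withholding strategy is roughly a third of what AT&T/DirecTV could expect, which is why its foreclosure threat is correspondingly less credible.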

And while some RSNs may offer some highly prized live sports programming, the mismatch between Comcast’s footprint and the FOX RSNs (only about 8 of the 22 Fox RSNs are in Comcast service areas) severely limits any ability or incentive the company would have to leverage that content for higher fees. Again, to the extent that RSN programming is not “must-have,” and to the extent there is not overlap between the RSN’s geographic area and Comcast’s service area, the situation is manifestly not the same as the one at issue in the AT&T/TWX merger.

In sum, a ruling in favor of the DOJ in the AT&T/TWX case would be far from decisive in predicting how the agency and the courts would assess any potential concerns arising from Comcast’s ownership of Fox’s assets.

A Comcast/Fox Deal May Entail Lower Antitrust Risk than a Disney/Fox Merger

As discussed below, concerns about antitrust enforcement risk from a Comcast/Fox merger are likely overstated. Perhaps more importantly, however, to the extent these concerns are legitimate, they apply at least as much to a Disney/Fox merger. There is, at minimum, no basis for assuming a Comcast deal would present any greater regulatory risk.

The Antitrust Risk of a Comcast/Fox Merger Is Likely Overstated

The primary theory upon which antitrust enforcers could conceivably base a Comcast/Fox merger challenge would be a vertical foreclosure theory. Importantly, such a challenge would have to be based on the incremental effect of adding the Fox assets to Comcast, and not on the basis of its existing assets. Thus, for example, antitrust enforcers would not be able to base a merger challenge on the possibility that Comcast could leverage NBC content it currently owns to extract higher fees from competitors. Rather, only if the combination of NBC programming with additional content from Fox could create a new antitrust risk would a case be tenable.

Enforcers would be unlikely to view the addition of FX and National Geographic to the portfolio of programming content Comcast currently owns as sufficient to raise concerns that the merger would give Comcast anticompetitive bargaining power or the ability to foreclose access to its content.

Although even less likely, enforcers could be concerned with the (horizontal) addition of 20th Century Fox filmed entertainment to Universal’s existing film production and distribution business. But the theatrical film market is undeniably competitive, with the largest studio by revenue (Disney) holding only 22% of the market last year. The combination of 20th Century Fox with Universal would still result in a market share of only around 25% based on 2017 revenues (and, depending on the year, would not even be the industry’s largest share).
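
To put rough numbers on that claim (using approximate 2017 domestic box office shares; these are my own illustrative figures, not numbers drawn from the proxy materials):

$$\underbrace{\approx 14\%}_{\text{Universal}} + \underbrace{\approx 12\%}_{\text{Fox}} \approx 26\% \qquad\text{vs.}\qquad \underbrace{\approx 22\%}_{\text{Disney}} + \underbrace{\approx 12\%}_{\text{Fox}} \approx 34\%$$

On these figures a Comcast/Fox combination lands near the 25% mark noted above, while a Disney/Fox combination would produce a clear industry leader, a contrast that previews the relative horizontal concerns discussed below.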

There is also little reason to think that a Comcast controlling interest in Hulu would attract problematic antitrust attention. Comcast has already demonstrated an interest in diversifying its revenue across cable subscriptions and licensing, broadband subscriptions, and licensing to OVDs, as evidenced by its recent deal to offer Netflix as part of its Xfinity packages. Hulu likely presents just one more avenue for pursuing this same diversification strategy. And Universal has a history (see, e.g., this, this, and this) of very broad licensing across cable providers, cable networks, OVDs, and the like.

In the case of Hulu, moreover, the fact that Comcast is vertically integrated in broadband as well as cable service likely reduces the anticompetitive risk because more-attractive OVD content has the potential to increase demand for Comcast’s broadband service. Broadband offers larger margins (and is growing more rapidly) than cable, and it’s quite possible that any loss in Comcast’s cable subscriber revenue from Hulu’s success would be more than offset by gains in its content licensing and broadband subscription revenue. The same, of course, goes for Comcast’s incentives to license content to OVD competitors like Netflix: Comcast plausibly gains broadband subscription revenue from heightened consumer demand for Netflix, and this at least partially offsets any possible harm to Hulu from Netflix’s success.
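
The offset logic can be captured in a simple (again, purely illustrative) profit condition: vertical integration into broadband blunts any incentive to hobble Hulu or rival OVDs whenever

$$\Delta\pi \;=\; \Delta R_{\text{broadband}} \;+\; \Delta R_{\text{licensing}} \;-\; \Delta R_{\text{cable}} \;>\; 0,$$

that is, whenever the broadband subscription and content licensing revenue gained from more-attractive OVD offerings exceeds any cable subscription revenue lost to cord-cutting. Given broadband’s higher margins and faster growth, that condition is at least plausible for Comcast.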

At the same time, especially relative to Netflix’s vast library of original programming (an expected $8 billion worth in 2018 alone) and content licensed from other sources, the additional content Comcast would gain from a merger with Fox is not likely to appreciably increase its bargaining leverage or its ability to foreclose Netflix’s access to its content.     

Finally, Comcast’s ownership of Fox’s RSNs could, as noted, raise antitrust enforcers’ eyebrows. Enforcers could be concerned that Comcast would condition competitors’ access to RSN programming on higher licensing fees or prioritization of its NBC Sports channels.

While this is indeed a potential risk, it is hardly a foregone conclusion that it would draw an enforcement action. Among other things, NBC is far from the market leader, and improving its competitive position relative to ESPN could be viewed as a benefit of the deal. In any case, potential problems arising from ownership of the RSNs could easily be dealt with through divestiture or behavioral conditions; they are extremely unlikely to lead to an outright merger challenge.

The Antitrust Risk of a Disney Deal May Be Greater than Expected

While a Comcast/Fox deal is not entirely free of antitrust enforcement risk, it certainly doesn’t entail sufficient risk to deem the deal dead on arrival. Moreover, it may entail less antitrust enforcement risk than would a Disney/Fox tie-up.

Yet, curiously, the joint proxy statement doesn’t mention any antitrust risk from the Disney deal at all and seems to suggest that the Fox board applied no risk discount in evaluating Disney’s bid.

Disney — already the market leader in the filmed entertainment industry — would acquire an even larger share of box office proceeds (and associated licensing revenues) through acquisition of Fox’s film properties. Perhaps even more important, the deal would bring the movie rights to almost all of the Marvel Universe within Disney’s ambit.

While, as suggested above, even that combination probably wouldn’t trigger any sort of market power presumption, it would certainly create an entity with a larger share of the market and stronger control of the industry’s most valuable franchises than would a Comcast/Fox deal.

Another relatively larger complication for a Disney/Fox merger arises from the prospect of combining Fox’s RSNs with ESPN. Whatever ability or incentive either company would have to engage in anticompetitive conduct surrounding sports programming, that risk would seem to be more significant for the undisputed market leader, Disney. At the same time, although ESPN remains powerful, demand for it on cable has been flagging. Disney could well see the ability to bundle ESPN with regional sports content as a way to prop up subscription revenues for ESPN — a practice, in fact, that it has employed successfully in the past.

Finally, it must be noted that licensing of consumer products is an even bigger driver of revenue from filmed entertainment than is theatrical release. No other company comes close to Disney in this space.

Disney is the world’s largest licensor, earning almost $57 billion in 2016 from licensing properties like Star Wars and Marvel Comics. Universal is in a distant 7th place, with 2016 licensing revenue of about $6 billion. Adding Fox’s (admittedly relatively small) licensing business would enhance Disney’s substantial lead (even the number two global licensor, Meredith, earned less than half of Disney’s licensing revenue in 2016). Again, this is unlikely to be a significant concern for antitrust enforcers, but it is notable that, to the extent it might be an issue, it is one that applies to Disney and not Comcast.

Conclusion

Although I hope to address these issues in greater detail in the future, for now the preliminary assessment is clear: There is no legitimate basis for ascribing a greater antitrust risk to a Comcast/Fox deal than to a Disney/Fox deal.

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.  

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first-time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year-long investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, and without a license from Yelp (asserting fair use), Google displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt-out of having even snippets displayed in local search results by committing Google to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt-out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent that Google enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies, the opt-out suffices. But for almost everyone else the opt-out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to [] address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his CID to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own, follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.  

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of CPAs is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the spectre of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.  

Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As Former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress was struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it’s implemented it in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullin’s SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the Court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

Yesterday a federal district court in Washington state granted the FTC’s motion for summary judgment against Amazon in FTC v. Amazon — the case alleging unfair trade practices in Amazon’s design of the in-app purchases interface for apps available in its mobile app store. The headlines score the decision as a loss for Amazon, and the FTC, of course, claims victory. But the court also granted Amazon’s motion for partial summary judgment on a significant aspect of the case, and the Commission’s win may be decidedly pyrrhic.

While the district court (very wrongly, in my view) essentially followed the FTC in deciding that a well-designed user experience doesn’t count as a consumer benefit for assessing substantial harm under the FTC Act, it rejected the Commission’s request for a permanent injunction against Amazon. It also called into question the FTC’s calculation of monetary damages. These last two may be huge. 

The FTC may have “won” the case, but it’s becoming increasingly apparent why it doesn’t want to take these cases to trial. First in Wyndham, and now in Amazon, courts have begun to chip away at the FTC’s expansive Section 5 discretion, even while handing the agency nominal victories.

The Good News

The FTC largely escapes judicial oversight in cases like these because its targets almost always settle (Amazon is a rare exception). These settlements — consent orders — typically impose detailed 20-year injunctions and give the FTC ongoing oversight of the companies’ conduct for the same period. The agency has wielded the threat of these consent orders as a powerful tool to micromanage tech companies, and it currently has at least one consent order in place with Twitter, Google, Apple, Facebook and several others.

As I wrote in a WSJ op-ed on these troubling consent orders:

The FTC prefers consent orders because they extend the commission’s authority with little judicial oversight, but they are too blunt an instrument for regulating a technology company. For the next 20 years, if the FTC decides that Google’s product design or billing practices don’t provide “express, informed consent,” the FTC could declare Google in violation of the new consent decree. The FTC could then impose huge penalties—tens or even hundreds of millions of dollars—without establishing that any consumer had actually been harmed.

Yesterday’s decision makes that outcome less likely. Companies will be much less willing to succumb to the FTC’s 20-year oversight demands if they know that courts may refuse the FTC’s injunction request and accept companies’ own, independent and market-driven efforts to address consumer concerns — without any special regulatory micromanagement.

In the same vein, while the court did find that Amazon was liable for repayment of unauthorized charges made without “express, informed authorization,” it also found the FTC’s monetary damages calculation questionable and asked for further briefing on the appropriate amount. If, as seems likely, it ultimately refuses to simply accept the FTC’s damages claims, that, too, will take some of the wind out of the FTC’s sails. Other companies have settled with the FTC and agreed to 20-year consent decrees in part, presumably, because of the threat of excessive damages if they litigate. That, too, is now less likely to happen.

Collectively, these holdings should help to force the FTC to better target its complaints to cases of still-ongoing and truly-harmful practices — the things the FTC Act was really meant to address, like actual fraud. Tech companies trying to navigate ever-changing competitive waters by carefully constructing their user interfaces and payment mechanisms (among other things) shouldn’t be treated the same way as fraudulent phishing scams.

The Bad News

The court’s other key holding is problematic, however. In essence, the court, like the FTC, seems to believe that regulators are better than companies’ product managers, designers and engineers at designing app-store user interfaces:

[A] clear and conspicuous disclaimer regarding in-app purchases and request for authorization on the front-end of a customer’s process could actually prove to… be more seamless than the somewhat unpredictable password prompt formulas rolled out by Amazon.

Never mind that Amazon has undoubtedly spent tremendous resources researching and designing the user experience in its app store. And never mind that — as Amazon is certainly aware — a consumer’s experience of a product is make-or-break in the cut-throat world of online commerce, advertising and search (just ask Jet).

Instead, for the court (and the FTC), the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible.

As I’ve written previously:

Amazon has built its entire business around the “1-click” concept — which consumers love — and implemented a host of notification and security processes hewing as much as possible to that design choice, but nevertheless taking account of the sorts of issues raised by in-app purchases. Moreover — and perhaps most significantly — it has implemented an innovative and comprehensive parental control regime (including the ability to turn off all in-app purchases) — Kindle Free Time — that arguably goes well beyond anything the FTC required in its Apple consent order.

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges.

Amazon began offering Kindle Free Time in 2012 as an innovative solution to a problem — children’s access to apps and in-app purchases — that affects only a small subset of Amazon’s customers. To dismiss that effort without considering that Amazon might have made a perfectly reasonable judgment that balanced consumer protection and product design disregards the cost-benefit balancing required by Section 5 of the FTC Act.

Moreover, the FTC Act imposes liability only for harms that are not “reasonably avoidable.” Kindle Free Time is an outstanding example of an innovative mechanism that allows consumers at risk of unauthorized purchases by children to “reasonably avoid” harm. The court’s and the FTC’s disregard for it is inconsistent with the statute.

Conclusion

The court’s willingness to credit the FTC’s blackboard design “expertise” (such as it is) in second-guessing user-interface and other design decisions made by firms competing in real markets is unfortunate. But there’s a significant silver lining. By reining in the FTC’s discretion to go after these companies as if they were common fraudsters, the court has given consumers an important victory. After all, it is consumers who otherwise bear the costs (both directly and as a result of reduced risk-taking and innovation) of the FTC’s largely unchecked ability to extract excessive concessions from its enforcement targets.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation the rule was intended to avoid had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency and the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. And even in competition policy, where the Commission frequently uses economics, it’s not clear it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business and social norms counsel skepticism as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — it may even have been magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence, on the one hand, and truly understanding them — and their limitations — on the other.

In his Nielsen/Arbitron dissent Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. Such conduct is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt an enforcement policy statement on unfair methods of competition (UMC) is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Much ink will be spilled at this site lauding Commissioner Joshua (Josh) Wright’s many contributions to the Federal Trade Commission (FTC), and justly so. I will focus narrowly on Josh Wright as a law and economics “provocateur,” who used his writings and speeches to “stir the pot” and subject the FTC’s actions to a law and economics spotlight. In particular, Josh highlighted the importance of decision theory, which teaches that bureaucratic agencies (such as the FTC) are inherently subject to error and high administrative costs, and should adopt procedures and rules of decision accordingly. Thus, to maximize welfare, an agency should adopt “optimal” rules, directed at minimizing the sum of the costs of false positives, false negatives, and administration. In that regard, the FTC should pay particular attention to empirical evidence of actual harm, and not bring cases based on mere theoretical models of possible harm – models that are inherently likely to generate substantial false positives (erroneous predictions of consumer harm) and thereby run counter to a well-run decision-theoretical regime.
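To make that decision-theoretic point concrete, the error-cost framework can be summarized in a single objective function. The sketch below is an illustrative gloss on that literature, using notation of our own rather than anything Wright himself wrote:

% A minimal sketch of the error-cost objective (illustrative notation, not Wright’s own; requires amsmath).
% r         : a candidate liability rule or enforcement policy
% p_I(r)    : probability of a false positive (wrongly condemning benign conduct)
% p_II(r)   : probability of a false negative (wrongly clearing harmful conduct)
% C_I, C_II : social costs of each type of error
% C_A(r)    : administrative cost of operating the rule
\[
  r^{*} \in \operatorname*{arg\,min}_{r} \Big[\, p_{I}(r)\,C_{I} + p_{II}(r)\,C_{II} + C_{A}(r) \,\Big]
\]

On this view, a rule that condemns conduct on a merely theoretical possibility of harm drives up p_I(r) while doing little to reduce p_II(r) – which is precisely why Wright insisted on empirical evidence of actual harm.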

Josh became a Commissioner almost three years ago, so there are many of his writings to comment upon. Fortunately, he is so prolific that a very good understanding of his law and economics approach may be gleaned from a perusal of his 2015 contributions alone. I will selectively focus upon a few representative examples of wisdom drawn from Josh Wright’s (hereinafter JW) 2015 writings, going in reverse chronological order. (A fuller and more detailed exposition of his approach over the years would warrant a long law review article.)

Earlier this month, in commenting on the importance of granting FTC economists (housed in the FTC’s Bureau of Economics (BE)) a greater public role in the framing of FTC decisions, JW homed in on the misuse of consent decrees to impose constraints on private sector behavior without hard evidence of consumer harm:

One [unfortunate] phenomenon is the so‐called “compromise recommendation,” that is, a BE staff economist might recommend the FTC accept a consent decree rather than litigate or challenge a proposed merger when the underlying economic analysis reveals very little actual economic support for liability. In my experience, it is not uncommon for a BE staff analysis to convincingly demonstrate that competitive harm is possible but unlikely, but for BE staff to recommend against litigation on those grounds, but in favor of a consent order. The problem with this compromise approach is, of course, that a recommendation to enter into a consent order must also require economic evidence sufficient to give the Commission reason to believe that competitive harm is likely. . . . [What, then, is the solution?] Requiring BE to make public its economic rationale for supporting or rejecting a consent decree voted out by the Commission could offer a number of benefits at little cost. First, it offers BE a public avenue to communicate its findings to the public. Second, it reinforces the independent nature of the recommendation that BE offers. Third, it breaks the agency monopoly the FTC lawyers currently enjoy in terms of framing a particular matter to the public. The internal leverage BE gains by the ability to publish such a document may increase conflict between bureaus on the margin in close cases, but it will also provide BE a greater role in the consent process and a mechanism to discipline consents that are not supported by sound economics. I believe this would go a long ways towards minimizing the “compromise” recommendation that is most problematic in matters involving consent decrees.

In various writings, JW has cautioned that the FTC should apply an “evidence-based” approach to adjudication, and not lightly presume that particular conduct is anticompetitive – including in the area of patents. JW’s most recent pronouncement regarding an evidence-based approach is found in his July 2015 statement with fellow Commissioner Maureen Ohlhausen filed with the U.S. International Trade Commission (ITC), recommending that the ITC apply an “evidence-based” approach in deciding (on public interest grounds) whether to exclude imports that infringe “standard essential patents” (SEPs):

There is no empirical evidence to support the theory that patent holdup is a common problem in real world markets. The theory that patent holdup is prevalent predicts that the threat of injunction leads to higher prices, reduced output, and lower rates of innovation. These are all testable implications. Contrary to these predictions, the empirical evidence is not consistent with the theory that patent holdup has resulted in a reduction of competition. . . .  An evidence-based approach to the public interest inquiry, i.e., one that requires proof that holdup actually occurred in a particular case, protects incentives to participate in standard setting by allowing SEP holders to seek and obtain exclusion orders when permitted by the SSO agreement at issue and in the absence of a showing of any improper use. In contrast, any proposal that would require the ITC to presume the existence of holdup and shift the burden of proof to SEP holders to show unwillingness threatens to deter participation in standard setting, particularly if an accused infringer can prove willingness simply by agreeing to be bound by terms determined by neutral adjudication.

In such matters as Cephalon (May 2015) and Cardinal Health (April 2015), JW teamed up with Commissioner Ohlhausen to caution that disgorgement of profits as an FTC remedy in competition cases should not be lightly pursued, and indeed should be subject to a policy statement that limits FTC discretion, in order to reduce costly business uncertainty and enforcement error.

JW also brought to bear decision-theoretic insights on consumer protection matters. For example, in his April 2015 dissent in Nomi Technologies, he castigated the FTC for entering into a consent decree when the evidence of consumer harm was exceedingly weak (suggesting a high probability of a false positive, in decision-theoretic terms):

The Commission’s decision to issue a complaint and accept a consent order for public comment in this matter is problematic for both legal and policy reasons. Section 5(b) of the FTC Act requires us, before issuing any complaint, to establish “reason to believe that [a violation has occurred]” and that an enforcement action would “be to the interest of the public.” While the Act does not set forth a separate standard for accepting a consent decree, I believe that threshold should be at least as high as for bringing the initial complaint. The Commission has not met the relatively low “reason to believe” bar because its complaint does not meet the basic requirements of the Commission’s 1983 Deception Policy Statement. Further, the complaint and proposed settlement risk significant harm to consumers by deterring industry participants from adopting business practices that benefit consumers.

Consistent with public choice insights, JW stated in an April 2015 speech that greater emphasis should be placed on public advocacy efforts aimed at opposing government-imposed restraints of trade, which have a greater potential for harm than purely private restraints. Thus, welfare would be enhanced by a reallocation of agency resources toward greater advocacy and less private enforcement:

[P]ublic restraints are especially pernicious for consumers and an especially worthy target for antitrust agencies. I am quite confident that a significant shift of agency resources away from enforcement efforts aimed at taming private restraints of trade and instead toward fighting public restraints would improve consumer welfare.

In March 2015 congressional testimony, JW explained his opposition to Federal Communications Commission (FCC) net neutrality regulation, homing in on the low likelihood of harm from private conduct (and thus implicitly the high risk of costly error and unwarranted regulatory costs) in this area:

Today I will discuss my belief that the FCC’s newest regulation does not make sense from an economic perspective. By this I mean that the FCC’s decision to regulate broadband providers as common carriers under Title II of the Communications Act of 1934 will make consumers of broadband internet service worse off, rather than better off. Central to my conclusion that the FCC’s attempts to regulate so-called “net neutrality” in the broadband industry will ultimately do more harm than good for consumers is that the FCC and commentators have failed to identify a problem worthy of regulation, much less cumbersome public-utility-style regulation under Title II.

At the same time, JW’s testimony explained that in the face of hard evidence of actual consumer harm, the FTC could take – and indeed has taken in several instances – case-specific enforcement action.

Also in March 2015, in his dissent in Par Petroleum, JW further developed the theme that the FTC should not enter into a consent decree unless it has hard evidence of competitive harm – a mere theory does not suffice:

Prior to entering into a consent agreement with the merging parties, the Commission must first find reason to believe that a merger likely will substantially lessen competition under Section 7 of the Clayton Act. The fact that the Commission believes the proposed consent order is costless is not relevant to this determination. A plausible theory may be sufficient to establish the mere possibility of competitive harm, but that theory must be supported by record evidence to establish reason to believe its likelihood. Modern economic analysis supplies a variety of tools to assess rigorously the likelihood of competitive harm. These tools are particularly important where, as here, the conduct underlying the theory of harm – that is, vertical integration – is empirically established to be procompetitive more often than not. Here, to the extent those tools were used, they uncovered evidence that, consistent with the record as a whole, is insufficient to support a reason to believe the proposed transaction is likely to harm competition. Thus, I respectfully dissent and believe the Commission should close the investigation and allow the parties to complete the merger without imposing a remedy.

In a February 2015 speech on the need for greater clarity with respect to “unfair methods of competition” under Section 5 of the FTC Act, JW emphasized the problem of uncertainty generated by the FTC’s failure to adequately define unfair methods of competition:

The lack of institutional commitment to a stable definition of what constitutes an “unfair method of competition” leads to two sources of problematic variation in the agency’s interpretation of Section 5. One is that the agency’s interpretation of the statute in different cases need not be consistent even when the individual Commissioners remain constant. Another is that as the members of the Commission change over time, so does the agency’s Section 5 enforcement policy, leading to wide variations in how the Commission prosecutes “unfair methods of competition” over time. In short, the scope of the Commission’s Section 5 authority today is as broad or as narrow as a majority of commissioners believes it is.

Focusing on the empirical record, JW offered a sharp critique of FTC administrative adjudication (while defending the value of the FTC’s non-adjudicative research function) in another February 2015 speech:

The data show three things with significant implications for those important questions. The first is that, despite modest but important achievements in administrative adjudication, it can offer in its defense only a mediocre substantive record and a dubious one when it comes to process. The second is that the FTC can and does influence antitrust law and competition policy through its unique research-and-reporting function. The third is, as measured by appeal and reversal rates, generalist courts get a fairly bad rap relative to the performance of expert agencies like the FTC.

In the same speech, JW endorsed proposed congressional reforms to the FTC’s exercise of jurisdiction over mergers, embodied in the draft “Standard Merger and Acquisition Reviews Through Equal Rules (SMARTER) Act.” Those reforms include harmonizing the FTC and Justice Department’s preliminary injunction standards, and divesting the FTC of its authority to initiate and pursue administrative challenges to unconsummated mergers, thus requiring the agency to challenge those deals in federal court.

Finally, JW dissented from the FTC’s publication of an FTC staff report (based on an FTC workshop) on the “Internet of Things,” in light of the report’s failure to apply a cost-benefit framework to the recommendations it set forth:

[T]he Commission and our staff must actually engage in a rigorous cost-benefit analysis prior to disseminating best practices or legislative recommendations, given the real world consequences for the consumers we are obligated to protect. Acknowledging in passing, as the Workshop Report does, that various courses of actions related to the Internet of Things may well have some potential costs and benefits does not come close to passing muster as cost-benefit analysis. The Workshop Report does not perform any actual analysis whatsoever to ensure that, or even to give a rough sense of the likelihood that the benefits of the staff’s various proposals exceed their attendant costs.  Instead, the Workshop Report merely relies upon its own assertions and various surveys that are not necessarily representative and, in any event, do not shed much light on actual consumer preferences as revealed by conduct in the marketplace. This is simply not good enough; there is too much at stake for consumers as the Digital Revolution begins to transform their homes, vehicles, and other aspects of daily life. Paying lip service to the obvious fact that the various best practices and proposals discussed in the Workshop Report might have both costs and benefits, without in fact performing such an analysis, does nothing to inform the recommendations made in the Workshop Report.

To conclude, FTC Commissioner Josh Wright went beyond merely emphasizing the application of economic theory to individual FTC cases, by explaining the need to focus economic thinking on FTC policy formulation – in other words, viewing FTC administrative processes and decision-making from an economics-based, decision-theoretical perspective, with hard facts (not mere theory) a key consideration. If the FTC is to be true to its goal of advancing consumer welfare, it should fully adopt such a perspective on a going-forward basis. One may only hope that current and future FTC Commissioners will heed this teaching.

The Federal Trade Commission’s recent enforcement actions against Amazon and Apple raise important questions about the FTC’s consumer protection practices, especially its use of economics. How does the Commission weigh the costs and benefits of its enforcement decisions? How does the agency employ economic analysis in digital consumer protection cases generally?

Join the International Center for Law and Economics and TechFreedom on Thursday, July 31 at the Woolly Mammoth Theatre Company for a lunch and panel discussion on these important issues, featuring FTC Commissioner Joshua Wright, Director of the FTC’s Bureau of Economics Martin Gaynor, and several former FTC officials. RSVP here.

Commissioner Wright will present a keynote address discussing his dissent in Apple and his approach to applying economics in consumer protection cases generally.

Geoffrey Manne, Executive Director of ICLE, will briefly discuss his recent paper on the role of economics in the FTC’s consumer protection enforcement. Berin Szoka, TechFreedom President, will moderate a panel discussion featuring:

  • Martin Gaynor, Director, FTC Bureau of Economics
  • David Balto, Fmr. Deputy Assistant Director for Policy & Coordination, FTC Bureau of Competition
  • Howard Beales, Fmr. Director, FTC Bureau of Consumer Protection
  • James Cooper, Fmr. Acting Director & Fmr. Deputy Director, FTC Office of Policy Planning
  • Pauline Ippolito, Fmr. Acting Director & Fmr. Deputy Director, FTC Bureau of Economics

Background

The FTC recently issued a complaint and consent order against Apple, alleging its in-app purchasing design doesn’t meet the Commission’s standards of fairness. The action and resulting settlement drew a forceful dissent from Commissioner Wright, and sparked a discussion among the Commissioners about balancing economic harms and benefits in Section 5 unfairness jurisprudence. More recently, the FTC brought a similar action against Amazon, which is now pending in federal district court because Amazon refused to settle.

Event Info

The “FTC: Technology and Reform” project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology. The Project’s initial report, released in December 2013, identified critical questions facing the agency, Congress, and the courts about the FTC’s future, and proposed a framework for addressing them.

The event will be live streamed here beginning at 12:15pm. Join the conversation on Twitter with the #FTCReform hashtag.

When:

Thursday, July 31
11:45 am – 12:15 pm — Lunch and registration
12:15 pm – 2:00 pm — Keynote address, paper presentation & panel discussion

Where:

Woolly Mammoth Theatre Company – Rehearsal Hall
641 D St NW
Washington, DC 20004

Questions? – Email mail@techfreedom.org. RSVP here.

See ICLE’s and TechFreedom’s other work on FTC reform, including:

  • Geoffrey Manne’s Congressional testimony on the FTC@100
  • Op-ed by Berin Szoka and Geoffrey Manne, “The Second Century of the Federal Trade Commission”
  • Two posts by Geoffrey Manne on the FTC’s Amazon Complaint, here and here.

About The International Center for Law and Economics:

The International Center for Law and Economics is a non-profit, non-partisan research center aimed at fostering rigorous policy analysis and evidence-based regulation.

About TechFreedom:

TechFreedom is a non-profit, non-partisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.