
The U.S. House this week passed H.R. 2668, the Consumer Protection and Recovery Act (CPRA), which authorizes the Federal Trade Commission (FTC) to seek monetary relief in federal courts for injunctions brought under Section 13(b) of the Federal Trade Commission Act.

Potential relief under the CPRA is comprehensive. It includes “restitution for losses, rescission or reformation of contracts, refund of money, return of property … and disgorgement of any unjust enrichment that a person, partnership, or corporation obtained as a result of the violation that gives rise to the suit.” What’s more, under the CPRA, monetary relief may be obtained for violations that occurred up to 10 years before the filing of the suit in which relief is requested by the FTC.

The Senate should reject the House version of the CPRA. Its monetary-recovery provisions require substantial narrowing if they are to pass cost-benefit muster.

The CPRA is a response to the Supreme Court’s April 22 decision in AMG Capital Management v. FTC, which held that Section 13(b) of the FTC Act does not authorize the commission to obtain court-ordered equitable monetary relief. As I explained in an April 22 Truth on the Market post, Congress’ response to the court’s holding should not be to grant the FTC carte blanche authority to obtain broad monetary exactions for any and all FTC Act violations. I argued that “[i]f Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices.”

Error costs and calculation difficulties counsel against pursuing monetary recovery in FTC unfair methods of competition cases. As I explained in my post:

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I [false positives] error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

These error-cost and calculation difficulties became even more pronounced as of July 1. On that date, the FTC unwisely voted 3-2 to withdraw a bipartisan 2015 policy statement providing that the commission would apply consumer welfare and rule-of-reason (weighing efficiencies against anticompetitive harm) considerations in exercising its unfair methods of competition authority (see my commentary here). This means that, going forward, the FTC will arrogate to itself unbounded discretion to decide what competitive practices are “unfair.” Business uncertainty, and the costly risk aversion it engenders, would be expected to grow enormously if the FTC could extract monies from firms due to competitive behavior deemed “unfair,” based on no discernible neutral principle.

Error costs and calculation problems also strongly suggest that monetary relief in FTC consumer-protection matters should be limited to cases of fraud or clear deception. As I noted:

[M]atters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involv[ing] allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

In short, the Senate should rewrite the bill’s Section 13(b) amendments to authorize FTC monetary recoveries only when consumer fraud or dishonesty is shown.

Finally, the Senate would be wise to sharply pare back the House language that allows the FTC to seek monetary exactions based on conduct that is a decade old. After such a long period of time, serious problems would arise in making accurate factual determinations of economic effects and in calculating specific damages. Allowing retroactive determinations based on a shorter “look-back” period prior to the filing of a complaint (three years, perhaps) would appear to strike a better balance in allowing reasonable redress while controlling error costs.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has informed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that it will be more than 80 companies, but the number is likely to be far higher. While the Klobuchar bill does not explicitly outlaw such mergers, it shifts the burden of proof, under certain circumstances, to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
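To make that size trigger concrete, here is a minimal sketch (in Python) of the burden-shifting test as described above: the burden shifts when the acquiring party’s market capitalization, assets, or annual net revenue exceeds $100 billion and the deal is valued at $50 million or more. The function name and the sample figures are purely illustrative and are not drawn from the bill’s text.

    def burden_shifts(market_cap, assets, net_revenue, deal_value):
        """Illustrative version of the Klobuchar bill's size-based trigger, as
        summarized above: an acquirer whose market cap, assets, or annual net
        revenue exceeds $100 billion faces a shifted burden of proof for deals
        valued at $50 million or more. All figures are in U.S. dollars."""
        SIZE_THRESHOLD = 100e9  # $100 billion
        DEAL_THRESHOLD = 50e6   # $50 million
        big_acquirer = max(market_cap, assets, net_revenue) > SIZE_THRESHOLD
        return big_acquirer and deal_value >= DEAL_THRESHOLD

    # Hypothetical examples (numbers are illustrative only):
    print(burden_shifts(market_cap=110e9, assets=30e9, net_revenue=20e9,
                        deal_value=60e6))  # True: large acquirer, deal over $50 million
    print(burden_shifts(market_cap=40e9, assets=25e9, net_revenue=15e9,
                        deal_value=60e6))  # False: acquirer below every size threshold

Satisfying any one of the three size measures is enough for the presumption to attach, which is why the lists at the end of this post break out companies by market capitalization, assets, and sales separately.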

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms would be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately owned Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what market share Walmart, Costco, Kroger, or Nike holds, or even what constitutes “the” market in which these companies compete. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at crafting market definitions so narrowly that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application of the burden of proof. If the bill passes, we will soon be faced with a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Abbott Laboratories, AbbVie, Adobe Inc., Advanced Micro Devices, Alphabet Inc., Amazon, American Express, American Tower, Amgen, Apple Inc., Applied Materials, AT&T, Bank of America, Berkshire Hathaway, BlackRock, Boeing, Bristol Myers Squibb, Broadcom Inc., Caterpillar Inc., Charles Schwab Corp., Charter Communications, Chevron Corp., Cisco Systems, Citigroup, Comcast, Costco, CVS Health, Danaher Corp., Deere & Co., Eli Lilly and Co., ExxonMobil, Facebook Inc., General Electric Co., Goldman Sachs, Honeywell, IBM, Intel, Intuit, Intuitive Surgical, Johnson & Johnson, JPMorgan Chase, Lockheed Martin, Lowe’s, Mastercard, McDonald’s, Medtronic, Merck & Co., Microsoft, Morgan Stanley, Netflix, NextEra Energy, Nike Inc., Nvidia, Oracle Corp., PayPal, PepsiCo, Pfizer, Philip Morris International, Procter & Gamble, Qualcomm, Raytheon Technologies, Salesforce, ServiceNow, Square Inc., Starbucks, Target Corp., Tesla Inc., Texas Instruments, The Coca-Cola Co., The Estée Lauder Cos., The Home Depot, The Walt Disney Co., Thermo Fisher Scientific, T-Mobile US, Union Pacific Corp., United Parcel Service, UnitedHealth Group, Verizon Communications, Visa Inc., Walmart, Wells Fargo, and Zoom Video Communications.

Publicly traded companies with more than $100 billion in current assets

Ally Financial, American International Group, BNY Mellon, Capital One, Citizens Financial Group, Fannie Mae, Fifth Third Bank, First Republic Bank, Ford Motor Co., Freddie Mac, KeyBank, M&T Bank, Northern Trust, PNC Financial Services, Regions Financial Corp., State Street Corp., Truist Financial, and U.S. Bancorp.

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Anthem, Cardinal Health, Centene Corp., Cigna, Dell Technologies, General Motors, Kroger, McKesson Corp., and Walgreens Boots Alliance.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations on their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists took to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding gives cause for questioning. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by an ideological logjam.

The issue is rather one of haste. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historic highs, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones under discussion, are likely to confront, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building on first principles, not of creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has advanced considerably in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertising, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, such as in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often far more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way imply that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subjected to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least as exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its leading story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricy but less-targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not seriously consider the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though it is hard to know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

As should now be clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And unlike what the popular coverage suggests, the recent District Court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hastening to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and to impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty, in addition to the public and private resources wasted on litigation.

Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication. 

The Risks of Antitrust by Anecdote

The failure of a well-resourced federal regulator, and more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers posed by a conclusory approach toward antitrust enforcement that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates to support consideration of such far-reaching interventions. The dangers to the rule of law are self-evident. Irrespective of one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of potential evidence of competitive harm. 

The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s brief does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.  

While the point may seem uncontroversial, it is important to re-appreciate why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that provides the market with a basis for reasonably anticipating the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and can often yield cost savings that ultimately redound to consumers’ benefit. That is: firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that cannot sufficiently define the relevant market and plausibly show market power, antitrust litigation could easily be used as a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words: antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.

The Specter of No-Fault Antitrust Liability

The absence of any specific demonstration of market power suggests deficient lawyering or the inability to gather supporting evidence. Giving the FTC litigation team the benefit of the doubt, the latter becomes the stronger possibility. If that is the case, this implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions for firms that cross a certain revenue or market capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it has market power and has secured or maintained such power due to anticompetitive practices, rather than business prowess. This would effectively convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention. Such an approach runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.  

Even at the high-water point of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected. This was for good reason. Any such rule would likely injure consumers by precluding them from enjoying the cost savings that result from the “sweet spot” scenario in which the scale and scope economies of large firms are combined with sufficiently competitive conditions to yield reduced prices and increased convenience for consumers. Additionally, any such rule would eliminate incumbents’ incentives to work harder to offer consumers reduced prices and increased convenience, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.

Remembering Why Market Power Matters

To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not potentially engaged in anticompetitive practices.  The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than being assumed or “proved” based largely on suggestive anecdotes.  

Perhaps market power will be shown sufficiently in Facebook’s case if the FTC elects to respond to the court’s invitation to resubmit its brief with a plausible definition of the relevant market and indication of market power at this stage of the litigation. If that threshold is satisfied, then thorough consideration of the allegedly anticompetitive effect of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in relying on the merger-review process under the Hart-Scott-Rodino Act, the burden of proof on the government should be appropriately enhanced to reflect the significant time that has elapsed since regulatory decisions not to intervene in those transactions.  

It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now leads much of the legislative and regulatory discussion on antitrust policy, it is imperative to reiterate the rationale behind this elementary principle. 

This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms or multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat such an exercise as an anachronistic nuisance. Antitrust cases are inherently challenging and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.

There is little doubt that Federal Trade Commission (FTC) unfair methods of competition rulemaking proceedings are in the offing. Newly named FTC Chair Lina Khan and Commissioner Rohit Chopra both have extolled the benefits of competition rulemaking in a major law review article. What’s more, in May, Commissioner Rebecca Slaughter (during her stint as acting chair) established a rulemaking unit in the commission’s Office of General Counsel empowered to “explore new rulemakings to prohibit unfair or deceptive practices and unfair methods of competition” (emphasis added).

In short, a majority of sitting FTC commissioners apparently endorse competition rulemaking proceedings. As such, it is timely to ask whether FTC competition rules would promote consumer welfare, the paramount goal of competition policy.

In a recently published Mercatus Center research paper, I assess the case for competition rulemaking from a competition perspective and find it wanting. I conclude that, before proceeding, the FTC should carefully consider whether such rulemakings would be cost-beneficial. I explain that any cost-benefit appraisal should weigh both the legal risks and the potential economic policy concerns (error costs and “rule of law” harms). Based on these considerations, competition rulemaking is inappropriate. The FTC should stick with antitrust enforcement as its primary tool for strengthening the competitive process and thereby promoting consumer welfare.

A summary of my paper follows.

Section 6(g) of the original Federal Trade Commission Act authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter.” Section 6(g) rules are enacted pursuant to the “informal rulemaking” requirements of Section 553 of the Administrative Procedure Act (APA), which apply to the vast majority of federal agency rulemaking proceedings.

Before launching Section 6(g) competition rulemakings, however, the FTC would be well-advised first to weigh the legal risks and policy concerns associated with such an endeavor. Rulemakings are resource-intensive proceedings and should not lightly be undertaken without an eye to their feasibility and implications for FTC enforcement policy.

Only one appeals court decision addresses the scope of Section 6(g) rulemaking. In 1971, the FTC enacted a Section 6(g) rule stating that it was both an “unfair method of competition” and an “unfair act or practice” for refiners or others who sell to gasoline retailers “to fail to disclose clearly and conspicuously in a permanent manner on the pumps the minimum octane number or numbers of the motor gasoline being dispensed.” In 1973, in the National Petroleum Refiners case, the U.S. Court of Appeals for the D.C. Circuit upheld the FTC’s authority to promulgate this and other binding substantive rules. The court rejected the argument that Section 6(g) authorized only non-substantive regulations concerning the FTC’s non-adjudicatory, investigative, and informative functions, spelled out elsewhere in Section 6.

In 1975, two years after National Petroleum Refiners was decided, Congress granted the FTC specific consumer-protection rulemaking authority (authorizing enactment of trade regulation rules dealing with unfair or deceptive acts or practices) through Section 202 of the Magnuson-Moss Warranty Act, which added Section 18 to the FTC Act. Magnuson-Moss rulemakings impose adjudicatory-type hearings and other specific requirements on the FTC, unlike more flexible section 6(g) APA informal rulemakings. However, the FTC can obtain civil penalties for violation of Magnuson-Moss rules, something it cannot do if 6(g) rules are violated.

In a recent set of public comments filed with the FTC, the Antitrust Section of the American Bar Association stated:

[T]he Commission’s [6(g)] rulemaking authority is buried within an enumerated list of investigative powers, such as the power to require reports from corporations and partnerships, for example. Furthermore, the [FTC] Act fails to provide any sanctions for violating any rule adopted pursuant to Section 6(g). These two features strongly suggest that Congress did not intend to give the agency substantive rulemaking powers when it passed the Federal Trade Commission Act.

Rephrased, this argument suggests that the structure of the FTC Act indicates that the rulemaking referenced in Section 6(g) is best understood as an aid to FTC processes and investigations, not a source of substantive policymaking. Although the National Petroleum Refiners decision rejected such a reading, that ruling came at a time of significant judicial deference to federal agency activism, and may be dated.

The U.S. Supreme Court’s April 2021 decision in AMG Capital Management v. FTC further bolsters the “statutory structure” argument that Section 6(g) does not authorize substantive rulemaking. In AMG, the U.S. Supreme Court unanimously held that Section 13(b) of the FTC Act, which empowers the FTC to seek a “permanent injunction” to restrain an FTC Act violation, does not authorize the FTC to seek monetary relief from wrongdoers. The court’s opinion rejected the FTC’s argument that the term “permanent injunction” had historically been understood to include monetary relief. The court explained that the injunctive language was “buried” in a lengthy provision that focuses on injunctive, not monetary relief (note that the term “rules” is similarly “buried” within 6(g) language dealing with unrelated issues). The court also pointed to the structure of the FTC Act, with detailed and specific monetary-relief provisions found in Sections 5(l) and 19, as “confirm[ing] the conclusion” that Section 13(b) does not grant monetary relief.

By analogy, a court could point to Congress’ detailed enumeration of substantive rulemaking provisions in Section 18 (a mere two years after National Petroleum Refiners) as cutting against the claim that Section 6(g) can also be invoked to support substantive rulemaking. Finally, the Supreme Court in AMG flatly rejected several relatively recent appeals court decisions that upheld Section 13(b) monetary-relief authority. It follows that the FTC cannot confidently rely on judicial precedent (stemming from one arguably dated court decision, National Petroleum Refiners) to uphold its competition rulemaking authority.

In sum, the FTC will have to overcome serious fundamental legal challenges to its section 6(g) competition rulemaking authority if it seeks to promulgate competition rules.

Even if the FTC’s 6(g) authority is upheld, it faces three other types of litigation-related risks.

First, applying the nondelegation doctrine, courts might hold that the broad term “unfair methods of competition” does not provide an “intelligible principle” to guide the FTC’s exercise of discretion in rulemaking. Such a judicial holding would mean the FTC could not issue competition rules.

Second, a reviewing court might strike down individual proposed rules as “arbitrary and capricious” if, say, the court found that the FTC rulemaking record did not sufficiently take into account potentially procompetitive manifestations of a condemned practice.

Third, even if a final competition rule passes initial legal muster, applying its terms to individual businesses charged with rule violations may prove difficult. Individual businesses may seek to structure their conduct to evade the particular strictures of a rule, and changes in commercial practices may render less common the specific acts targeted by a rule’s language.

Economic Policy Concerns Raised by Competition Rulemaking

In addition to legal risks, any cost-benefit appraisal of FTC competition rulemaking should consider the economic policy concerns raised by competition rulemaking. These fall into two broad categories.

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules.

Conclusion

A combination of legal risks and economic policy harms strongly counsels against the FTC’s promulgation of substantive competition rules.

First, litigation issues would consume FTC resources and add to the costly delays inherent in developing competition rules in the first place. The compounding of separate serious litigation risks suggests a significant probability that costs would be incurred in support of rules that ultimately would fail to be applied.

Second, even assuming competition rules were to be upheld, their application would raise serious economic policy questions. The inherent inflexibility of rule-based norms is ill-suited to deal with dynamic evolving market conditions, compared with matter-specific antitrust litigation that flexibly applies the latest economic thinking to particular circumstances. New competition rules would also exacerbate costly policy inconsistencies stemming from the existence of dual federal antitrust enforcement agencies, the FTC and the Justice Department.

In conclusion, an evaluation of rule-related legal risks and economic policy concerns demonstrates that a reallocation of some FTC enforcement resources to the development of competition rules would not be cost-effective. Continued sole reliance on case-by-case antitrust litigation would generate greater economic welfare than a mixture of litigation and competition rules.

The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Markets Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”

To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:   

  1. What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
  2. What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
  3. In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
  4. How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
  5. What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
  6. What types of remedies would work in the cases to which those theories are applied?
  7. What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?

My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:

Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers. 

While we leave it to interested readers to review our specific comments, this commentary highlights one key issue which we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.   

Discussion

Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.

Regrettably, however, though merger-enforcement policy has been generally sound, it has somewhat undervalued merger-related efficiencies.

Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.

Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.

While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:

More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.

Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.

With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.

Conclusion

U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, by placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource reallocation and innovation-related efficiencies.


Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Her longstanding criticisms of the Consumer Welfare Standard and alignment with the neo-Brandeisean school of thought make her appointment a significant achievement for proponents of those viewpoints. 

Her appointment also comes as House Democrats are preparing to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. This expansion may combine with Khan’s appointment in ways that lawmakers considering the bills have not yet considered.

This is a critical time for the FTC. It has lost a number of high-profile lawsuits and is preparing to expand its rulemaking powers to regulate things like employment contracts and businesses’ use of data. Khan has also argued in favor of additional rulemaking powers around “unfair methods of competition.”

As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee. 

The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms’ ability to offer richer, more integrated services, since those integrations could be challenged as “discrimination” that comes at the expense of would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even including links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.

In fact, the bill’s scope is so broad that some have argued that the FTC simply would not challenge “innocuous self-preferencing” like, say, Apple pre-installing Apple Music on iPhones. Economist Hal Singer has defended the proposals on the grounds that, “Due to limited resources, not all platform integration will be challenged.” 

But this shifts the focus to the FTC itself, and implies that it would have potentially enormous discretionary power under these proposals to enforce the law selectively. 

Companies found guilty of breaching the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially significant sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and a power it may regain by other means—there are two scenarios through which the commission could end up getting extraordinarily extensive control over the platforms covered by the bill.

The first course is through selective enforcement. What Singer above describes as a positive—the fact that enforcers would just let “benign” violations of the law be—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.

This approach is common in countries with weak rule of law. Anti-corruption laws are frequently used to punish opponents of the regime in China, who probably are also corrupt, but are prosecuted because they have challenged the regime in some way. Hong Kong’s National Security law has also been used to target peaceful protestors and critical media thanks to its vague and overly broad drafting. 

Obviously, that’s far more sinister than what we’re talking about here. But these examples highlight how excessively broad laws, applied at the enforcer’s discretion, give the enforcer wide latitude to penalize defendants for other, unrelated things. Or, to quote Jay-Z: “Am I under arrest or should I guess some more? / ‘Well, you was doing 55 in a 54.’”

The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC. 

This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.

Khan, as a critic of the Consumer Welfare Standard, believes that antitrust ought to be used to pursue non-economic objectives, including “the dispersion of political and economic control.” She, and the FTC under her, may wish to use this discretionary power to prosecute firms that she feels are hurting society for unrelated reasons, such as because of political stances they have (or have not) taken.

Khan’s fellow commissioner, Rebecca Kelly Slaughter, has argued that antitrust should be “antiracist”; that “as long as Black-owned businesses and Black consumers are systematically underrepresented and disadvantaged, we know our markets are not fair”; and that the FTC should consider using its existing rulemaking powers to address racist practices. These may be desirable goals, but their application would require contentious value judgments that lawmakers may not want the FTC to make.

Khan herself has been less explicit about the goals she has in mind, but has given some hints. In her essay “The Ideological Roots of America’s Market Power Problem”, Khan highlights approvingly former Associate Justice William O. Douglas’s account of:

“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]

Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.

It is concerning enough that this legislative package would prohibit conduct that is good for consumers, and that actually increases the competition faced by Big Tech firms. Congress should understand that it also gives extensive discretionary powers to an agency intent on using them to pursue broad, political goals. If Khan’s appointment as chair was a surprise, what her FTC does with the new powers given to her by Congress should not be.

Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also prove unpopular with consumers if, for example, they mean that popular features like integrating Maps into relevant Google Search results become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users, that have a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to download each app manually, if indeed Apple were even allowed to pre-install the App Store itself, given that it competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out Geoffrey Manne’s piece “Against the Vertical Discrimination Presumption” and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product development this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to divest the video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company in an IPO or to be acquired by another business. The latter, acquisition, is extremely important: between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by older industries, by preventing tech companies from buying firms that enable them to move into new markets, such as Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing, sometimes substantially, the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services buggier and less reliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is Windows versus iOS: Windows is far more interoperable with third-party software than iOS is, but it tends to be less stable as a result, and many users prefer the closed, stable system.

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Sponsored by Rep. Joe Neguse (D-Colo.), it would replace the current cap of $280,000 for mergers valued at more than $500 million with a new schedule that assesses fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
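
For concreteness, the short Python sketch below encodes the proposed schedule as described in the two paragraphs above. It is purely illustrative: the function name is invented, boundary cases and the annual indexing of thresholds are simplified, and the bill text itself governs the real schedule.

```python
def proposed_filing_fee(deal_value_usd: float) -> int:
    """Illustrative sketch of the bill's proposed merger filing-fee tiers.

    Simplification of the schedule described above; exact boundary treatment
    and annual indexing are governed by the bill, not by this sketch.
    """
    billion = 1_000_000_000
    million = 1_000_000
    if deal_value_usd > 5 * billion:
        return 2_250_000
    if deal_value_usd > 2 * billion:
        return 800_000
    if deal_value_usd > 1 * billion:
        return 400_000
    if deal_value_usd > 500 * million:
        return 250_000
    if deal_value_usd > 161_500_000:
        return 100_000
    return 30_000


# Example: a $6 billion merger would owe $2.25 million, up from today's $280,000 cap.
print(proposed_filing_fee(6 * 1_000_000_000))  # 2250000
```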

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether that funding is actually good or bad depends on how the money is spent at those agencies.

It is hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and making greater efforts to study the effects of the antitrust laws and past cases on the economy. If, instead, it goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to administer whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

Bad Blood at the FTC

Thom Lambert —  9 June 2021

John Carreyrou’s marvelous book Bad Blood chronicles the rise and fall of Theranos, the one-time Silicon Valley darling that was revealed to be a house of cards.[1] Theranos’s Svengali-like founder, Elizabeth Holmes, convinced scores of savvy business people (mainly older men) that her company was developing a machine that could detect all manner of maladies from a small quantity of a patient’s blood. Turns out it was a fraud. 

I had a couple of recurring thoughts as I read Bad Blood. First, I kept thinking about how Holmes’s fraud might impair future medical innovation. Something like Theranos’s machine would eventually be developed, I figured, but Holmes’s fraud would likely set things back by making investors leery of blood-based, multi-disease diagnostics.

I also had a thought about the causes of Theranos’s spectacular failure. A key problem, it seemed, was that the company tried to do too many things at once: develop diagnostic technologies, design an elegant machine (Holmes was obsessed with Steve Jobs and insisted that Theranos’s machine resemble a sleek Apple device), market the product, obtain regulatory approval, scale the operation by getting Theranos machines in retail chains like Safeway and Walgreens, and secure third-party payment from insurers.

A thought that didn’t occur to me while reading Bad Blood was that a multi-disease blood diagnostic system would soon be developed but would be delayed, or possibly even precluded from getting to market, by an antitrust enforcement action based on things the developers did to avoid the very problems that doomed Theranos. 

Sadly, that’s where we are with the Federal Trade Commission’s misguided challenge to the merger of Illumina and Grail.

Founded in 1998, San Diego-based Illumina is a leading provider of products used in genetic sequencing and genomic analysis. Illumina produces “next generation sequencing” (NGS) platforms that are used for a wide array of applications (genetic tests, etc.) developed by itself and other companies.

In 2015, Illumina founded Grail for the purpose of developing a blood test that could detect cancer in asymptomatic individuals—the “holy grail” of cancer diagnosis. Given the superior efficacy and lower cost of treatments for early- versus late-stage cancers, success by Grail could save millions of lives and billions of dollars.

Illumina created Grail as a separate entity in which it initially held a controlling interest (having provided the bulk of Grail’s $100 million Series A funding). Legally separating Grail in this fashion, rather than running it as an Illumina division, offered a number of benefits. It limited Illumina’s liability for Grail’s activities, enabling Grail to take greater risks. It mitigated the Theranos problem of managers’ being distracted by too many tasks: Grail managers could concentrate exclusively on developing a viable cancer-screening test, while Illumina’s management continued focusing on that company’s core business. It made it easier for Grail to attract talented managers, who would rather come in as corporate officers than as division heads. (Indeed, Grail landed Jeff Huber, a high-profile Google executive, as its initial CEO.) Structuring Grail as a majority-owned subsidiary also allowed Illumina to attract outside capital, with the prospect of raising more money in the future by selling new Grail stock to investors.

In 2017, Grail did exactly that, issuing new shares to investors in exchange for $1 billion. While this capital infusion enabled the company to move forward with its promising technologies, the creation of new shares meant that Illumina no longer held a controlling interest in the firm. Its ownership interest dipped below 20 percent and now stands at about 14.5 percent of Grail’s voting shares.  

Setting up Grail so as to facilitate outside capital formation and attract top managers who could focus single-mindedly on product development has paid off. Grail has now developed a blood test that, when processed on Illumina’s NGS platform, can accurately detect a number of cancers in asymptomatic individuals. Grail predicts that this “liquid biopsy,” called Galleri, will eventually be able to detect up to 50 cancers before physical symptoms manifest. Grail is also developing other blood-based cancer tests, including one that confirms cancer diagnoses in patients suspected to have cancer and another designed to detect cancer recurrence in patients who have undergone treatment.

Grail now faces a host of new challenges. In addition to continuing to develop its tests, Grail needs to:  

  • Engage in widespread testing of its cancer-detection products on up to 50 different cancers;
  • Process and present the information from its extensive testing in formats that will be acceptable to regulators;
  • Navigate the pre-market regulatory approval process in different countries across the globe;
  • Secure commitments from third-party payors (governments and private insurers) to provide coverage for its tests;
  • Develop means of manufacturing its products at scale;
  • Create and implement measures to ensure compliance with FDA’s Quality System Regulation (QSR), which governs virtually all aspects of medical device production (design, testing, production, process controls, quality assurance, labeling, packaging, handling, storage, distribution, installation, servicing, and shipping); and
  • Market its tests to hospitals and health-care professionals.

These steps are all required to secure widespread use of Grail’s tests. And, importantly, such widespread use will actually improve the quality of the tests. Grail’s tests analyze the DNA in a patient’s blood to look for methylation patterns that are known to be associated with cancer. In essence, the tests work by comparing the methylation patterns in a test subject’s DNA against a database of genomic data collected from large clinical studies. With enough comparison data, the tests can indicate not only the presence of cancer but also where in the body the cancer signal is coming from. And because Grail’s tests use machine learning to hone their algorithms in response to new data collected from test usage, the greater the use of Grail’s tests, the more accurate, sensitive, and comprehensive they become.     
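
To make the data-scale point concrete, and only that point, here is a toy sketch. It uses entirely synthetic “methylation fraction” features and an off-the-shelf logistic-regression classifier; it is not Grail’s actual methodology, and every name, parameter, and number in it is invented for illustration. What it demonstrates is the general pattern described above: holding the task fixed, a classifier’s accuracy on new cases tends to improve as the pool of training data grows.

```python
# Toy illustration only: synthetic "methylation" features, generic classifier.
# Not real genomic data and not Grail's method; all parameters are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N_SITES = 200
INFORMATIVE = rng.random(N_SITES) < 0.1   # the same 10% of sites carry signal in every sample

def simulate_samples(n):
    """Simulate n subjects; 'positive' subjects show a small methylation shift at informative sites."""
    y = rng.integers(0, 2, size=n)               # 0 = no cancer signal, 1 = cancer signal
    base = rng.beta(2, 5, size=(n, N_SITES))     # baseline methylation fractions
    shift = 0.08 * y[:, None] * INFORMATIVE      # shift only at informative sites when y == 1
    return np.clip(base + shift, 0, 1), y

X_test, y_test = simulate_samples(2000)

for n_train in (100, 1_000, 10_000):
    X_train, y_train = simulate_samples(n_train)
    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"training samples: {n_train:>6}   test accuracy: {acc:.2f}")
```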

To assist with the various tasks needed to achieve speedy and widespread use of its tests, Grail decided to reunite with Illumina. In September 2020, the companies entered a merger agreement under which Illumina would acquire the 85.5 percent of Grail voting shares it does not already own for cash and stock worth $7.1 billion and additional contingent payments of $1.2 billion to Grail’s non-Illumina shareholders.

Recombining with Illumina will allow Grail—which has appropriately focused heretofore solely on product development—to accomplish the tasks now required to get its tests to market. Illumina has substantial laboratory capacity that Grail can access to complete the testing needed to refine its products and establish their effectiveness. As the leading global producer of NGS platforms, Illumina has unparalleled experience in navigating the regulatory process for NGS-related products, producing and marketing those products at scale, and maintaining compliance with complex regulations like FDA’s QSR. With nearly 3,000 international employees located in 26 countries, it has obtained regulatory authorizations for NGS-based tests in more than 50 jurisdictions around the world.  It also has long-standing relationships with third-party payors, health systems, and laboratory customers. Grail, by contrast, has never obtained FDA approval for any products, has never manufactured NGS-based tests at scale, has only a fledgling regulatory affairs team, and has far less extensive contacts with potential payors and customers. By remaining focused on its key objective (unlike Theranos), Grail has achieved product-development success. Recombining with Illumina will now enable it, expeditiously and efficiently, to deploy its products across the globe, generating user data that will help improve the products going forward.

In addition to these benefits, the combination of Illumina and Grail will eliminate a problem that occurs when producers of complementary products each operate in markets that are not fully competitive: double marginalization. When sellers of products that are used together each possess some market power due to a lack of competition, their uncoordinated pricing decisions may result in less surplus for each of them and for consumers of their products. Combining so that they can coordinate pricing will leave them and their customers better off.

Unlike a producer participating in a competitive market, a producer that faces little competition can enhance its profits by raising its price above its incremental cost.[2] But there are limits on its ability to do so. As the well-known monopoly pricing model shows, even a monopolist has a “profit-maximizing price” beyond which any incremental price increase would lose money.[3] Raising price above that level would hurt both consumers and the monopolist.

When consumers are deciding whether to purchase products that must be used together, they assess the final price of the overall bundle. This means that when two sellers of complementary products both have market power, there is an above-cost, profit-maximizing combined price for their products. If the complement sellers individually raise their prices so that the combined price exceeds that level, they will reduce their own aggregate welfare and that of their customers.

This unfortunate situation is likely to occur when market power-possessing complement producers are separate companies that cannot coordinate their pricing. In setting its individual price, each separate firm will attempt to capture as much surplus for itself as possible. This will cause the combined price to rise above the profit-maximizing level. If they could unite, the complement sellers would coordinate their prices so that the combined price was lower and the sellers’ aggregate profits higher.

Here, Grail and Illumina provide complementary products (cancer-detection tests and the NGS platforms on which they are processed), and each faces little competition. If they price separately, their aggregate prices are likely to exceed the profit-maximizing combined price for the cancer test and NGS platform access. If they combine into a single firm, that firm would maximize its profits by lowering prices so that the aggregate test/platform price is the profit-maximizing combined price.  This would obviously benefit consumers.
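
A stylized numerical example may help fix ideas. The sketch below is my own illustration, using an assumed linear demand curve and made-up cost figures, not estimates for Illumina, Grail, or their products. It compares uncoordinated pricing by two complement sellers, each setting its price as a best response to the other’s, with pricing by a combined firm, and shows the combined firm charging a lower total price while earning a higher joint profit.

```python
# Two complement sellers face linear demand Q = a - b * (p1 + p2).
# Purely illustrative parameters; not estimates for any real firm.
a, b = 100.0, 1.0        # demand intercept and slope
c1, c2 = 10.0, 10.0      # each seller's marginal cost

# Uncoordinated pricing: iterate each firm's best response to the other's price.
# Firm i maximizes (p_i - c_i) * (a - b * (p_i + p_j)), so p_i = (a/b - p_j + c_i) / 2.
p1 = p2 = 0.0
for _ in range(1000):
    p1 = (a / b - p2 + c1) / 2
    p2 = (a / b - p1 + c2) / 2

q_separate = a - b * (p1 + p2)
profit_separate = (p1 - c1) * q_separate + (p2 - c2) * q_separate

# Combined firm: maximizes (P - c1 - c2) * (a - b * P), so P = (a/b + c1 + c2) / 2.
P_combined = (a / b + c1 + c2) / 2
q_combined = a - b * P_combined
profit_combined = (P_combined - c1 - c2) * q_combined

print(f"separate pricing: total price {p1 + p2:.1f}, joint profit {profit_separate:.0f}")
print(f"combined pricing: total price {P_combined:.1f}, joint profit {profit_combined:.0f}")
# separate pricing: total price 73.3, joint profit 1422
# combined pricing: total price 60.0, joint profit 1600
```

In this illustration, coordination lowers the combined price from roughly 73 to 60 while raising joint profit, which is precisely the double-marginalization point: both the sellers and their customers come out ahead.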

In light of the social benefits the Grail/Illumina merger offers—speeding up and lowering the cost of getting Grail’s test approved and deployed at scale, enabling improvement of the test with more extensive user data, eliminating double marginalization—one might expect policymakers to cheer the companies’ recombination. The FTC, however, is trying to block it.  In late March, the commission brought an action claiming that the merger would violate Section 7 of the Clayton Act by substantially reducing competition in a line of commerce.

The FTC’s theory is that recombining Illumina and Grail will impair competition in the market for “multi-cancer early detection” (MCED) tests. The commission asserts that the combined company would have both the opportunity and the motivation to injure rival producers of MCED tests.

The opportunity to do so would stem from the fact that MCED tests must be processed on NGS platforms, which are produced exclusively by Illumina. Illumina could charge Grail’s rivals or their customers higher prices for access to its NGS platforms (or perhaps deny access altogether) and could withhold the technical assistance rivals would need to secure both regulatory approval of their tests and coverage by third-party payors.

But why would Illumina take this tack, given that it would be giving up profits on transactions with producers and users of other MCED tests? The commission asserts that the losses a combined Illumina/Grail would suffer in the NGS platform market would be more than offset by gains stemming from reduced competition in the MCED test market. Thus, the combined company would have a motive, as well as an opportunity, to cause anticompetitive harm.

There are multiple problems with the FTC’s theory. As an initial matter, the market the commission claims will be impaired doesn’t exist. There is no MCED test market for the simple reason that there are no commercializable MCED tests. If allowed to proceed, the Illumina/Grail merger may create such a market by facilitating the approval and deployment of the first MCED test. At present, however, there is no such market, and the chances of one ever emerging will be diminished if the FTC succeeds in blocking the recombination of Illumina and Grail.

Because there is no existing market for MCED tests, the FTC’s claim that a combined Illumina/Grail would have a motivation to injure MCED rivals—potential consumers of Illumina’s NGS platforms—is rank speculation. The commission has no idea what profits Illumina would earn from NGS platform sales related to MCED tests, what profits Grail would earn on its own MCED tests, and how the total profits of the combined company would be affected by impairing opportunities for rival MCED test producers.

In the only relevant market that does exist—the cancer-detection market—there can be no question about the competitive effect of an Illumina/Grail merger: It would enhance competition by speeding the creation of a far superior offering that promises to save lives and substantially reduce health-care costs. 

There is yet another problem with the FTC’s theory of anticompetitive harm. The commission’s concern that a recombined Illumina/Grail would foreclose Grail’s rivals from essential NGS platforms and needed technical assistance is obviated by Illumina’s commitments. Specifically, Illumina has irrevocably offered current and prospective oncology customers 12-year contract terms that would guarantee them the same access to Illumina’s sequencing products that they now enjoy, with no price increase. Indeed, the offered terms obligate Illumina not only to refrain from raising prices but also to lower them by at least 43% by 2025 and to provide regulatory and technical assistance requested by Grail’s potential rivals. Illumina’s continued compliance with its firm offer will be subject to regular audits by an independent auditor.

In the end, then, the FTC’s challenge to the Illumina/Grail merger is unjustified. The initial separation of Grail from Illumina encouraged the managerial focus and capital accumulation needed for successful test development. Recombining the two firms will now expedite and lower the costs of the regulatory approval and commercialization processes, permitting Grail’s tests to be widely used, which will enhance their quality. Bringing Grail’s tests and Illumina’s NGS platforms within a single company will also benefit consumers by eliminating double marginalization. Any foreclosure concerns are entirely speculative and are obviated by Illumina’s contractual commitments.

In light of all these considerations, one wonders why the FTC challenged this merger (and on a 4-0 vote) in the first place. Perhaps it was the populist forces from left and right that are pressuring the commission to generally be more aggressive in policing mergers. Some members of the commission may also worry, legitimately, that if they don’t act aggressively on a vertical merger, Congress will amend the antitrust laws in a deleterious fashion. But the commission has picked a poor target. This particular merger promises tremendous benefit and threatens little harm. The FTC should drop its challenge and encourage its European counterparts to do the same. 


[1] If you don’t have time for Carreyrou’s book (and you should make time if you can), HBO’s Theranos documentary is pretty solid.

[2] This ability is market power.  In a perfectly competitive market, any firm that charges an above-cost price will lose sales to rivals, who will vie for business by lowering their prices down to the level of their cost.

[3] Under the model, this is the price that emerges at the output level where the producer’s marginal revenue equals its marginal cost.

The U.S. Supreme Court’s just-published unanimous decision in AMG Capital Management LLC v. FTC—holding that Section 13(b) of the Federal Trade Commission Act does not authorize the commission to obtain court-ordered equitable monetary relief (such as restitution or disgorgement)—is not surprising. Moreover, by dissipating the cloud of litigation uncertainty that has surrounded the FTC’s recent efforts to seek such relief, the court cleared the way for consideration of targeted congressional legislation to address the issue.

But what should such legislation provide? After briefly summarizing the court’s holding, I will turn to the appropriate standards for optimal FTC consumer redress actions, which inform a welfare-enhancing legislative fix.

The Court’s Opinion

Justice Stephen Breyer’s opinion for the court is straightforward, centering on the structure and history of the FTC Act. Section 13(b) makes no direct reference to monetary relief. Its plain language merely authorizes the FTC to seek a “permanent injunction” in federal court against “any person, partnership, or corporation” that it believes “is violating, or is about to violate, any provision of law” that the commission enforces. In addition, by its terms, Section 13(b) is forward-looking, focusing on relief that is prospective, not retrospective (this cuts against the argument that payments for prior harm may be recouped from wrongdoers).

Furthermore, the FTC Act provisions that specifically authorize conditioned and limited forms of monetary relief (Section 5(l) and Section 19) are in the context of commission cease and desist orders, involving FTC administrative proceedings, unlike Section 13(b) actions that avoid the administrative route. In sum, the court concludes that:

[T]o read §13(b) to mean what it says, as authorizing injunctive but not monetary relief, produces a coherent enforcement scheme: The Commission may obtain monetary relief by first invoking its administrative procedures and then §19’s redress provisions (which include limitations). And the Commission may use §13(b) to obtain injunctive relief while administrative proceedings are foreseen or in progress, or when it seeks only injunctive relief. By contrast, the Commission’s broad reading would allow it to use §13(b) as a substitute for §5 and §19. For the reasons we have just stated, that could not have been Congress’ intent.

The court’s opinion concludes by succinctly rejecting the FTC’s arguments to the contrary.

What Comes Next

The Supreme Court’s decision was anticipated by informed observers. All four sitting FTC Commissioners have already called for a Section 13(b) “legislative fix,” and in an April 20 hearing of the Senate Commerce Committee, Chairwoman Maria Cantwell (D-Wash.) emphasized that, “[w]e have to do everything we can to protect this authority and, if necessary, pass new legislation to do so.”

What, however, should be the contours of such legislation? In considering alternative statutory rules, legislators should keep in mind not only the possible consumer benefits of monetary relief, but the costs of error, as well. Error costs are a ubiquitous element of public law enforcement, and this is particularly true in the case of FTC actions. Ideally, enforcers should seek to minimize the sum of the costs attributable to false positives (type I error), false negatives (type II error), administrative costs, and disincentive costs imposed on third parties, which may also be viewed as a subset of false positives. (See my 2014 piece “A Cost-Benefit Framework for Antitrust Enforcement Policy.”)

Monetary relief is most appropriate in cases where error costs are minimal, and the quantum of harm is relatively easy to measure. This suggests a spectrum of FTC enforcement actions that may be candidates for monetary relief. Ideally, selection of targets for FTC consumer redress actions should be calibrated to yield the highest return to scarce enforcement resources, with an eye to optimal enforcement criteria.

Consider consumer protection enforcement. The strongest cases involve hardcore consumer fraud (where fraudulent purpose is clear and error is almost nil); they best satisfy accuracy in measurement and error-cost criteria. Next along the spectrum are cases of non-fraudulent but unfair or deceptive acts or practices that potentially involve some degree of error. In this category, situations involving easily measurable consumer losses (e.g., systematic failure to deliver particular goods requested or poor quality control yielding shipments of ruined goods) would appear to be the best candidates for monetary relief.

Moving along the spectrum, matters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involving allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

For example, consider assigning a consumer welfare loss number to a patent antitrust settlement that may or may not have delayed entry of a generic drug by some length of time (depending upon the strength of the patent) or to a decision by a drug company to modify a drug slightly just before patent expiration in order to obtain a new patent period (raising questions of valuing potential product improvements). These and other examples suggest that only rarely should the FTC pursue requests for disgorgement or restitution in antitrust cases, if error-cost-centric enforcement criteria are to be honored.

Unfortunately, the FTC currently has nothing to say about when it will seek monetary relief in antitrust matters. Commendably, in 2003, the commission issued a Policy Statement on Monetary Equitable Remedies in Competition Cases specifying that it would only seek monetary relief in “exceptional cases” involving a “[c]lear [v]iolation” of the antitrust laws. Regrettably, in 2012, a majority of the FTC (with Commissioner Maureen Ohlhausen dissenting) withdrew that policy statement and the limitations it imposed. As I concluded in a 2012 article:

This action, which was taken without the benefit of advance notice and public comment, raises troubling questions. By increasing business uncertainty, the withdrawal may substantially chill efficient business practices that are not well understood by enforcers. In addition, it raises the specter of substantial error costs in the FTC’s pursuit of monetary sanctions. In short, it appears to represent a move away from, rather than towards, an economically enlightened antitrust enforcement policy.

In a 2013 speech, then-FTC Commissioner Josh Wright also lamented the withdrawal of the 2003 Statement, and stated that he would limit:

… the FTC’s ability to pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single firm conduct, only if the monopolist’s conduct has no plausible efficiency justification. This latter category would include fraudulent or deceptive conduct, or tortious activity such as burning down a competitor’s plant.

As a practical matter, the FTC does not bring cases of this sort. The DOJ brings naked price-fixing cases and the unilateral conduct cases noted are as scarce as unicorns. Given that fact, Wright’s recommendation may rightly be seen as a rejection of monetary relief in FTC antitrust cases. Based on the previously discussed serious error-cost and measurement problems associated with monetary remedies in FTC antitrust cases, one may also conclude that the Wright approach is right on the money.

Finally, a recent article by former FTC Chairman Tim Muris, Howard Beales, and Benjamin Mundel opined that Section 13(b) should be construed to “limit[] the FTC’s ability to obtain monetary relief to conduct that a reasonable person would know was dishonest or fraudulent.” Although such a statutory reading is now precluded by the Supreme Court’s decision, its incorporation in a new statutory “fix” would appear ideal. It would allow for consumer redress in appropriate cases, while avoiding the likely net welfare losses arising from a more expansive approach to monetary remedies.

Conclusion

The AMG Capital decision is sure to generate legislative proposals to restore the FTC’s ability to secure monetary relief in federal court. If Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices. Giving the FTC carte blanche to obtain financial recoveries in the full spectrum of antitrust and consumer protection cases would spawn uncertainty and could chill a great deal of innovative business behavior, to the ultimate detriment of consumer welfare.



Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
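To see how much this omission matters, consider a rough back-of-the-envelope adjustment. Assuming, purely for illustration, that the 40 percent mobile-app figure applied to all of Yelp’s search traffic, that none of that app traffic involved Google, and that the lawyers’ 92 percent referral share applied only to the remaining browser-based traffic, Google’s share of Yelp’s overall traffic would have been approximately:

$$0.92 \times (1 - 0.40) \approx 0.55$$

That is, roughly 55 percent rather than 92 percent on these stylized assumptions: still a substantial share, but a meaningfully different picture of Yelp’s dependence on Google Search.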

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements under which Google paid Apple, as well as wireless carriers and device manufacturers, to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers.

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rare glimpse into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.