
President Donald Trump has repeatedly called for repeal of Section 230. But while Trump and fellow conservatives decry Big Tech companies for their alleged anti-conservative bias, including at yet more recent hearings, their issue is not actually with Section 230. It’s with the First Amendment. 

Conservatives can’t actually do anything directly about how social media platforms moderate content, because it is the First Amendment that grants those platforms a right to editorial discretion. Even FCC Commissioner Brendan Carr, who strongly opposes “Big Tech censorship,” recognizes this.

By the same token, even if one were to grant that conservatives are right about the bias of moderators at these large social media platforms, it does not follow that removal of Section 230 immunity would alter that bias. In fact, in a world without Section 230 immunity, there still would be no legal cause of action for political bias. 

The truth is that conservatives use Section 230 immunity for leverage over social media platforms. The hope is that, because social media platforms desire the protections of civil immunity for third-party content, they will follow whatever conditions the government puts on their editorial discretion. But the attempt to end-run the First Amendment’s protections is also unconstitutional.

There is no cause of action for political bias by online platforms if we repeal Section 230

Consider the counterfactual: if there were no Section 230 to immunize them from liability, under what law would platforms face a viable cause of action for political bias? Conservative critics never answer this question. Instead, they focus on the irrelevant distinction between publishers and platforms. Or they talk about how Section 230 is a giveaway to Big Tech. But none consider the actual relationship between Section 230 immunity and alleged political bias.

But let’s imagine we’ve done what President Trump has called for and repealed Section 230. Where does that leave conservatives?

Unfortunately, it leaves them without any cause of action. There is no law passed by Congress or any state legislature, no regulation promulgated by the Federal Communications Commission or the Federal Trade Commission, no common law tort action that can be asserted against online platforms to force them to carry speech they don’t wish to carry. 

The difficulties of pursuing a contract claim for political bias

The best argument for conservatives is that, without Section 230 immunity, online platforms could be more easily held to any contractual restraints in their terms of service. If a platform promises, for instance, that it will moderate speech in a politically neutral way, a user could make the case that the platform violated its terms of service if it acted with political bias in her particular case.

For the vast majority of users, it is unclear whether there are damages from having a post fact-checked or removed. But for users who share in advertising revenue, the concrete injury from a moderation decision is more obvious. PragerU, for example, has (unsuccessfully) sued Google for being put in Restricted Mode on YouTube, which reduces its reach and advertising revenue. 

Even where there is a concrete injury that gets a case into court, that doesn’t necessarily mean there is a valid contract claim. In PragerU’s case against Google, a California court dismissed contract claims because the YouTube terms of service contract was written to allow the platform to retain discretion over what is published. Specifically, the court found that there can be no implied covenant of good faith and fair dealing where “YouTube reserves the right to remove Content without prior notice” and to “discontinue any aspect of the Service at any time.”

Breach-of-contract claims for moderation practices are highly dependent on what is actually promised in the terms of service. For instance, under Facebook’s TOS the company retains the right “to remove or restrict access to content that is in violation” of its community standards. Facebook does provide a process for users to request further review, but retains the right to remove content. The community standards also give Facebook broad discretion to determine, among other things, what counts as hate speech or false news. It is exceedingly unlikely that a court would ever have a basis to find a contract violation by Facebook if the company can reasonably point to a user’s violation of its terms of service. 

For example, in Ebeid v. Facebook, the U.S. Northern District of California dismissed fraud and breach of contract claims, finding the plaintiff failed to allege what contractual provision Facebook breached, that Facebook retained discretion over what ads would be posted, and that the plaintiff suffered no damages because no money was taken to be spent on the ads. The court also dismissed an implied covenant of good faith and fair dealing claim because Facebook retained the right to “remove or disapprove any post or ad at Facebook’s sole discretion.”

While the conservative critique has been that social media platforms do too much moderation—in the form of politically biased removals, fact-checking, and demonetization—others believe platforms do far too little to restrain bad conduct by users. But as long as social media platforms retain editorial discretion in their terms of service and make no other promises that can be relied upon by their users, there is little basis for a contract claim. 

The First Amendment protects the moderation policies of social media platforms, and there is no way around this

With no reasonable cause of action for political bias under the law, conservatives dangle the threat of making changes to Section 230 immunity that could prove costly to the social media platforms in order to extract concessions from the platforms to alter their practices.

This is why there are no serious efforts to actually repeal Section 230, as President Trump has asked for repeatedly. Instead, several bills propose to amend Section 230, while a rulemaking by the FCC seeks to clarify its meaning. 

But none of these proposed bills would directly affect platforms’ ability to make “biased” moderation decisions. Put simply: the First Amendment protects social media platforms’ editorial discretion. They may set rules to use their platforms, just as any private person may set rules for their own property. If I kick someone off my property for saying racist things, the First Amendment (as well as regular property law) protects my right to do so. Only under extremely limited circumstances can the government change this baseline rule and survive constitutional scrutiny.

Social media platforms’ right to editorial discretion is the same as that enjoyed by newspapers. In Miami Herald Publishing Co. v. Tornillo, the Supreme Court found:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Social media platforms, just like any other property owner, have the right to determine what they want displayed on their property. In other words, Facebook, Google, and Twitter have the right to moderate content on news feeds, search results, and timelines. The attempted constitutional end-run—threatening to remove immunity for third-party content unrelated to political bias, like defamation and other tortious acts, unless social media platforms give up their right to editorial discretion over political speech—is just as unconstitutional as directly imposing “fairness” requirements on social media platforms.

The Supreme Court has held that Congress may not leverage a government benefit to regulate a speech interest outside of the benefit’s scope. This is called the unconstitutional conditions doctrine. It basically delineates the level of regulation the government can undertake through subsidizing behavior. The government can’t condition a government benefit on giving up editorial discretion over political speech.

The point of Section 230 immunity is to remedy the moderator’s dilemma set up by Stratton Oakmont v. Prodigy, which held that if a platform chose to moderate third-party speech at all, it would be liable for what was not removed. Section 230 is not about compelling political neutrality on platforms; such a mandate would be inconsistent with the First Amendment. Civil immunity for third-party speech online is an important benefit for social media platforms because it holds that they are not liable for the acts of third parties, with limited exceptions. Without it, platforms would restrict opportunities for third parties to post out of fear of liability.

In sum, the government may not condition enjoyment of a government benefit upon giving up a constitutionally protected right. Section 230 immunity is a clear government benefit. The right to editorial discretion is clearly protected by the First Amendment. Because the entire point of conservative Section 230 reform efforts is to compel social media platforms to carry speech they otherwise desire to remove, it fails this basic test.

Conclusion

Fundamentally, the conservative push to reform Section 230 in response to the alleged anti-conservative bias of major social media platforms is not about policy. Really, it’s about waging a culture war against the perceived “liberal elites” from Silicon Valley, just as there is an ongoing culture war against perceived “liberal elites” in the mainstream media, Hollywood, and academia. But fighting this culture war is not worth giving up conservative principles of free speech, limited government, and free markets.

The Limits of Rivalry

Kelly Fayne — 2 November 2020
[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Kelly Fayne (Antitrust Associate, Latham & Watkins).]

Nicolas Petit, with Big Tech and the Digital Economy: The Moligopoly Scenario, enters the fray at this moment of peak consternation about big tech platforms to reexamine antitrust’s role as referee.  Amongst calls on the one hand like those in the Majority Staff Report and Recommendation from the Subcommittee on Antitrust (“these firms have too much power, and that power must be reined in and subject to appropriate oversight and enforcement”) and, on the other hand, understandably strong disagreement from the firms targeted, Petit offers a diagnosis: a focus on the protection of rivalry for rivalry’s sake is insufficiently adaptive to the “distinctive features of digital industries, firms, and markets.”

I am left wondering, however, if he’s misdiagnosed the problem – or at least whether the cure he offers would be seen as sufficient by those most vocally asserting that antitrust is failing.  And, of course, I recognize that his objective in writing this book is not to bring harmony to a deeply divided debate, but to offer an improved antitrust framework for navigating big tech.

Petit, in Chapter 5 (“Antitrust in Moligopoly Markets”), says: “So the real question is this: should we abandon, or at least radically alter traditional antitrust principles modeled on rivalry in digital markets? The answer is yes.”  He argues that “protecting rivalry is not perforce socially beneficial in industries with increasing returns to adoption.”  But it is his tethering to the notion of what is “socially beneficial” that creates a challenge.

Petit argues that the function of the current antitrust legal regimes – most significantly the US and EU – is to protect rivalry.  He observes several issues with rivalry when applied as both a test and a remedy for market power.  One of the most valuable insights Petit offers in his impressive work in this book is that tipped markets may not be all that bad.  In fact, when markets exhibit increasing returns to adoption, allowing the winner to take it all (or most) may be more welfare enhancing than trying to do the antitrust equivalent of forcing two magnets to remain apart.  And, assuming all the Schumpeterian dynamics align, he’s right.  Or rather, he’s right if you agree that welfare is the standard by which what is socially beneficial should be measured.  

Spoiler alert: My own view is that antitrust requires an underlying system of measurement, and the best available system is welfare-based. More on this below. 

When it comes to evaluating horizontal mergers, Petit suggests an alternative regime calibrated to handle the unique circumstances that arise in tech deals.  But his new framework remains largely tethered to (or at least based in the intuitions of) a variation of the welfare standard that, for the most part, still underlies modern applications of antitrust laws. So the question becomes, if you alter the means, but leave the ends unchanged, do you get different results?  At least in the  merger context, I’m not so sure.  And if the results are for the most part the same, do we really need an alternative path to achieving them?  Probably not. 

The Petit horizontal merger test (1) applies a non-rebuttable (OMG!) presumption of prohibition on mergers to monopoly by the dominant platform in “tipped markets,” and (2) permits some acquisitions in untipped markets without undue regard to whether the acquiring firm is dominant in another market.  A non-rebuttable presumption, admittedly, elicited heavy-pressure red pen in the margins upon my first read.  Upon further reflection … I still don’t like it. I am, however, somewhat comforted because I suspect that its practical application would land us largely in the same place as current applications of antitrust for at least the vast majority of tech transactions.  And that is because Petit’s presumptive prohibition on mergers in tipped markets doesn’t cancel the fight, it changes the venue.  

The exercise of determining whether or not the market is tipped in effect replicates the exercise of assessing whether the dominant firm has a significant degree of market power, and concludes in the affirmative.  Enforcers around the world already look skeptically at firms with perceived market power when they make horizontal acquisitions (among an already rare group of cases in which such deals are attempted).  I recognize that there is theoretical daylight between Petit’s proposed test and one in which the merging parties are permitted an efficiencies defense, but in practice, the number of deals cleared solely on the basis of countervailing procompetitive efficiencies has historically been small. Thus, the universe of deals swept up in the per se prohibition could easily end up a null set.  (Or at least, I think it should be a null set given how quickly the tech industry evolves and transforms). 

As for the untipped markets, Petit argues that it is “unwarranted to treat firms with monopoly positions in tipped markets more strictly than others when they make indirect entry in untipped markets.”  He further argues that there is “no economic basis to prefer indirect entry by an incumbent firm from a tipped market over entry from (i) a new firm or (ii) an established firm from an untipped market.  Firm type is not determinative of the weight of social welfare brought by a unit of innovation.”  His position is closely aligned with the existing guidance on vertical and conglomerate mergers, including in the recently issued FTC and DOJ Vertical Merger Guidelines, although his discussion contains a far more nuanced perspective on how network effects and the leveraging of market power from one market to another overlay into the vertical merger math.  In the end, however, whether one applies the existing vertical merger approach or the Petit proposal, I hypothesize little divergence in outcomes.  

All of the above notwithstanding, Petit’s endeavor to devise a framework more closely calibrated to the unique features of tech platforms is admirable, as is the care and thoughtfulness he’s taken to the task.  If the audience for this book takes the view that the core principles of economic welfare should underlie antitrust laws and their application, Petit is likely to find it receptive.  While many (me included) may not think a new regime is necessary, the way that he articulates the challenges presented by platforms and evolving technologies is enlightening even for those who think an old approach can learn new tricks.  And, of course, the existing approach has the added benefit of being adaptable to applications outside of tech platforms. 

Still, the purpose of antitrust law is where the far more difficult debate is taking place.  And this is where, as I mentioned above, I think Petit may have misdiagnosed the shortcomings of neo-structuralism (or the neo-Brandeisian school, or Antitrust 2.0, or Hipster Antitrust, and so on). In short, these are frameworks that focus first on the number and size of players in an industry and guard against concentration, even in the absence of a causal link between these structural elements and adverse impact on consumer and/or total welfare. Petit describes neo-structuralism as focusing on rivalry without having “an evaluative premise” (i.e., an explanation for why big = bad).  I’m less sure that it lacks an evaluative premise; rather, I think it might have several (potentially competing) evaluative premises.  

Rivalry indeed has no inherent value; it is good – or perceived as good – as a means to an end.  If that end is consumer welfare, then the limiting principle on when rivalry is achieving its end is whether welfare is enhanced or not.  But many have argued that rivalry could have other potential benefits.  For instance, the Antitrust Subcommittee House Report identifies several potential objectives for competition law: driving innovation and entrepreneurship, privacy, the protection of political and economic liberties, and controlling the influence of private firms over the policymaking process.  Even if we grant that competition could be a means to achieving these ends, the measure of success for competition laws would have to be the degree to which the ends are achieved.  For example, if one argues that competition law should be used to promote privacy, we would measure the success of those laws by whether they do in fact promote privacy, not whether they maintain a certain number of players in an industry.  Although, we should also consider whether competition law really is the most efficient and effective means to those ends. 

Returning again to merger control, in the existing US regime, and under the Petit proposal, a dominant tech platform might be permitted to acquire a large player in an unrelated market assuming there is no augmentation of market power as a result of the interplay between the two and if the deal is, on net, efficiency enhancing.  In simpler terms, if consumers are made better off through lower prices, better services, increased innovation, etc., the deal is permitted to proceed.  Yet, if antitrust were calibrated, e.g., for a primary purpose of disaggregating corporate control over capital to minimize political influence by large firms, you could see the same transaction failing to achieve approval.  If privacy were the primary goal, perhaps certain deals would be blocked if the merging parties are both in possession of detailed consumer data, without regard to their size or the existence of other players in the same space.  

The failure of neo-structuralism (etc.) is, in my view, also likely the basis for its growing popularity.  Petit argues that the flaw is that it promotes rivalry as an end in itself.  I posit instead that neo-structuralism is flawed because it promotes rivalry as a means and is agnostic to the ends.  As a result, people with strongly differing views on the optimal ends of competition law can appear to agree with one another by agreeing on the means and in doing so, promote a competition law framework that risks being untethered and undisciplined.  In the absence of a clearly articulated policy goal – whether it is privacy, or economic equality, or diluting political influence, or even consumer welfare – there is no basis on which to evaluate whether any given competition law is structured or applied optimally.  If rivalry is to be the means by which we implement our policy goals, how do we know when we have enough rivalry, or too little?  We can’t.  

It is on this point that I think there is more work to undertake in a complete critique of the failings of neo-structuralism (and any other neo-isms to come).  In addition to its other merits, welfare maximization gives us a framework to hold the construct and application of competition law accountable.  It is irresponsible to replace a system that has, as Petit puts it, an “evaluative premise” with one that possesses no ends-based framework for evaluation, leaving the law rudderless and susceptible to arbitrary or even selective enforcement.


[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Thomas W. Hazlett (Hugh H. Macaulay Endowed Professor of Economics, John E. Walker Department of Economics, Clemson University).]

(Ed. Note: the following is an excerpt from a piece published by the Chicago Tribune on Oct. 16, 2020. Click here to read the full piece)

No matter your Twitter feed, “vaccines have been one of the greatest public health tools to prevent disease,” as The New York Times explained in January…

Many are terrified that the Food and Drug Administration may hastily authorize injections into hundreds of millions. The FDA and drugmakers are trying to assuage such concerns with enhanced commitments to safety. Nonetheless, fears have been stoked by President Donald Trump’s infomercial-style endorsement of hydroxychloroquine as a COVID-19 remedy, his foolhardy disdain for face masks and campaign rally boasts of a preelection cure.

Yes, politics. But the opposing political push — the demand that new vaccines must be safe at all costs — is itself a dangerous meme, and the strange bedfellow of anti-vaxxer protesters.

Pulitzer Prize-winning journalist Laurie Garrett inadvertently quantifies the problem. In a Sept. 3 article in Foreign Policy, she cited the H1N1 (swine flu) episode in 2009 as “the last mad rush to vaccinate.” Warning that those shots “caused Guillain-Barré syndrome (GBS) paralysis in … 6.2 per 10 million patients who received the vaccine,” she argues that phase 3 trials for COVID-19 vaccines, typically involving just 30,000 people, provide little protection. “There’s no way … we can spot a safety hazard that’s in 1 out of a million, much less 1 out of 10 million, vaccine recipients.” The “safety side,” she told a TV interviewer, “looks insane.”

But, in fact, the “insanity” here is not found in the push for speed or in Garrett’s skepticism about Operation Warp Speed. It lies in a lack of balance between the two. An insufficiently vetted vaccine may cost innocent lives, but so will delaying a vaccine that, on net, saves them…

When promising therapies appear, reducing time to market is often worth the risk — as reflected in a raft of pre-COVID-19 policies, including the FDA’s “emergency use authorizations,” “fast track” drug approvals and “compassionate use” permissions for experimental drugs. In phase 3 trials, independent monitors observe results, and trials may be terminated when pre-specified benefits appear. Patients in the control group become eligible for the treatment instead of the placebo. Larger samples would enhance scientific knowledge, but as probabilities shift regulators act on the reality that the ideal can become the enemy of the good.

Read the full piece at the Chicago Tribune.

Congressman Buck’s “Third Way” report offers a compromise between the House Judiciary Committee’s majority report, which proposes sweeping new regulation of tech companies, and the status quo, which Buck argues is unfair and insufficient. But though Buck rejects many of the majority report’s proposals, what he proposes instead would lead to virtually the same outcome via a slightly longer process. 

The most significant majority proposals that Buck rejects are the structural separation to prevent a company that runs a platform from operating on that platform “in competition with the firms dependent on its infrastructure”, and line-of-business restrictions that would confine tech companies to a small number of markets, to prevent them from preferencing their other products to the detriment of competitors.

Buck rules these out, saying that they are “regulatory in nature [and] invite unforeseen consequences and divert attention away from public interest antitrust enforcement by our antitrust agencies.” He goes on to say that “this proposal is a thinly veiled call to break up Big Tech firms.”

Instead, Buck endorses, either fully or provisionally, measures including revitalising the essential facilities doctrine, imposing data interoperability mandates on platforms, and changing antitrust law to prevent “monopoly leveraging and predatory pricing”. 

Put together, though, these would amount to the same thing that the Democratic majority report proposes: a world where platforms are basically just conduits, regulated to be neutral and open, and where the companies that run them require a regulator’s go-ahead for important decisions — a process that would be just as influenced by lobbying and political considerations, and as insulated from market price signals, as any other regulator’s decisions are.

Revitalizing the essential facilities doctrine

Buck describes proposals to “revitalize the essential facilities doctrine” as “common ground” that warrant further consideration. This would mean that platforms deemed to be “essential facilities” would be required to offer access to their platform to third parties at a “reasonable” price, except in exceptional circumstances. The presumption would be that these platforms were anticompetitively foreclosing third party developers and merchants by either denying them access to their platforms or by charging them “too high” prices. 

This would require the kind of regulatory oversight that Buck says he wants to avoid. He says that “conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules.” But there’s no way to avoid this when the “facility” — and hence its pricing and access rules — changes as frequently as any digital platform does. In practice, digital platforms would have to justify their pricing rules and decisions about exclusion of third parties to courts or a regulator as often as they make those decisions.

If Apple’s App Store were deemed an essential facility such that it is presumed to be foreclosing third party developers any time it rejected their submissions, it would have to submit to regulatory scrutiny of the “reasonableness” of its commercial decisions on, literally, a daily basis.

That would likely require price controls to prevent platforms from using pricing to de facto exclude third parties they did not want to deal with. Adjudication of “fair” pricing by courts is unlikely to be a sustainable solution. Justice Breyer, in Town of Concord v. Boston Edison Co., considered this to be outside the courts’ purview:

[H]ow is a judge or jury to determine a ‘fair price?’ Is it the price charged by other suppliers of the primary product? None exist. Is it the price that competition ‘would have set’ were the primary level not monopolized? How can the court determine this price without examining costs and demands, indeed without acting like a rate-setting regulatory agency, the rate-setting proceedings of which often last for several years? Further, how is the court to decide the proper size of the price ‘gap?’ Must it be large enough for all independent competing firms to make a ‘living profit,’ no matter how inefficient they may be? . . . And how should the court respond when costs or demands change over time, as they inevitably will?

In practice, infrastructure treated as an essential facility is usually subject to pricing control by a regulator. This has its own difficulties. The UK’s energy and water infrastructure is an example. In determining optimal access pricing, regulators must determine the price that weighs competing needs to maximise short-term output, incentivise investment by the infrastructure owner, incentivise innovation and entry by competitors (e.g., local energy grids) and, of course, avoid “excessive” pricing. 

This is a near-impossible task, and the process is often drawn out and subject to challenges even in markets where the infrastructure is relatively simple. It is even less likely that these considerations would be objectively tractable in digital markets.

Treating a service as an essential facility is based on the premise that, absent mandated access, it is impossible to compete with it. But mandating access does not, on its own, prevent it from extracting monopoly rents from consumers; it just means that other companies selling inputs can have their share of the rents. 

So you may end up with two different sets of price controls: on the consumer side, to determine how much monopoly rent can be extracted from consumers, and on the access side, to determine how the monopoly rents are divided.

The UK’s energy market has both, for example. In the case of something like an electricity network, where it may simply not be physically or economically feasible to construct a second, competing network, this might be the least-bad course of action. In such circumstances, consumer-side price regulation might make sense. 

But if a service could, in fact, be competed with by others, treating it as an essential facility may be affirmatively harmful to competition and consumers if it diverts investment and time away from that potential competitor by allowing other companies to acquire some of the incumbent’s rents themselves.

The HJC report assumes that Apple is a monopolist, because, among people who own iPhones, the App Store is the only way to install third-party software. Treating the App Store as an essential facility may mean a ban on Apple charging “excessive prices” to companies like Spotify or Epic that would like to use it, or on Apple blocking them for offering users alternative in-app ways of buying their services.

If it were impossible for users to switch from iPhones, or for app developers to earn revenue through other mechanisms, this logic might be sound. But it would still not change the fact that the App Store platform was able to charge users monopoly prices; it would just mean that Epic and Spotify could capture some of those monopoly rents for themselves. Nice for them, but not for consumers. And since both companies have already grown to be pretty big and profitable with the constraints they object to in place, it seems difficult to argue that they cannot compete; it sounds more like they’d just like a bigger share of the pie.

And, in fact, it is possible to switch away from the iPhone to Android. I have personally switched back and forth several times over the past few years, for example. And so have many others — despite what some claim, it’s really not that hard, especially now that most important data is stored on cloud-based services, and both companies offer an app to switch from the other. Apple also does not act like a monopolist — its Bionic chips are vastly better than any competitor’s and it continues to invest in and develop them.

So in practice, users switching from iPhone to Android if Epic’s games and Spotify’s music are not available constrains Apple, to some extent. If Apple did drive those services permanently off their platform, it would make Android relatively more attractive, and some users would move away — Apple would bear some of the costs of its ecosystem becoming worse. 

Assuming away this kind of competition, as Buck and the majority report do, is implausible. Not only that, but Buck and the majority believe that competition in this market is impossible — no policy or antitrust action could change things, and all that’s left is to regulate the market like it’s an electricity grid. 

Regulating platforms this way also means that they could often face situations where they could not expect to make themselves profitable after building their markets, since they could not control the supply side in order to earn revenues. That would make it harder to build platforms, and weaken competition, especially competition faced by incumbents.

Mandating interoperability

Interoperability mandates, which Buck supports, require platforms to make their products open and interoperable with third party software. If Twitter were required to be interoperable, for example, it would have to provide a mechanism (probably a set of open APIs) by which third party software could tweet and read its feeds, upload photos, send and receive DMs, and so on. 
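To make the idea concrete, here is a minimal sketch (in Python, with every name invented for illustration) of the kind of interface such a mandate might require a service to expose. It uses an in-memory stand-in rather than any real service's API:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    author: str
    text: str


class OpenSocialAPI:
    """Hypothetical open-API surface a mandated-interoperable social
    service might expose to third-party clients. An in-memory toy,
    not a model of any actual service."""

    def __init__(self) -> None:
        self._feed: List[Post] = []

    def post(self, author: str, text: str) -> Post:
        """Publish a post on behalf of a third-party client."""
        p = Post(author, text)
        self._feed.append(p)
        return p

    def read_feed(self) -> List[Post]:
        """Return a copy of the feed for third-party consumption."""
        return list(self._feed)


# A third-party client written purely against the open interface:
api = OpenSocialAPI()
api.post("alice", "hello")
print(len(api.read_feed()))  # → 1
```

Even in this toy, everything the sketch leaves out (authentication, rate limits, versioning of the interface) is exactly where the hard design questions, and so the regulatory discretion, lie.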

Obviously, what interoperability actually involves differs from service to service, and involves decisions about design that are specific to each service. These variations are relevant because they mean interoperability requires discretionary regulation, including about product design, and can’t just be covered by a simple piece of legislation or a court order. 

To give an example: interoperability means a heightened security risk, perhaps from people unwittingly authorising a bad actor to access their private messages. How much is it appropriate to warn users about this, and how tight should your security controls be? It is probably excessive to require that users provide a sworn affidavit with witnesses, and even some written warnings about the risks may be so over the top as to scare off virtually any interested user. But some level of warning and user authentication is appropriate. So how much? 

Similarly, a company that has been required to offer its customers’ data through an API, but doesn’t really want to, can make life miserable for third party services that want to use it. Changing the API without warning, or letting its service drop or slow down, can break other services, and few users will be likely to want to use a third-party service that is unreliable. But some outages are inevitable, and some changes to the API and service are desirable. How do you decide how much?

These are not abstract examples. Open Banking in the UK, which requires interoperability of personal and small business current accounts, is the most developed example of interoperability in the world. It has been cited by Jason Furman, former Chairman of the Council of Economic Advisers, among others, as a model for interoperability in tech. It has faced all of these questions: one bank, for instance, required that customers pass through twelve warning screens to approve a third-party app to access their banking details.

To address problems like this, Open Banking has needed an “implementation entity” to design many of its most important elements. This is a de facto regulator, and it has taken years of difficult design decisions to arrive at Open Banking’s current form. 

Having helped write the UK’s industry review into Open Banking, I am cautiously optimistic about what it might be able to do for banking in Britain, not least because that market is already heavily regulated and lacking in competition. But it has been a huge undertaking, and has related to a relatively narrow set of data (its core is just two different things — the ability to read an account’s balance and transaction history, and the ability to initiate payments) in a sector that is not known for rapidly changing technology. Here, the costs of regulation may be outweighed by the benefits.

I am deeply sceptical that the same would be the case in most digital markets, where products do change rapidly, where new entrants frequently attempt to enter the market (and often succeed), where the security trade-offs are even more difficult to adjudicate, and where the economics are less straightforward, given that many services are provided at least in part because of the access to customer data they provide. 

Even if I am wrong, it is unavoidable that interoperability in digital markets would require an equivalent body to make and implement decisions when trade-offs are involved. This, again, would require a regulator like the UK’s implementation entity, and one that was enormous, given the number and diversity of services that it would have to oversee. And it would likely have to make important and difficult design decisions to which there is no clear answer. 

Banning self-preferencing

Buck’s Third Way would also ban digital platforms from self-preferencing. This typically involves an incumbent that can provide a good more cheaply than its third-party competitors — whether it’s through use of data that those third parties do not have access to, reputational advantages that mean customers will be more likely to use their products, or through scale efficiencies that allow it to provide goods to a larger customer base for a cheaper price. 

Although many people criticise self-preferencing as being unfair on competitors, “self-preferencing” is an inherent part of almost every business. When a company employs its own in-house accountants, cleaners or lawyers, instead of contracting out for them, it is engaged in internal self-preferencing. Any firm that is vertically integrated to any extent, instead of contracting externally for every single ancillary service other than the one it sells in the market, is self-preferencing. Coase’s theory of the firm is all about why this kind of behaviour happens, instead of every worker contracting on the open market for everything they do. His answer is that transaction costs make it cheaper to bring certain business relationships in-house than to contract externally for them. Virtually everyone agrees that this is desirable to some extent.

Nor does it somehow become a problem when the self-preferencing takes place on the consumer product side. Any firm that offers any bundle of products — like a smartphone that can run only the manufacturer’s operating system — is engaged in self-preferencing, because users cannot construct their own bundle with that company’s hardware and another’s operating system. But the efficiency benefits often outweigh the lack of choice.

Self-preferencing in digital platforms occurs, for example, when Google includes relevant Shopping or Maps results at the top of its general Search results, or when Amazon gives its own store-brand products (like the AmazonBasics range) a prominent place in the results listing.

There are good reasons to think that both of these are good for competition and consumer welfare. Google making Shopping results easily visible makes it a stronger competitor to Amazon, and including Maps results when you search for a restaurant just makes it more convenient to get the information you’re looking for.

Amazon sells its own private label products partially because doing so is profitable (even when undercutting rivals), partially to fill holes in product lines (like clothing, where 11% of listings were Amazon private label as of November 2018), and partially because it increases users’ likelihood to use Amazon if they expect to find a reliable product from a brand they trust. According to Amazon, they account for less than 1% of its annual retail sales, in contrast to the 19% of revenues ($54 billion) Amazon makes from third party seller services, which includes Marketplace commissions. Any analysis that ignores that Amazon has to balance those sources of revenue, and so has to tread carefully, is deficient. 

With “commodity” products (like, say, batteries and USB cables), where multiple sellers are offering very similar or identical versions of the same thing, private label competition works well for both Amazon and consumers. By Amazon’s own rules it can enter this market using aggregated data, but this doesn’t give it a significant advantage, because that data is easily obtainable from multiple sources, including Amazon itself, which makes detailed aggregated sales data freely available to third-party retailers.

Amazon does profit from sales of these products, of course. And other merchants suffer by having to cut their prices to compete. That’s precisely what competition involves — competition is incompatible with a quiet life for businesses. But consumers benefit, and the biggest benefit to Amazon is that it assures its potential customers that when they visit they will be able to find a product that is cheap and reliable, so they keep coming back.

It is even hard to argue that in aggregate this practice is damaging to third-party sellers: many, like Anker, have built successful businesses on Amazon despite private-label competition precisely because the value of the platform increases for all parties as user trust and confidence in it does.

In these cases and in others, platforms act to solve market failures on the markets they host, as Andrei Hagiu has argued. To maximize profits, digital platforms need to strike a balance between being an attractive place for third-party merchants to sell their goods and being attractive to consumers by offering low prices. The latter will frequently clash with the former — and that’s the difficulty of managing a platform. 

To mistake this pro-competitive behaviour for an absence of competition is misguided. But that is a key conclusion of Buck’s Third Way: that the damage to competitors makes this behaviour harmful overall, and that it should be curtailed with “non-discrimination” rules. 

Treating below-cost selling as “predatory pricing”

Buck’s report equates below-cost selling with predatory pricing (“predatory pricing, also known as below-cost selling”). This is mistaken. Predatory pricing refers to a particular scenario where your price cut is temporary and designed to drive a competitor out of business, so that you can raise prices later and recoup your losses. 

It is easy to see that this does not describe the vast majority of below-cost selling. Buck’s formulation would describe all of the following as “predatory pricing”:

  • A restaurant that gives away ketchup for free;
  • An online retailer that offers free shipping and returns;
  • A grocery store that sells tins of beans for 3p a can. (This really happened when I was a child.)

The rationale for offering below-cost prices differs in each of these cases. Sometimes it’s a marketing ploy — Tesco sells those beans to get some free media, and to entice people into their stores, hoping they’ll decide to do the rest of their weekly shop there at the same time. Sometimes it’s about reducing frictions — the marginal cost of ketchup is so low that it’s simpler to just give it away. Sometimes it’s about reducing the fixed costs of transactions so more take place — allowing customers who buy your products to return them easily may mean more are willing to buy them overall, because there’s less risk for them if they don’t like what they buy. 

Obviously, none of these is “predatory”: none is done in the expectation that the below-cost selling will drive those businesses’ competitors out of business, allowing them to make monopoly profits later.

True predatory pricing is theoretically possible, but very difficult. As David Henderson describes, to successfully engage in predatory pricing means taking enormous and rising losses that grow for the “predatory” firm as customers switch to it from its competitor. And once the rival firm has exited the market, if the predatory firm raises prices above average cost (i.e., to recoup its losses), there is no guarantee that a new competitor will not enter the market selling at the previously competitive price. And the competing firm can either shut down temporarily or, in some cases, just buy up the “predatory” firm’s discounted goods to resell later. It is debatable whether the canonical predatory pricing case, Standard Oil, is itself even an example of that behaviour.

Offering a product below cost in a multi-sided market (like a digital platform) can be a way of building a customer base in order to incentivise entry on the other side of the market. When network effects exist, so additional users make the service more valuable to existing users, it can be worthwhile to subsidise the initial users until the service reaches a certain size. 

Uber subsidising drivers and riders in a new city is an example of this — riders want enough drivers on the road that they know they’ll be picked up fairly quickly if they order one, and drivers want enough riders that they know they’ll be able to earn a decent night’s fares if they use the app. This requires a certain volume of users on both sides — to get there, it can be in everyone’s interest for the platform to subsidise one or both sides of the market to reach that critical mass.
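A toy numerical illustration of this dynamic (all numbers invented; a sketch of the logic, not a model of Uber's actual economics):

```python
# Toy model (all parameters invented): riders value the service in
# proportion to the number of drivers, and vice versa. The platform
# subsidises both sides until the network is valuable enough to
# sustain itself at an unsubsidised price.

def value(other_side_users: int, per_user_benefit: float = 0.5) -> float:
    """Value of the service to one side, driven by the size of the other."""
    return per_user_benefit * other_side_users

price = 10.0
drivers = riders = 5          # small initial network
subsidy_spent = 0.0

while value(drivers) < price or value(riders) < price:
    # Platform covers the gap between value and price on each side,
    # which attracts another cohort of users to both sides.
    subsidy_spent += (price - value(drivers)) + (price - value(riders))
    drivers += 5
    riders += 5

# Once value(drivers) >= price, the network sustains itself unsubsidised.
print(drivers, riders, round(subsidy_spent, 1))  # → 20 20 30.0
```

The point of the sketch is that the subsidy is front-loaded and finite: each cohort of new users raises the value of the service to everyone else, so the gap the platform must cover shrinks until it disappears at critical mass.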

The slightly longer road to regulation

That is another reason for below-cost pricing: someone other than the user may be part-paying for a product, to build a market they hope to profit from later. Platforms must adjust pricing and their offerings to each side of their market to manage supply and demand. Epic, for example, is trying to build a desktop computer game store to rival the largest incumbent, Steam. To win over customers, it has been giving away games for free to users, who can own them on that store forever. 

That is clearly pro-competitive — Epic is hoping to get users over the habit of using Steam for all their games, in the hope that they will recoup the costs of doing so later in increased sales. And it is good for consumers to get free stuff. This kind of behaviour is very common. As well as Uber and Epic, smaller platforms do it too. 

Buck’s proposals would make this kind of behaviour much more difficult, permitting it only if a regulator or court allows it, rather than if the market can bear it. On both sides of the coin, Buck’s proposals would bar platforms from the behaviour that allows them to grow in the first place — enticing suppliers and consumers and subsidising either side until a critical mass has been reached that allows the platform to sustain itself, and the platform owner to recoup its investments. Fundamentally, both Buck and the majority take the existence of platforms as a given, ignoring the incentives to create new ones and compete with incumbents. 

In doing so, they give up on competition altogether. As described, Buck’s provisions would necessitate ongoing rule-making, including price controls, to work. It is unlikely that a court could do this, since the relevant costs would change too often for one-shot rule-making of the kind a court could do. To be effective at all, Buck’s proposals would require an extensive, active regulator, just as the majority report’s would. 

Buck nominally argues against this sort of outcome — “Conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules” — but it is probably unavoidable, given the changes he proposes. And because the rule changes he proposes would apply to the whole economy, not just tech, his proposals may, perversely, end up being even more extensive and interventionist than the majority’s.

Other than this, the differences in practice between Buck’s proposals and the Democrats’ proposals would be trivial. At best, Buck’s Third Way is just a longer route to the same destination.

One of the key recommendations of the House Judiciary Committee’s antitrust report, one that seems to have bipartisan support (see Rep. Buck’s report), is shifting evidentiary burdens of proof to defendants with “monopoly power.” These recommended changes are aimed at helping antitrust enforcers and private plaintiffs “win” more. The result may well be more convictions, more jury verdicts, more consent decrees, and more settlements, but there is a cost. 

Presumption of illegality for certain classes of defendants unless they can prove otherwise is inconsistent with the American traditions of the presumption of innocence and allowing persons to dispose of their property as they wish. Forcing antitrust defendants to defend themselves from what is effectively a presumption of guilt will create an enormous burden upon them. But this will be felt far beyond just antitrust defendants. Consumers who would have benefited from mergers that are deterred or business conduct that is prevented will have those benefits foregone.

The Presumption of Liberty in American Law

The Presumption of Innocence

There is nothing wrong with presumptions in law as a general matter. For instance, one of the most important presumptions in American law is that criminal defendants are presumed innocent until proven guilty. Prosecutors bear the burden of proof, and must prove guilt beyond a reasonable doubt. Even in the civil context, plaintiffs, whether public or private, have the burden of proving a violation of the law, by the preponderance of the evidence. In either case, the defendant is not required to prove they didn’t violate the law.

Fundamentally, the presumption of innocence is about liberty. As William Blackstone put it in his Commentaries on the Law of England centuries ago: “the law holds that it is better that ten guilty persons escape than that one innocent suffer.” 

In economic terms, society must balance the need to deter bad conduct, however defined, with not deterring good conduct. In a world of uncertainty, this includes the possibility that decision-makers will get it wrong. For instance, if a mere allegation of wrongdoing places the burden upon a defendant to prove his or her innocence, much good conduct would be deterred out of fear of false allegations. In this sense, the presumption of innocence is important: it protects the innocent from allegations of wrongdoing, even if that means in some cases the guilty escape judgment.

Presumptions in Property, Contract, and Corporate Law

Similarly, presumptions in other areas of law protect liberty and weigh against deterring the good in the name of preventing the bad. For instance, the presumption when it comes to how people dispose of their property is that, unless a law says otherwise, they may do as they wish. In other words, there is no presumption that a person may not use their property in the manner they wish. The presumption is liberty, unless a valid law proscribes behavior. The exceptions to this rule typically deal with situations where a use of property could harm someone else. 

In contracts, the right of persons to come to a mutual agreement is the general rule, with rare exceptions. The presumption is in favor of enforcing voluntary agreements. Default rules in the absence of complete contracting supplement these agreements, but even the default rules can be contracted around in most cases.

Bringing the two together, corporate law — essentially the nexus of contract law and property law — allows persons to come together to dispose of property and make contracts, supplying default rules which can be contracted around. The presumption again is that people are free to do as they choose with their own property. The default is never that people can’t create firms to buy or sell or make agreements.

A corollary right of the above is that people may start businesses and deal with others on whatever basis they choose, unless a generally applicable law says otherwise. In fact, they can even buy other businesses. Mergers and acquisitions are generally allowed by the law. 

Presumptions in Antitrust Law

Antitrust is a generally applicable set of laws which proscribe how people can use their property. But even there, the presumption is not that every merger or act by a large company is harmful. 

On the contrary, antitrust laws allow groups of people to dispose of property as they wish unless it can be shown that a firm has “market power” that is likely to be exercised to the detriment of competition or consumers. Plaintiffs, whether public or private, bear the burden of proving all the elements of the antitrust violation alleged.

In particular, antitrust law has incorporated the error cost framework. This framework considers the cost of getting decisions wrong. Much like the presumption of innocence is based on the tradeoff of allowing some guilty persons to go unpunished in order to protect the innocent, the error cost framework notes there is tradeoff between allowing some anticompetitive conduct to go unpunished in order to protect procompetitive conduct. American antitrust law seeks to avoid the condemnation of procompetitive conduct more than it avoids allowing the guilty to escape condemnation. 
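The tradeoff can be put in stylised arithmetic (all rates and costs invented for illustration): if most conduct that reaches the courts is in fact procompetitive, a burden shift that trades false negatives for false positives raises expected error costs.

```python
# Stylised error-cost arithmetic (all numbers invented). Shifting the
# burden of proof lowers the false-negative rate but raises the
# false-positive rate; whether that helps depends on how common each
# kind of conduct is and how costly each kind of error is.

def expected_error_cost(fp_rate, fn_rate, cost_fp, cost_fn,
                        share_procompetitive=0.8):
    """Expected social cost per case: false positives hit the
    procompetitive share, false negatives hit the anticompetitive share."""
    return (share_procompetitive * fp_rate * cost_fp
            + (1 - share_procompetitive) * fn_rate * cost_fn)

# Plaintiff bears the burden: few false positives, more false negatives.
baseline = expected_error_cost(fp_rate=0.05, fn_rate=0.30,
                               cost_fp=100, cost_fn=100)

# Burden shifted onto defendants: fewer false negatives, but many
# more false positives.
shifted = expected_error_cost(fp_rate=0.30, fn_rate=0.05,
                              cost_fp=100, cost_fn=100)

print(baseline, shifted)  # baseline ≈ 10.0, shifted ≈ 25.0
```

Under these invented numbers, the burden shift more than doubles expected error costs, because the errors it multiplies fall on the (assumed larger) pool of procompetitive conduct. The conclusion reverses only if one assumes most challenged conduct is anticompetitive, which is precisely the assumption the error cost framework asks enforcers to justify rather than presume.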

For instance, to prove a merger or acquisition would violate the antitrust laws, a plaintiff must show the transaction will substantially lessen competition. This involves defining the relevant market, showing that the defendant has power over that market, and showing that the transaction would lessen competition. While concentration of the market is an important part of the analysis, antitrust law must consider the effect on consumer welfare as a whole. The law doesn’t simply condemn mergers or acquisitions by large companies just because they are large.

Similarly, to prove a monopolization claim, a plaintiff must establish the defendant has “monopoly power” in the relevant market. But monopoly power isn’t enough. As stated by the Supreme Court in Trinko:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period— is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

The plaintiff must also prove the defendant has engaged in the “willful acquisition or maintenance of [market] power, as distinguished from growth or development as a consequence of a superior product, business acumen, or historical accident.” Antitrust law is careful to avoid mistaken inferences and false condemnations, which are especially costly because they “chill the very conduct antitrust laws are designed to protect.”

The presumption isn’t against mergers or business conduct even when those businesses are large. Antitrust law only condemns mergers or business conduct when it is likely to harm consumers.

How Changing Antitrust Presumptions will Harm Society

In light of all of this, the House Judiciary Committee’s Investigation of Competition in Digital Markets proposes some pretty radical departures from the law’s normal presumption in favor of people disposing of their property as they choose. Unfortunately, the minority report issued by Representative Buck agrees with the recommendations to shift burdens onto antitrust defendants in certain cases.

One of the recommendations from the Subcommittee is that Congress:

“codify[] bright-line rules for merger enforcement, including structural presumptions. Under a structural presumption, mergers resulting in a single firm controlling an outsized market share, or resulting in a significant increase in concentration, would be presumptively prohibited under Section 7 of the Clayton Act. This structural presumption would place the burden of proof upon the merging parties to show that the merger would not reduce competition. A showing that the merger would result in efficiencies should not be sufficient to overcome the presumption that it is anticompetitive. It is the view of Subcommittee staff that the 30% threshold established by the Supreme Court in Philadelphia National Bank is appropriate, although a lower standard for monopsony or buyer power claims may deserve consideration by the Subcommittee. By shifting the burden of proof to the merging parties in cases involving concentrated markets and high market shares, codifying the structural presumption would help promote the efficient allocation of agency resources and increase the likelihood that anticompetitive mergers are blocked.” (emphasis added)

Under this proposal, in cases where concentration meets an arbitrary benchmark based upon the market definition, the presumption will be that the merger is illegal. Defendants will now bear the burden of proof to show the merger won’t reduce competition, without even being permitted to point to efficiencies that could benefit consumers. 

Changing the burden of proof to be against criminal defendants would lead to more convictions of guilty people, but it would also lead to a lot more false convictions of innocent defendants. Similarly, changing the burden of proof to be against antitrust defendants would certainly lead to more condemnations of anticompetitive mergers, but it would also lead to the deterrence of a significant portion of procompetitive mergers.

So yes, if adopted, plaintiffs would likely win more as a result of these proposed changes, including in cases where mergers are anticompetitive. But this does not necessarily mean it would be to the benefit of larger society. 

Antitrust has evolved over time to recognize that concentration alone is not predictive of likely competitive harm in merger analysis. Both the horizontal merger guidelines and the vertical merger guidelines issued by the FTC and DOJ emphasize the importance of fact-specific inquiries into competitive effects, and not just a reliance on concentration statistics. This reflects a long-standing bipartisan consensus. The HJC’s majority report would overturn this consensus by suggesting a return to the structural presumptions that have largely been rejected in antitrust law.

The HJC majority report also calls for changes in presumptions when it comes to monopolization claims. For instance, the report calls on Congress to consider creating a statutory presumption of dominance by a seller with a market share of 30% or more and a presumption of dominance by a buyer with a market share of 25% or more. The report then goes on to suggest overturning a number of precedents dealing with monopolization claims which in their view restricted claims of tying, predatory pricing, refusals to deal, leveraging, and self-preferencing. In particular, they call on Congress to “[c]larify[] that ‘false positives’ (or erroneous enforcement) are not more costly than ‘false negatives’ (erroneous non-enforcement), and that, when relating to conduct or mergers involving dominant firms, ‘false negatives’ are costlier.”

This again turns on its head the ordinary presumptions of innocence and of allowing people to dispose of their property as they see fit. If adopted, defendants would largely have to prove their innocence in monopolization cases if their shares of the market are above a certain threshold. 

Moreover, the report calls for Congress to consider making conduct illegal even if it “can be justified as an improvement for consumers.” It is highly likely that the changes proposed will harm consumer welfare in many cases, as the focus changes from economic efficiency to concentration. 

Conclusion

The HJC report’s recommendations on changing antitrust presumptions should be rejected. The harms will be felt not only by antitrust defendants, who will be much more likely to lose regardless of whether they have violated the law, but by consumers whose welfare is no longer the focus. The result is inconsistent with the American tradition that presumes innocence and the ability of people to dispose of their property as they see fit. 

In the hands of a wise philosopher-king, the Sherman Act’s hard-to-define prohibitions of “restraints of trade” and “monopolization” are tools that will operate inevitably to advance the public interest in competitive markets. In the hands of real-world litigators, regulators and judges, those same words can operate to advance competitors’ private interests in securing commercial advantages through litigation that could not be secured through competition in the marketplace. If successful, this strategy may yield outcomes that run counter to antitrust law’s very purpose.

The antitrust lawsuit filed by Epic Games against Apple in August 2020, and Apple’s antitrust lawsuit against Qualcomm (settled in April 2019), suggest that antitrust law is heading in this unfortunate direction.

From rent-minimization to rent-maximization

The first step in converting antitrust law from an instrument to minimize rents to an instrument to maximize rents lies in expanding the statute’s field of application on the apparently uncontroversial grounds of advancing the public interest in “vigorous” enforcement. In surprisingly short order, this largely unbounded vision of antitrust’s proper scope has become the dominant fashion in policy discussions, at least as expressed by some legislators, regulators, and commentators.

Following the new conventional wisdom, antitrust law has, over the past decades, pursued an overly narrow path, overlooking and thereby exacerbating a panoply of social ills that extend well beyond the mission to “merely” protect the operation of the market pricing mechanism. This line of argument is typically coupled with the assertion that courts, regulators, and scholars have been led down this path by incumbents that welcome the relaxed scrutiny of a purportedly deferential antitrust policy.

This argument, and the related theory of regulatory capture, have things roughly backwards.

Placing antitrust law at the service of a largely undefined range of social purposes set by judicial and regulatory fiat threatens to render antitrust a tool that can be easily deployed to favor the private interests of competitors rather than the public interest in competition. Without the intellectual discipline imposed by the consumer welfare standard (and, outside of per se illegal restraints, operationalized through the evidentiary requirement of competitive harm), the rhetoric of antitrust provides excellent cover for efforts to re-engineer the rules of the game in lieu of seeking to win the game as it has been played.

Epic Games v. Apple

A nascent symptom of this expansive form of antitrust is provided by the much-publicized lawsuit brought by Epic Games, the maker of the wildly popular video game, Fortnite, against Apple, the operator of the even more wildly popular App Store. On August 13, 2020, Epic added a “direct” payment processing services option to its Fortnite game, which violated the developer terms of use that govern the App Store. In response, Apple exercised its contractual right to remove Fortnite from the App Store, triggering Epic’s antitrust suit. The same sequence has ensued between Epic Games and Google in connection with the Google Play Store. Both litigations are best understood as breach-of-contract disputes cloaked in the guise of antitrust causes of action.

In suggesting that a jury trial would be appropriate in Epic Games’ suit against Apple, the district court judge reportedly stated that the case is “on the frontier of antitrust law” and that “[i]t is important enough to understand what real people think.” That statement seems to suggest that this is a close case under antitrust law. I respectfully disagree. Based on currently available information and applicable law, Epic’s argument suffers from two serious vulnerabilities that would seem to be difficult for the plaintiff to overcome.

A contestably narrow market definition

Epic states three related claims: (1) Apple has a monopoly in the relevant market, defined as the App Store, (2) Apple maintains its monopoly by contractually precluding developers from distributing iOS-compatible versions of their apps outside the App Store, and (3) Apple maintains a related monopoly in the payment processing services market for the App Store by contractually requiring developers to use Apple’s processing service.

This market definition, and the associated chain of reasoning, is subject to significant doubt, both as a legal and factual matter.

Epic’s narrow definition of the relevant market as the App Store (rather than app distribution platforms generally) conveniently results in a 100% market share for Apple. Inconveniently, federal courts have generally been reluctant to adopt single-brand market definitions. While the Supreme Court recognized a single-brand market in Eastman Kodak Co. v. Image Technical Services (1992), the case is widely considered to be an outlier in light of subsequent case law. As a federal district court observed in Spahr v. Leegin Creative Leather Products (E.D. Tenn. 2008): “Courts have consistently refused to consider one brand to be a relevant market of its own when the brand competes with other potential substitutes.”

The App Store would seem to fall into this typical category. The customer base of existing and new Fortnite users can still access the game through multiple platforms and on multiple devices other than the iPhone, including a PC, laptop, game console, and non-Apple mobile devices. (While Google has also removed Fortnite from the Google Play Store due to the added direct payment feature, users can, at some inconvenience, access the game manually on Android phones.)

Given these alternative distribution channels, it is at a minimum unclear whether Epic is foreclosed from reaching a substantial portion of its consumer base, which may already access the game on alternative platforms or could potentially do so at moderate incremental transaction costs. In the language of platform economics, it appears to be technologically and economically feasible for the target consumer base to “multi-home.” If multi-homing and related switching costs are low, even a 100% share of the App Store submarket would not translate into market power in the broader and potentially more economically relevant market for app distribution generally.

An implausible theory of platform lock-in

Even if it were conceded that the App Store is the relevant market, Epic’s claim is not especially persuasive as either an economic or a legal matter. That is because there is no evidence that Apple is exploiting any such hypothetically attributed market power to increase the rents extracted from developers and indirectly impose deadweight losses on consumers.

In the classic scenario of platform lock-in, a three-step sequence is observed: (1) a new firm acquires a high market share in a race for platform dominance, (2) the platform winner is protected by network effects and switching costs, and (3) the entrenched platform “exploits” consumers by inflating prices (or imposing other adverse terms) to capture monopoly rents. This economic model is reflected in the case law on lock-in claims, which typically requires that the plaintiff identify an adverse change by the defendant in pricing or other terms after users were allegedly locked in.

The history of the App Store does not conform to this model. Apple has always assessed a 30% fee, and the same is true of every other leading distributor of games for the mobile and PC market, including the Google Play Store, the App Store’s rival in the mobile market, and Steam, the dominant distributor of video games in the PC market. This long-standing market practice suggests that the 30% fee most likely reflects an efficiency-driven business rationale, rather than an effort to entrench a monopoly position that Apple did not enjoy when the practice was first adopted. That is: even if Apple is deemed to be a “monopolist” for Section 2 purposes, it is not taking any “illegitimate” actions that could constitute monopolization or attempted monopolization.

The logic of the 70/30 split

Uncovering the business logic behind the 70/30 split in the app distribution market is not too difficult.

The 30% fee appears to be a low transaction-cost practice that enables the distributor to fund a variety of services, including app development tools, marketing support, and security and privacy protections, all of which are supplied at no separately priced fee and therefore do not require service-by-service negotiation and renegotiation. The same rationale credibly applies to the integrated payment processing services that Apple supplies for purposes of in-app purchases.

These services deliver significant value and would otherwise be difficult to replicate cost-effectively, protect the App Store’s valuable stock of brand capital (which yields positive spillovers for app developers on the site), and lower the costs of joining and participating in the App Store. Additionally, the 30% fee cross-subsidizes the delivery of these services to the approximately 80% of apps on the App Store that are ad-based and for which no fee is assessed, which in turn lowers entry costs and expands the number and variety of product options for platform users. These would all seem to be attractive outcomes from a competition policy perspective.

Epic’s objection

Epic would object to this line of argument by observing that it only charges a 12% fee to distribute other developers’ games on its own Epic Games Store.

Yet Epic’s lower fee is reportedly conditioned, at least in some cases, on the developer offering the game exclusively on the Epic Games Store for a certain period of time. Moreover, the services provided on the Epic Games Store may not be comparable to the extensive suite of services provided on the App Store and other leading distributors that follow the 30% standard. Additionally, the user base a developer can expect to access through the Epic Games Store is in all likelihood substantially smaller than the audience that can be reached through the App Store and other leading app and game distributors, which is then reflected in the higher fees charged by those platforms.

Hence, even the large fee differential may simply reflect the higher services and larger audiences available on the App Store, Google Play Store and other leading platforms, as compared to the Epic Games Store, rather than the unilateral extraction of market rents at developers’ and consumers’ expense.

Antitrust is about efficiency, not distribution

Epic says the standard 70/30 split between game publishers and app distributors is “excessive” while others argue that it is historically outdated.

Neither of these is a credible antitrust argument. Renegotiating the division of economic surplus between game suppliers and distributors is not the concern of antitrust law, which (as properly defined) should only take an interest if either (i) Apple is colluding on the 30% fee with other app distributors, or (ii) Apple is taking steps that preclude entry into the app distribution market and lack any legitimate business justification. No one claims there is evidence of the former possibility and, without further evidence, the latter possibility is not especially compelling given the uniform use of the 70/30 split across the industry (which, as noted, can be derived from a related set of credible efficiency justifications). It is even less compelling in the face of evidence that output is rapidly accelerating, not declining, in the gaming app market: in the first half of 2020, approximately 24,500 new games were added to the App Store.

If this conclusion is right, then Epic’s lawsuit against Apple does not seem to have much to do with the public interest in preserving market competition.

But it clearly has much to do with the business interest of an input supplier in minimizing its distribution costs and maximizing its profit margin. That category includes not only Epic Games but also Tencent, the world’s largest video game publisher and the holder of a 40% equity stake in Epic. Tencent also owns Riot Games (the publisher of “League of Legends”), an 84% stake in Supercell (the publisher of “Clash of Clans”), and a 5% stake in Activision Blizzard (the publisher of “Call of Duty”). It is unclear how an antitrust claim that, if successful, would simply redistribute economic value from leading game distributors to leading game developers has any necessary relevance to antitrust’s objective of promoting consumer welfare.

The prequel: Apple v. Qualcomm

Ironically (and, as Dirk Auer has similarly observed), there is a symmetry between Epic’s claims against Apple and the claims previously pursued by Apple (and, concurrently, the Federal Trade Commission) against Qualcomm.

In that litigation, Apple contested the terms of the licensing arrangements under which Qualcomm made available its wireless communications patents to Apple (more precisely, Foxconn, Apple’s contract manufacturer), arguing that the terms were incompatible with Qualcomm’s commitment to “fair, reasonable and nondiscriminatory” (“FRAND”) licensing of its “standard-essential” patents (“SEPs”). Like Epic v. Apple, Apple v. Qualcomm was fundamentally a contract dispute, with the difference that Apple was in the position of a third-party beneficiary of the commitment that Qualcomm had made to the governing standard-setting organization. Like Epic, Apple sought to recharacterize this contractual dispute as an antitrust question, arguing that Qualcomm’s licensing practices constituted anticompetitive actions to “monopolize” the market for smartphone modem chipsets.

Theory meets evidence

The rhetoric used by Epic in its complaint echoes the rhetoric used by Apple in its briefs and other filings in the Qualcomm litigation. Apple (like the FTC) had argued that Qualcomm imposed a “tax” on competitors by requiring that any purchaser of Qualcomm’s chipsets concurrently enter into a license for Qualcomm’s SEP portfolio relating to 3G and 4G/LTE-enabled mobile communications devices.

Yet the history and performance of the mobile communications market simply did not track Apple’s (and the FTC’s continuing) characterization of Qualcomm’s licensing fee as a socially costly drag on market growth and, by implication, consumer welfare.

If this assertion had merit, then the decades-old wireless market should have exhibited a dismal history of increasing prices, slow user adoption and lagging innovation. In actuality, the wireless market since its inception has grown relentlessly, characterized by declining quality-adjusted prices, expanding output, relentless innovation, and rapid adoption across a broad range of income segments.

Given this compelling real-world evidence, the only remaining line of argument (still being pursued by the FTC) that could justify antitrust intervention is a theoretical conjecture that the wireless market might have grown even faster under some alternative IP licensing arrangement. This assertion rests precariously on the speculative assumption that any such arrangement would have induced the same or higher level of aggregate investment in innovation and commercialization activities. That fragile chain of “what if” arguments hardly seems a sound basis on which to rewrite the legal infrastructure behind the billions of dollars of licensing transactions that support the economically thriving smartphone market and the even larger ecosystem that has grown around it.

Antitrust litigation as business strategy

Given the absence of compelling evidence of competitive harm from Qualcomm’s allegedly anticompetitive licensing practices, Apple’s litigation would seem to be best interpreted as an economically rational attempt by a downstream producer to renegotiate a downward adjustment in the fees paid to an upstream supplier of critical technology inputs. (In fact, those are precisely the terms on which Qualcomm in 2015 settled the antitrust action brought against it by China’s competition regulator, to the obvious benefit of local device producers.) The Epic Games litigation is a mirror image fact pattern in which an upstream supplier of content inputs seeks to deploy antitrust law strategically for the purposes of minimizing the fees it pays to a leading downstream distributor.

Both litigations suffer from the same flaw. Private interests concerning the division of an existing economic value stream—a business question that is a matter of indifference from an efficiency perspective—are erroneously (or, at least, reflexively) conflated with the public interest in preserving the free play of competitive forces that maximizes the size of the economic value stream.

Conclusion: Remaking the case for “narrow” antitrust

The Epic v. Apple and Apple v. Qualcomm disputes illustrate the unproductive rent-seeking outcomes to which antitrust law will inevitably be led if, as is being widely advocated, it is decoupled from its well-established foundation in promoting consumer welfare—and not competitor welfare.

Some proponents of a more expansive approach to antitrust enforcement are convinced that expanding the law’s scope of application will improve market efficiency by providing greater latitude for expert regulators and courts to reengineer market structures to the public benefit. Yet any substitution of top-down expert wisdom for the bottom-up trial-and-error process of market competition can easily yield “false positives” in which courts and regulators take actions that counterproductively intervene in markets that are already operating under reasonably competitive conditions. Additionally, an overly expansive approach toward the scope of antitrust law will induce private firms to shift resources toward securing advantages over competitors through lobbying and litigation, rather than seeking to win the race to deliver lower-cost and higher-quality products and services. Neither outcome promotes the public’s interest in a competitive marketplace.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here. This post is authored by Peter Klein (Professor of Entrepreneurship, Baylor University).]

Nicolas Petit’s insightful and provocative book ends with a chapter on “Big Tech’s Novel Harms,” asking whether antitrust is the appropriate remedy for popular (and academic) concerns about privacy, fake news, and hate speech. In each case, he asks whether the alleged harms are caused by a lack of competition among platforms – which could support a case for breaking them up – or by the nature of the underlying technologies and business models. He concludes that these problems are not alleviated (and may even be exacerbated) by applying competition policy and suggests that regulation, not antitrust, is the more appropriate tool for protecting privacy and truth.

What kind of regulation? Treating digital platforms like public utilities won’t work, Petit argues, because the product is multidimensional and competition takes place on multiple margins (the larger theme of the book): “there is a plausible chance that increased competition in digital markets will lead to a race to the bottom, in which price competition (e.g., on ad markets) will be the winner, and non-price competition (e.g., on privacy) will be the loser.” Utilities regulation also provides incentives for rent-seeking by less efficient rivals. Retail regulation, aimed at protecting small firms, may end up helping incumbents instead by raising rivals’ costs.

Petit concludes that consumer protection regulation (such as Europe’s GDPR) is a better tool for guarding privacy and truth, though it poses challenges as well. More generally, he highlights the vast gulf between the economic analysis of privacy and speech and the increasingly loud calls for breaking up the big tech platforms, which would do little to alleviate these problems.

As in the rest of the book, Petit’s treatment of these complex issues is thoughtful, careful, and systematic. I have more fundamental problems with conventional antitrust remedies and think that consumer protection is problematic when applied to data services (even more so than in other cases). Inspired by this chapter, let me offer some additional thoughts on privacy and the nature of data which speak to regulation of digital platforms and services.

First, privacy, like information, is not an economic good. Just as we don’t buy and sell information per se but information goods (books, movies, communications infrastructure, consultants, training programs, etc.), we likewise don’t produce and consume privacy but what we might call privacy goods: sunglasses, disguises, locks, window shades, land, fences and, in the digital realm, encryption software, cookie blockers, data scramblers, and so on.

Privacy goods and services can be analyzed just like other economic goods. Entrepreneurs offer bundled services that come with varying degrees of privacy protection: encrypted or regular emails, chats, voice and video calls; browsers that block cookies or don’t; social media sites, search engines, etc. that store information or not; and so on. Most consumers seem unwilling to sacrifice other functionality for increased privacy, as the small market shares held by DuckDuckGo, Telegram, Tor, and the like suggest. Moreover, while privacy per se is appealing, there are huge efficiency gains from matching on buyer and seller characteristics on sharing platforms, digital marketplaces, and dating sites. There are also substantial cost savings from electronic storage and sharing of private information such as medical records and credit histories. And there is little evidence of sellers exploiting such information to engage in price discrimination. (Acquisti, Taylor, and Wagman, 2016 provide a detailed discussion of many of these issues.)

Regulating markets for privacy goods via bans on third-party access to customer data, mandatory data portability, and stiff penalties for data breaches is tricky. Such policies could make digital services more valuable, but it is not obvious why the market cannot figure this out. If consumers are willing to pay for additional privacy, entrepreneurs will be eager to supply it. Of course, bans on third-party access and other forms of sharing would require a fundamental change in the ad-based revenue model that makes free or low-cost access possible, so platforms would have to devise other means of monetizing their services. (Again, many platforms already offer ad-free subscriptions, so it’s unclear why those who prefer free, ad-based usage should be prevented from choosing it.)

What about the idea that I own “my” data and that, therefore, I should have full control over how it is used? Some of the utilities-based regulatory models treat platforms as neutral storage places or conduits for information belonging to users. Proposals for data portability suggest that users of technology platforms should be able to move their data from platform to platform, downloading all their personal information from one platform then uploading it to another, then enjoying the same functionality on the new platform as longtime users.

Of course, there are substantial technical obstacles to such proposals. Data would have to be stored in a universal format – not just the text or media users upload to platforms, but also records of all interactions (likes, shares, comments), the search and usage patterns of users, and any other data generated as a result of the user’s actions and interactions with other users, advertisers, and the platform itself. It is unlikely that any universal format could capture this information in a form that could be transferred from one platform to another without a substantial loss of functionality, particularly for platforms that use algorithms to determine how information is presented to users based on past use. (The extreme case is a platform like TikTok, which uses usage patterns, rather than follows, likes, and shares, to construct a “feed.”)

Moreover, as each platform sets its own rules for what information is allowed, the import functionality would have to screen the data for information allowed on the original platform but not the new (and the reverse would be impossible – a user switching from Twitter to Gab, for instance, would have no way to add the content that would have been permitted on Gab but was never created in the first place because it would have violated Twitter rules).

There is a deeper, philosophical issue at stake, however. Portability and neutrality proposals take for granted that users own “their” data. Users create data, either by themselves or with their friends and contacts, and the platform stores and displays the data, just as a safe deposit box holds documents or jewelry and a display case shows off an art collection. I should be able to remove my items from the safe deposit box and take them home or to another bank, and a “neutral” display case operator should not prevent me from showing off my preferred art (perhaps subject to some general rules about obscenity or harmful personal information).

These analogies do not hold for user-generated information on internet platforms, however. “My data” is a record of all my interactions with platforms, with other users on those platforms, with contractual partners of those platforms, and so on. It is co-created by these interactions. I don’t own these records any more than I “own” the fact that someone saw me in the grocery store yesterday buying apples. Of course, if I have a contract with the grocer that says he will keep my purchase records private, and he shares them with someone else, then I can sue him for breach of contract. But this isn’t theft. He hasn’t “stolen” anything; there is nothing for him to steal. If a grocer — or an owner of a tech platform — wants to attract my business by monetizing the records of our interactions and giving me a cut, he should go for it. I still might prefer another store. In any case, I don’t have the legal right to demand this revenue stream.

Likewise, “privacy” refers to what other people know about me – it is knowledge in their heads, not mine. Information isn’t property. If I know something about you, that knowledge is in my head; it’s not something I took from you. Of course, if I obtained or used that information in violation of a prior agreement, then I’m guilty of breach, and if I use that information to threaten or harass you, I may be guilty of other crimes. But the popular idea that tech companies are stealing and profiting from something that’s “ours” isn’t right.

The concept of co-creation is important, because these digital records, like other co-created assets, can be more or less relationship specific. The late Oliver Williamson devoted his career to exploring the rich variety of contractual relationships devised by market participants to solve complex contracting problems, particularly in the face of asset specificity. Relationship-specific investments can be difficult for trading parties to manage, but they typically create more value. A legal regime in which only general-purpose, easily redeployable technologies were permitted would alleviate the holdup problem, but at the cost of a huge loss in efficiency. Likewise, a world in which all digital records must be fully portable reduces switching costs, but results in technologies for creating, storing, and sharing information that are less valuable. Why would platform operators invest in efficiency improvements if they cannot capture some of that value by means of proprietary formats, interfaces, sharing rules, and other arrangements?  

In short, we should not be quick to assume “market failure” in the market for privacy goods (or “true” news, whatever that is). Entrepreneurs operating in a competitive environment – not the static, partial-equilibrium notion of competition from intermediate micro texts but the rich, dynamic, complex, and multimarket kind of competition described in Petit’s book – can provide the levels of privacy and truthiness that consumers prefer.

What is a search engine?

Dirk Auer —  21 October 2020

What is a search engine? This might seem like an innocuous question, but it lies at the heart of the US Department of Justice and state attorneys general’s antitrust complaint against Google, as well as the European Commission’s Google Search and Android decisions. It is also central to a report published by the UK’s Competition & Markets Authority (“CMA”). To varying degrees, all of these proceedings are premised on the assumption that Google enjoys a monopoly/dominant position over online search. But things are not quite this simple.

Despite years of competition decisions and policy discussions, there are still many unanswered questions concerning the operation of search markets. For example, it is still unclear exactly which services compete against Google Search, and how this might evolve in the near future. Likewise, there has only been limited scholarly discussion as to how a search engine monopoly would exert its market power. In other words, what does a restriction of output look like on a search platform — particularly on the user side?

Answering these questions will be essential if authorities wish to successfully bring an antitrust suit against Google for conduct involving search. Indeed, as things stand, these uncertainties greatly complicate efforts (i) to rigorously define the relevant market(s) in which Google Search operates, (ii) to identify potential anticompetitive effects, and (iii) to apply the quantitative tools that usually underpin antitrust proceedings.

In short, as explained below, antitrust authorities and other plaintiffs have their work cut out if they are to prevail in court.

Consumers demand information 

For a start, identifying the competitive constraints faced by Google presents authorities and plaintiffs with an important challenge.

Even proponents of antitrust intervention recognize that the market for search is complex. For instance, the DOJ and state AGs argue that Google dominates a narrow market for “general search services” — as opposed to specialized search services, content sites, social networks, and online marketplaces, etc. The EU Commission reached the same conclusion in its Google Search decision. Finally, commenting on the CMA’s online advertising report, Fiona Scott Morton and David Dinielli argue that: 

General search is a relevant market […]

In this way, an individual specialized search engine competes with a small fraction of what the Google search engine does, because a user could employ either for one specific type of search. The CMA concludes that, from the consumer standpoint, a specialized search engine exerts only a limited competitive constraint on Google.

(Note that the CMA stressed that it did not perform a market definition exercise: “We have not carried out a formal market definition assessment, but have instead looked at competitive constraints across the sector…”).

In other words, the above critics recognize that search engines are merely tools that can serve multiple functions, and that competitive constraints may be different for some of these. But this has wider ramifications that policymakers have so far overlooked. 

When quizzed about his involvement with Neuralink (a company working on implantable brain–machine interfaces), Elon Musk famously argued that human beings already share a near-symbiotic relationship with machines (a point already made by others):

The purpose of Neuralink [is] to create a high-bandwidth interface to the brain such that we can be symbiotic with AI. […] Because we have a bandwidth problem. You just can’t communicate through your fingers. It’s just too slow.

Commentators were quick to spot the implications of this technology for the search industry:

Imagine a world when humans would no longer require a device to search for answers on the internet, you just have to think of something and you get the answer straight in your head from the internet.

As things stand, this example still belongs to the realm of sci-fi. But it neatly illustrates a critical feature of the search industry. 

Search engines are just the latest iteration (but certainly not the last) of technology that enables human beings to access specific pieces of information more rapidly. Before the advent of online search, consumers used phone directories, paper maps, encyclopedias, and other tools to find the information they were looking for. They would read newspapers and watch television to know the weather forecast. They went to public libraries to undertake research projects (some still do), etc.

And, in some respects, the search engine is already obsolete for many of these uses. For instance, virtual assistants like Alexa, Siri, Cortana and Google’s own Google Assistant offering can perform many functions that were previously the preserve of search engines: checking the weather, finding addresses and asking for directions, looking up recipes, answering general knowledge questions, finding goods online, etc. Granted, these virtual assistants partly rely on existing search engines to complete tasks. However, Google is much less dominant in this space, and search engines are not the sole source on which virtual assistants rely to generate results. Amazon’s Alexa provides a fitting example (here and here).

Along similar lines, it has been widely reported that 60% of online shoppers start their search on Amazon, while only 26% opt for Google Search. In other words, Amazon’s ability to rapidly show users the product they are looking for somewhat alleviates the need for a general search engine. In turn, this certainly constrains Google’s behavior to some extent. And much of the same applies to other websites that provide a specific type of content (think of Twitter, LinkedIn, Tripadvisor, Booking.com, etc.)

Finally, it is also revealing that the most common searches on Google are, in all likelihood, made to reach other websites — a function for which competition is literally endless.

The upshot is that Google Search and other search engines perform a bundle of functions. Most of these can be done via alternative means, and this will increasingly be the case as technology continues to advance. 

This is all the more important given that the vast majority of search engine revenue derives from roughly 30 percent of search terms (notably those that are linked to product searches). The remaining search terms are effectively a loss leader. And these profitable searches also happen to be those where competition from alternative means is, in all likelihood, the strongest (this includes competition from online retail platforms, and online travel agents like Booking.com or Kayak, but also from referral sites, direct marketing, and offline sources). In turn, this undermines US plaintiffs’ claims that Google faces little competition from rivals like Amazon, because they don’t compete for the entirety of Google’s search results (in other words, Google might face strong competition for the most valuable ads):

108. […] This market share understates Google’s market power in search advertising because many search-advertising competitors offer only specialized search ads and thus compete with Google only in a limited portion of the market. 

Critics might mistakenly take the above for an argument that Google has no market power because competition is “just a click away”. But the point is more subtle, and has important implications as far as market definition is concerned.

Authorities should not define the search market by arguing that no other rival is quite like Google (or one of its rivals) — as the DOJ and state AGs did in their complaint:

90. Other search tools, platforms, and sources of information are not reasonable substitutes for general search services. Offline and online resources, such as books, publisher websites, social media platforms, and specialized search providers such as Amazon, Expedia, or Yelp, do not offer consumers the same breadth of information or convenience. These resources are not “one-stop shops” and cannot respond to all types of consumer queries, particularly navigational queries. Few consumers would find alternative sources a suitable substitute for general search services. Thus, there are no reasonable substitutes for general search services, and a general search service monopolist would be able to maintain quality below the level that would prevail in a competitive market. 

And as the EU Commission did in the Google Search decision:

(162) For the reasons set out below, there is, however, limited demand side substitutability between general search services and other online services. […]

(163) There is limited substitutability between general search services and content sites. […]

(166) There is also limited substitutability between general search services and specialised search services. […]

(178) There is also limited substitutability between general search services and social networking sites.

Ad absurdum, if consumers suddenly decided to access information via other means, Google could be the only firm to provide general search results and yet have absolutely no market power. 

Take the example of Yahoo: Despite arguably remaining the most successful “web directory”, it likely lost any market power that it had when Google launched a superior — and significantly more successful — type of search engine. Google Search may not have provided a complete, literal directory of the web (as did Yahoo), but it offered users faster access to the information they wanted. In short, the Yahoo example shows that being unique is not equivalent to having market power. Accordingly, any market definition exercise that merely focuses on the idiosyncrasies of firms is likely to overstate their actual market power. 

Given the preceding, the question authorities should ask is whether Google Search (or another search engine) performs so many unique functions that it may be in a position to restrict output. So far, no one appears to have convincingly answered this question.

Similar uncertainties surround the question of how a search engine might restrict output, especially on the user side of the search market. Accordingly, authorities will struggle to produce evidence (i) that Google has market power, especially on the user side of the market, and (ii) that its behavior has anticompetitive effects.

Consider the following:

The SSNIP test (which is the standard method of defining markets in antitrust proceedings) is inapplicable to the consumer side of search platforms. Indeed, it is simply impossible to apply a hypothetical 10% price increase to goods that are given away for free.
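To make the point concrete, here is a minimal sketch of the SSNIP arithmetic (my own illustration, not drawn from the post or from any authority's filings; the function name and all numbers are assumptions chosen for exposition). It shows why the test collapses when the candidate product carries a zero price:

```python
# Minimal sketch of the SSNIP ("hypothetical monopolist") revenue test.
# Illustrative assumptions only; not the methodology of any actual case.

def ssnip_revenue_change(price, quantity, pct_increase, pct_qty_lost):
    """Revenue change from a small but significant non-transitory
    price increase, given the share of quantity lost to substitutes."""
    new_price = price * (1 + pct_increase)
    new_qty = quantity * (1 - pct_qty_lost)
    return new_price * new_qty - price * quantity

# On a priced side of the market (e.g. advertising), the test is
# informative: a 10% increase losing 5% of volume raises revenue here,
# suggesting the candidate market is worth monopolizing.
print(ssnip_revenue_change(price=1.00, quantity=1000,
                           pct_increase=0.10, pct_qty_lost=0.05))

# On the user side, the price is zero, so a 10% "increase" is still
# zero and the test is vacuous regardless of user substitution:
print(ssnip_revenue_change(price=0.0, quantity=1000,
                           pct_increase=0.10, pct_qty_lost=0.05))
```

Whatever substitution pattern one assumes for users, the second computation always returns zero, which is the formal version of the point made above.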

This raises a deeper question: how would a search engine exercise its market power? 

For a start, it seems unlikely that it would start charging fees to its users. For instance, empirical research pertaining to the magazine industry (also an ad-based two-sided market) suggests that increased concentration does not lead to higher magazine prices. Minjae Song notably finds that:

Taking the advantage of having structural models for both sides, I calculate equilibrium outcomes for hypothetical ownership structures. Results show that when the market becomes more concentrated, copy prices do not necessarily increase as magazines try to attract more readers.

It is also far from certain that a dominant search engine would necessarily increase the amount of adverts it displays. To the contrary, market power on the advertising side of the platform might lead search engines to decrease the number of advertising slots that are available (i.e. reducing advertising output), thus showing fewer adverts to users.

Finally, it is not obvious that market power would lead search engines to significantly degrade their product (as this could ultimately hurt ad revenue). For example, empirical research by Avi Goldfarb and Catherine Tucker suggests that there is some limit to the type of adverts that search engines could profitably impose upon consumers. They notably find that ads that are both obtrusive and targeted decrease subsequent purchases:

Ads that match both website content and are obtrusive do worse at increasing purchase intent than ads that do only one or the other. This failure appears to be related to privacy concerns: the negative effect of combining targeting with obtrusiveness is strongest for people who refuse to give their income and for categories where privacy matters most.

The preceding paragraphs find some support in the theoretical literature on two-sided markets, which suggests that competition on the user side of search engines is likely to be particularly intense and beneficial to consumers (because users are more likely to single-home than advertisers, and because each additional user creates a positive externality on the advertising side of the market). For instance, Jean-Charles Rochet and Jean Tirole find that:

The single-homing side receives a large share of the joint surplus, while the multi-homing one receives a small share.

This is just a restatement of Mark Armstrong’s “competitive bottlenecks” theory:

Here, if it wishes to interact with an agent on the single-homing side, the multi-homing side has no choice but to deal with that agent’s chosen platform. Thus, platforms have monopoly power over providing access to their single-homing customers for the multi-homing side. This monopoly power naturally leads to high prices being charged to the multi-homing side, and there will be too few agents on this side being served from a social point of view (Proposition 4). By contrast, platforms do have to compete for the single-homing agents, and high profits generated from the multi-homing side are to a large extent passed on to the single-homing side in the form of low prices (or even zero prices).
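The competitive-bottleneck logic can be illustrated with a deliberately crude numerical toy (my own construction, not from Armstrong's paper; all the numbers are assumptions). Each platform is a monopolist over advertisers' access to its own single-homing users, but competition for those users hands the resulting rent back to them:

```python
# Toy "competitive bottleneck" arithmetic (illustrative assumptions only).
# Advertisers multi-home; users single-home. Each platform therefore
# monopolizes advertiser access to its own users, but must compete
# head-to-head for the users themselves.

per_user_ad_value = 2.0   # value to advertisers of reaching one user
user_service_cost = 0.5   # platform's cost of serving one user

# Monopoly over access to its users: the advertiser price per user is
# bid up toward the full value advertisers place on reaching that user.
ad_price_per_user = per_user_ad_value

# Competition for single-homing users dissipates that rent: the
# user-side price falls until per-user profit is zero, i.e. users end
# up subsidized (a zero money price plus quality and perks).
user_price = user_service_cost - ad_price_per_user

per_user_profit = (user_price - user_service_cost) + ad_price_per_user

print(user_price)       # negative: the ad rent is passed through to users
print(per_user_profit)  # zero: competition on the user side bites
```

This is only the zero-profit limiting case, but it captures why the multi-homing side pays high prices while the single-homing side captures most of the surplus.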

All of this is not to suggest that Google Search has no market power, or that monopoly is necessarily less problematic in the search engine industry than in other markets. 

Instead, the argument is that analyzing competition on the user side of search platforms is unlikely to yield dispositive evidence of market power or anticompetitive effects. This is because market power is hard to measure on this side of the market, and because even a monopoly platform might not significantly restrict user output. 

That might explain why the DOJ and state AGs’ analysis of anticompetitive effects is so limited. Take the following paragraph (provided without further supporting evidence):

167. By restricting competition in general search services, Google’s conduct has harmed consumers by reducing the quality of general search services (including dimensions such as privacy, data protection, and use of consumer data), lessening choice in general search services, and impeding innovation. 

Given these inherent difficulties, antitrust investigators would do better to focus on the side of those platforms where mainstream IO tools are much easier to apply and where a dominant search engine would likely restrict output: the advertising market. Not only is it the market where search engines are most likely to exert their market power (thus creating a deadweight loss), but — because it involves monetary transactions — this side of the market lends itself to the application of traditional antitrust tools.  

Looking at the right side of the market

Finally, and unfortunately for Google’s critics, available evidence suggests that its position on the (online) advertising market might not meet the requirements necessary to bring a monopolization case (at least in the US).

For a start, online advertising appears to exhibit the prima facie signs of a competitive market. As Geoffrey Manne, Sam Bowman and Eric Fruits have argued:

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues are consistent with a growing and increasingly competitive market.

Second, empirical research suggests that the market might need to be widened to include offline advertising. For instance, Avi Goldfarb and Catherine Tucker show that there can be important substitution effects between online and offline advertising channels:

Using data on the advertising prices paid by lawyers for 139 Google search terms in 195 locations, we exploit a natural experiment in “ambulance-chaser” regulations across states. When lawyers cannot contact clients by mail, advertising prices per click for search engine advertisements are 5%–7% higher. Therefore, online advertising substitutes for offline advertising.

Of course, a careful examination of the advertising industry could also lead authorities to define a narrower relevant market. For example, the DOJ and state AG complaint argued that Google dominated the “search advertising” market:

97. Search advertising in the United States is a relevant antitrust market. The search advertising market consists of all types of ads generated in response to online search queries, including general search text ads (offered by general search engines such as Google and Bing) […] and other, specialized search ads (offered by general search engines and specialized search providers such as Amazon, Expedia, or Yelp). 

Likewise, the European Commission concluded that Google dominated the market for “online search advertising” in the AdSense case (though the full decision has not yet been made public). Finally, the CMA’s online platforms report found that display and search advertising belonged to separate markets. 

But these are empirical questions that could dispositively be answered by applying traditional antitrust tools, such as the SSNIP test. And yet, there is no indication that the authorities behind the US complaint undertook this type of empirical analysis (and until its AdSense decision is made public, it is not clear that the EU Commission did so either). Accordingly, there is no guarantee that US courts will go along with the DOJ and state AGs’ findings.

In short, it is far from certain that Google currently enjoys an advertising monopoly, especially if the market is defined more broadly than that for “search advertising” (or the even narrower market for “General Search Text Advertising”). 

Concluding remarks

The preceding paragraphs have argued that a successful antitrust case against Google is anything but a foregone conclusion. In order to successfully bring a suit, authorities would notably need to figure out just what market it is that Google is monopolizing. In turn, that would require a finer understanding of what competition, and monopoly, look like in the search and advertising industries.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Shane Greenstein (Professor of Business Administration, Harvard Business School).]

In his book, Nicolas Petit approaches antitrust issues by analyzing their economic foundations, and he aspires to bridge gaps between those foundations and the common points of view. In light of the divisiveness of today’s debates, I appreciate Petit’s calm and deliberate view of antitrust, and I respect his clear and engaging prose.

I spent a lot of time with this topic when writing a book (How the Internet Became Commercial, 2015, Princeton University Press). If I have something unique to add to a review of Petit’s book, it comes from the role Microsoft played in the events in my book.

Many commentators have speculated on what precise charges could be brought against Facebook, Google/Alphabet, Apple, and Amazon. For the sake of simplicity, let’s call these the “big four.” While I have no special insight to bring to such speculation, for this post I can do something different, and look forward by looking back. For the time being, Microsoft has been spared scrutiny by contemporary political actors. (It seems safe to presume Microsoft’s managers prefer to be left out.) While it is tempting to focus on why this has happened, let’s focus on a related issue: What shadow did Microsoft’s trials cast on the antitrust issues facing the big four?

Two types of lessons emerged from Microsoft’s trials, and both tend to be less appreciated by economists. One set of lessons emerged from the media flood of the flotsam and jetsam of sensationalistic factoids and sound bites, drawn from Congressional and courtroom testimony. That yielded lessons about managing sound and fury – i.e., mostly about reducing the cringe-worthy quotes from CEOs and trial witnesses.

Another set of lessons pertained to the role and limits of economic reasoning. Many decision makers reasoned by analogy and metaphor. That is especially so for lawyers and executives. These metaphors do not make economic reasoning wrong, but they do tend to shape how an antitrust question takes center stage with a judge, as well as in the court of public opinion. These metaphors also influence the stories a CEO tells to employees.

If you asked me to forecast how things will go for the big four, based on what I learned from studying Microsoft’s trials, my answer would be that the outcome depends on which metaphor and analogy gets the upper hand.

In that sense, I want to argue that Microsoft’s experience depended on “the fox and shepherd problem.” When is a platform leader better thought of as a shepherd, helping partners achieve a healthy outcome, or as a fox in charge of a henhouse, ready to sacrifice a partner for self-serving purposes? I forecast that the same metaphors will shape the experience of the big four.

Gaps and analysis

The fox-shepherd problem never shows up when a platform leader is young and its platform is small. As the platform reaches bigger scale, however, the problem becomes more salient. Conflicts of interests emerge and focus attention on platform leadership.

Petit frames these issues within a Schumpeterian vision. In this view, firms compete for dominant positions over time, potentially with one dominant firm replacing another. Potential competition has a salutary effect if established firms perceive a threat from the future shadow of such competitors, motivating innovation. In this view, antitrust’s role might be characterized as “keeping markets open so there is pressure on the dominant firm from potential competition.”

In the Microsoft trial, economists framed the Schumpeterian tradeoff in the vocabulary of economics. Firms that supply complements at one point could become suppliers of substitutes at a later point if they are allowed to. In other words, platform leaders today support complements that enhance the value of the platform, while also having the motive and ability to discourage those same business partners from developing services that substitute for the platform’s services, which could reduce the platform’s value. Seen through this lens, platform leaders inherently face a conflict of interest, and antitrust law should intervene if platform leaders place excessive limitations on existing business partners.

This economic framing is not wrong. Rather, it is necessary, but not sufficient. If I take a sober view of events in the Microsoft trial, I am not convinced the economics alone persuaded the judge in Microsoft’s case, or, for that matter, the public.

As judges sort through the endless detail of contracting provisions, they need a broad perspective, one that sharpens their focus on a key question. One central question in particular inhabits a lot of a judge’s mindshare: how did the platform leader use its discretion, and for what purposes? In case it is not obvious, shepherds deserve a lot of discretion, while only a fool gives a fox much license.

Before the trial, when it initially faced this question from reporters and Congress, Microsoft tried to dismiss the discussion altogether. Their representatives argued that high technology differs from every other market in its speed and productivity, and, therefore, ought to be thought of as incomparable to other antitrust examples. This reflected the high tech elite’s view of their own exceptionalism.

Reporters dutifully restated this argument, and, long story short, it did not get far with the public once the sensationalism started making headlines, and it especially did not get far with the trial judge. To be fair, if you watched the recent congressional testimony, it appears as if the lawyers for the big four instructed their CEOs not to try this approach this time around.

Origins

Well before lawyers and advocates exaggerate claims, the perspectives of both sides usually have some merit, and usually the twain do not meet. Most executives tend to remember every detail behind growth, know the risks confronted and overcome, and are usually reluctant to give up something that works for their interests, which can sometimes be narrowly defined. In contrast, many partners will know examples of a rule that hindered them, point to complaints that executives ignored, and aspire to have rules changed; again, their interests tend to be narrow.

Consider the quality-control process today for iPhone apps as an example. The merits and absurdity of some of Apple’s conduct get a lot of attention in online forums, especially the 30% take for Apple. Apple can reasonably claim that the present set of rules works well overall, that it only emerged after considerable experimentation, and that today it seeks to protect all who benefit from the entire system, like a shepherd. It is no surprise, however, that some partners accuse Apple of tweaking rules to its own benefit, and of using the process to further Apple’s ambitions at the expense of the partners’, like a fox in a henhouse. So it goes.

More generally, based on publicly available information, all of the big four already face this debate. Self-serving behavior shows up in different guises in different parts of the big four’s businesses, but it is always there. As noted, Apple’s apps compete with the apps of others, so it has incentives to shape the distribution of other apps. Amazon’s products compete with some products coming from its third-party sellers, and it too faces mixed incentives. Google’s services compete with online services that also advertise on its search engine, and Google likewise faces questions over its charges for listing on the Play store. Facebook faces an additional issue, because it has bought firms that were trying to grow their own platforms to compete with Facebook.

Look, those four each contain rather different businesses in their details, which merits some caution in making a sweeping characterization. My only point: the question about self-serving behavior arises in each instance. That frames a fox-shepherd problem for prosecutors in each case.

Lessons from prior experience

Circling back to lessons of the past for antitrust today, the fox-shepherd problem was one of the deeper sources of miscommunication leading up to the Microsoft trial. In the late 1990s, Microsoft could reasonably claim to be a shepherd for all its platform’s partners, and it could reasonably claim to have improved the platform in ways that benefited partners. Moreover, for years some of the industry gossip about its behavior stressed misinformed nonsense. Accordingly, Microsoft’s executives had learned to trust their own judgment and to mistrust the complaints of outsiders. Right in line with that mistrust, many employees and executives took umbrage at being characterized as a fox in a henhouse, dismissing the accusations out of hand.

Those habits of mind poorly positioned the firm for a court case. As any observer of the trial knows, when prosecutors came looking, they found lots of examples that looked like fox-like behavior. Onerous contract restrictions and cumbersome processes for business partners produced plenty of bad optics in court, and fueled the prosecution’s case that the platform had become too self-serving at the expense of competitive processes. Prosecutors had plenty to work with when it came time to prove motive, intent, and ability to misuse discretion.

What is the lesson for the big four? Ask an executive in technology today, and sometimes you will hear the following: as long as a platform’s actions can be construed as friendly to customers, the platform leader will be off the hook. That is not the wrong lesson, but it is an incomplete one. Looking with hindsight and foresight, that perspective seems too sanguine about the prospects for the big four. Microsoft had done plenty for its customers, but so what? There was plenty of evidence of acting like a fox in a henhouse. The bigger lesson is this: all it took were a few bad examples to paint a picture of a pattern, and every firm has such examples.

Do not get me wrong. I am not saying a fox-and-henhouse analogy is fair or unfair to platform leaders. Rather, I am saying that economists like to think the economic trade-off between the interests of platform leaders, platform partners, and platform customers emerges from some grand policy compromise. That is not how prosecutors think, nor how judges decide. In the Microsoft case there was no such grand consideration. The economic framing of the case only went so far. As it was, the decision was vulnerable to metaphor, shrewdly applied and convincingly argued. Done persuasively, with enough examples of selfish behavior, excuses about “helping customers” came across as empty.

Policy

Some advocates argue, somewhat philosophically, that platforms deserve discretion, and that governments are bound to err once they intervene. I have sympathy with that point of view, but only up to a point. Below are two examples from outside antitrust where governments routinely do not give the big four a blank check.

First, when it started selling ads, Google banned ads for cigarettes, porn, and alcohol, and it downgraded the quality scores of websites that used deceptive means to attract users. That helped the service foster trust with new users, enabling it to grow. After it became bigger, should Google have continued to have unqualified discretion to shepherd the entire ad system? Nobody thinks so. A while ago the Federal Trade Commission decided to investigate deceptive online advertising, just as it investigates deceptive advertising in other media. It is not a big philosophical step to next ask whether Google should have unfettered discretion to structure the ad business, search process, and related e-commerce to its own benefit.

Here is another example, this one about Facebook. Over the years, Facebook cycled through a number of rules for sharing information with business partners, generally taking a “relaxed” attitude toward enforcing those policies. Few observers cared when Facebook was small, but many governments started to care after Facebook grew to billions of users. Facebook’s lax monitoring did not line up with the preferences of many governments. It should not come as a surprise now that many governments want to regulate Facebook’s handling of data. Like it or not, this question lies squarely within the domain of government privacy policy. Again, the next step is small. Why should other parts of its business remain solely in Facebook’s discretion, like its ability to buy other businesses?

This gets us to the other legacy of the Microsoft case: as we think about future policy dilemmas, is there a general set of criteria for the antitrust issues facing all four firms? Veterans of court cases will point out that every court case is its own circus. Just because Microsoft failed to be persuasive in its day does not imply any of the big four will be unpersuasive.

Looking back, the Microsoft trial did not articulate a general set of principles about acceptable or excusable self-serving behavior from a platform leader. It did not settle what criteria best determine when a court should consider a platform leader’s behavior closer to that of a shepherd or a fox. The appropriate general criteria remain unclear.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Richard N. Langlois (Professor of Economics, University of Connecticut).]

Market share has long been the talisman of antitrust economics.  Once we properly define what “the product” is, all we have to do is look at shares in the relevant market.  In such an exercise, today’s high-tech firms come off badly.  Each of them has a large share of the market for some “product.” What I appreciate about Nicolas Petit’s notion of “moligopoly” is that it recognizes that genuine competition is a far more complex and interesting phenomenon, one that goes beyond the category of “the product.”

In his chapter 4, Petit lays out how this works with six of today’s large high-tech companies, adding Netflix to the usual Big Five of Amazon, Apple, Facebook, Google, and Microsoft.  If I understand properly, what he means by “moligopoly” is that these large firms have their hands in many different relevant markets.  Because they seem to be selling different “products,” they don’t seem to be competing with one another.  Yet, in a fundamental sense, they are very much competing with one another, and perhaps with firms that do not yet exist.  

In this view, diversification is at the heart of competition.  Indeed, Petit wonders at one point whether we are in a new era of “conglomeralism.”  I would argue that the diversified high-tech firms we see today are actually very unlike the conglomerates of the late twentieth century.  In my view, the earlier conglomerates were not equilibrium phenomena but rather short-lived vehicles for the radical restructuring of the American economy in the post-Bretton Woods era of globalization.  A defining characteristic of those firms was that their diversification was unrelated, not just in terms of the SIC codes of their products but also in terms of their underlying capabilities.  If we look only at the products on the demand side, today’s high-tech firms might also seem to reflect unrelated diversification.  In fact, however, unlike in the twentieth-century conglomerates, the activities of present-day high-tech firms are connected on the supply side by a common set of capabilities involving the deployment of digital technology.

Thus the boundaries of markets can shift and morph unexpectedly.  Enterprises that may seem entirely different actually harbor the potential to invade one another’s territory (or invade new territory – “competing against non-consumption”).  What Amazon can do, Google can do; and so can Microsoft.  The arena is competitive not because firms have a small share of relevant markets but because all of them sit beneath four or five Damoclean swords, suspended by the thinnest of horsehairs.  No wonder the executives of high-tech firms sound paranoid.

Petit speculates that today’s high-tech companies have diversified (among other reasons) because of complementarities.  That may be part of the story.  But as Carliss Baldwin argues (and as Petit mentions in passing), we can think about the investments high-tech firms seem to be making as options – experiments that may or may not pay off.  The more uncertain the environment, the more valuable it is to have many diverse options.  A decade or so after the breakup of AT&T, the “baby Bells” were buying into landline, cellular, cable, satellite, and many other things, not because, as many thought at the time, these were complementary, but because no one had any idea what would be important in the future (including whether there would be any complementarities).  As uncertainty resolved, these lines of business became more specialized, and the babies unbundled.  (As I write, AT&T, the baby Bell that snagged the original company name, is probably about to sell off DirecTV at a loss.)  From this perspective, the high degree of diversification we observe today implies not control of markets but the opposite – existential uncertainty about the future.

I wonder whether this kind of competition is unique to the age of the Internet.  There is an entire genre of business-school case built around an epiphany of the form: “we thought we were in the X business, but we were really in the Y business all along!”  I have recently read (listened to, technically) Marc Levinson’s wonderful history of containerized shipping.  Here the real competition occurred across modes of transport, not within existing well-defined markets.  The innovators came to realize that they were in the logistics business, not in the trucking business or the railroad business or the ocean-shipping business.  (Some of the most interesting parts of the story were about how entrepreneurship happens in a heavily regulated environment.  At one point early in the story, Malcolm McLean, the most important of these entrepreneurs, had to buy up other trucking firms just to obtain the ICC permits necessary to redesign routes efficiently.)  Of course, containerized shipping is also a modular system that some economists have accused of being a general-purpose technology like the Internet.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Doug Melamed (Professor of the Practice of Law, Stanford Law School).]

The big digital platforms make people uneasy.  Part of the unease is no doubt attributable to widespread populist concerns about large and powerful business entities.  Platforms like Facebook and Google in particular cause unease because they affect sensitive issues of communications, community, and politics.  But the platforms also make people uneasy because they seem boundless – enduring monopolies protected by ever-increasing scale and network economies, and growing monopolies aided by scope economies that enable them to conquer complementary markets.  They provoke a discussion about whether antitrust law is sufficient for the challenge.

Nicolas Petit’s Big Tech and the Digital Economy: The Moligopoly Scenario provides an insightful and valuable antidote to this unease.  While neither Panglossian nor comprehensive, Petit’s analysis persuasively argues that some of the concerns about the platforms are misguided or at least overstated.  As Petit sees it, the platforms are not so much monopolies in discrete markets – search, social networking, online commerce, and so on – as “multibusiness firms with business units in partly overlapping markets” that are engaged in a “dynamic oligopoly game” that might be “the socially optimal industry structure.”  Petit suggests that we should “abandon or at least radically alter traditional antitrust principles,” which are aimed at preserving “rivalry,” and “adapt to the specific non-rival economics of digital markets.”  In other words, the law should not try to diminish the platforms’ unique dominance in their individual sectors, which have already tipped to a winner-take-all (or most) state and in which protecting rivalry is not “socially beneficial.”  Instead, the law should encourage reductions of output in tipped markets in which the dominant firm “extracts a monopoly rent” in order to encourage rivalry in untipped markets. 

Petit’s analysis rests on the distinction between “tipped markets,” in which “tech firms with observed monopoly positions can take full advantage of their market power,” and “untipped markets,” which are “characterized by entry, instability and uncertainty.”  Notably, however, he does not expect “dispositive findings” as to whether a market is tipped or untipped.  The idea is to define markets, not just by “structural” factors like rival goods and services, market shares and entry barriers, but also by considering “uncertainty” and “pressure for change.”

Not surprisingly, given Petit’s training and work as a European scholar, his discussion of “antitrust in moligopoly markets” includes prescriptions that seem to one schooled in U.S. antitrust law to be a form of regulation that goes beyond proscribing unlawful conduct.  Petit’s principal concern is with reducing monopoly rents available to digital platforms.  He rejects direct reduction of rents by price regulation as antithetical to antitrust’s DNA and proposes instead indirect reduction of rents by permitting users on the inelastic side of a platform (the side from which the platform gains most of its revenues) to collaborate in order to gain countervailing market power and by restricting the platforms’ use of vertical restraints to limit user bypass. 

He would create a presumption against all horizontal mergers by dominant platforms in order to “prevent marginal increases of the output share on which the firms take a monopoly rent” and would avoid the risk of defining markets narrowly and thus failing to recognize that platforms are conglomerates that provide actual or potential competition in multiple partially overlapping commercial segments. By contrast, Petit would restrict the platforms’ entry into untipped markets only in “exceptional circumstances.”  For this, Petit suggests four inquiries: whether leveraging of network effects is involved; whether platform entry deters or forecloses entry by others; whether entry by others pressures the monopoly rents; and whether entry into the untipped market is intended to deter entry by others or is a long-term commitment.

One might question the proposition, which is central to much of Petit’s argument, that reducing monopoly rents in tipped markets will increase the platforms’ incentives to enter untipped markets.  Entry into untipped markets is likely to depend more on expected returns in the untipped market, the cost of capital, and constraints on managerial bandwidth than on expected returns in the tipped market.  But the more important issue, at least from the perspective of competition law, is whether – even assuming the correctness of all aspects of Petit’s economic analysis — the kind of categorical regulatory intervention proposed by Petit is superior to a law enforcement regime that proscribes only anticompetitive conduct that increases or threatens to increase market power.  Under U.S. law, anticompetitive conduct is conduct that tends to diminish the competitive efficacy of rivals and does not sufficiently enhance economic welfare by reducing costs, increasing product quality, or reducing above-cost prices.

If there were no concerns about the ability of legal institutions to know and understand the facts, a law enforcement regime would seem clearly superior.  Consider, for example, Petit’s recommendation that entry by a platform monopoly into untipped markets should be restricted only when network effects are involved and after taking into account whether the entry tends to protect the tipped market monopoly and whether it reflects a long-term commitment.  Petit’s proposed inquiries might make good sense as a way of understanding as a general matter whether market extension by a dominant platform is likely to be problematic.  But it is hard to see how economic welfare is promoted by permitting a platform to enter an adjacent market (e.g., Amazon entering a complementary product market) by predatory pricing or by otherwise unprofitable self-preferencing, even if the entry is intended to be permanent and does not protect the platform monopoly. 

Similarly, consider the proposed presumption against horizontal mergers.  That might not be a good idea if there is a small (10%) chance that the acquired firm would otherwise endure and modestly reduce the platform’s monopoly rents and an equal or even smaller chance that the acquisition will enable the platform, by taking advantage of economies of scope and asset complementarities, to build from the acquired firm an improved business that is much more valuable to consumers.  In that case, the expected value of the merger in welfare terms might be very positive.  Similarly, Petit would permit acquisitions by a platform of firms outside the tipped market as long as the platform has the ability and incentive to grow the target.  But the growth path of the target is not set in stone.  The platform might use it as a constrained complement, while an unaffiliated owner might build it into something both more valuable to consumers and threatening to the platform.  Maybe one of these stories describes Facebook’s acquisition of Instagram.
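To make the expected-value point concrete, here is a back-of-the-envelope sketch with hypothetical figures. The 10% survival probability comes from the text above; the dollar amounts and the probability of a successful build-out are invented purely for illustration:

```python
# Back-of-the-envelope welfare arithmetic for the merger presumption.
# All figures are hypothetical, chosen only to illustrate the argument.
p_endure = 0.10           # chance the acquired firm would otherwise endure (from the text)
rent_reduction = 20.0     # welfare gain (in $M) if it survives and modestly erodes rents
p_improve = 0.08          # "equal or even smaller" chance the platform builds a better business
improvement_gain = 400.0  # welfare gain (in $M) if the improved business materializes

# Expected welfare effect of permitting the merger rather than blocking it:
expected_value = p_improve * improvement_gain - p_endure * rent_reduction
print(f"Expected welfare effect of the merger: ${expected_value:.1f}M")
```

With these numbers the expected effect is positive ($30M), which is the point: a categorical presumption against the merger would forgo that expected gain even though the pro-competitive outcome is individually less likely than not.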

The prototypical anticompetitive horizontal merger story is one in which actual or potential competitors agree to share the monopoly rents that would be dissipated by competition between them. That story is confounded by communications that seem like threats, which imply a story of exclusion rather than collusion.  Petit refers to one such story.  But the threat story can be misleading.  Suppose, for example, that Platform sees Startup introduce a new business concept and studies whether it could profitably emulate Startup.  Suppose further that Platform concludes that, because of scale and scope economies available to it, it could develop such a business and come to dominate the market for a cost of $100 million acting alone or $25 million if it can acquire Startup and take advantage of its existing expertise, intellectual property, and personnel.  In that case, Platform might explain to Startup the reality that Platform is going to take the new market either way and propose to buy Startup for $50 million (thus offering Startup two-thirds of the gains from trade).  Startup might refuse, perhaps out of vanity or greed, in which case Platform as promised might enter aggressively and, without engaging in predatory or other anticompetitive conduct, drive Startup from the market.  To an omniscient law enforcement regime, there should be no antitrust violation from either an acquisition or the aggressive competition.  Either way, the more efficient provider prevails so the optimum outcome is realized in the new market.  The merger would have been more efficient because it would have avoided wasteful duplication of startup costs, and the merger proposal (later characterized as a threat) was thus a benign, even procompetitive, invitation to collude.  It would be a different story of course if Platform could overcome Startup’s first mover advantage only by engaging in anticompetitive conduct.

The problem is that antitrust decision makers often cannot understand all the facts.  Take the threat story, for example.  If Startup acquiesces and accepts the $50 million offer, the decision maker will have to determine whether Platform could have driven Startup from the market without engaging in predatory or anticompetitive conduct and, if not, whether absent the merger the parties would have competed against one another.  In other situations, decision makers are asked to determine whether the conduct at issue would be more likely than the but-for world to promote innovation or other, similarly elusive matters.

U.S. antitrust law accommodates its unavoidable uncertainty by various default rules and practices.  Some, like per se rules and the controversial Philadelphia National Bank presumption, might on occasion prohibit conduct that would actually have been benign or even procompetitive.  Most, however, insulate from antitrust liability conduct that might actually be anticompetitive.  These include rules applicable to predatory pricing, refusals to deal, two-sided markets, and various matters involving patents.  Perhaps more important are proof requirements in general.  U.S. antitrust law is based on the largely unexamined notion that false positives are worse than false negatives and thus, for the most part, puts the burden of uncertainty on the plaintiff.

Petit is proposing, in effect, an alternative approach for the digital platforms.  This approach would not just proscribe anticompetitive conduct.  It would, instead, apply to specific firms special rules that are intended to promote a desired outcome, the reduction in monopoly rents in tipped digital markets.  So, one question suggested by Petit’s provocative study is whether the inevitable uncertainty surrounding issues of platform competition is best addressed by the kinds of categorical rules Petit proposes or by case-by-case application of abstract legal principles.  Put differently, assuming that economic welfare is the objective, what is the best way to minimize error costs?

Broadly speaking, there are two kinds of error costs: specification errors and application errors.  Specification errors reflect legal rules that do not map perfectly to the normative objectives of the law (e.g., a rule that would prohibit all horizontal mergers by dominant platforms when some such mergers are procompetitive or welfare-enhancing).  Application errors reflect mistaken application of the legal rule to the facts of the case (e.g., an erroneous determination whether the conduct excludes rivals or provides efficiency benefits).   

Application errors are the most likely source of error costs in U.S. antitrust law.  The law relies largely on abstract principles that track the normative objectives of the law (e.g., conduct by a monopoly that excludes rivals and has no efficiency benefit is illegal). Several recent U.S. antitrust decisions (American Express, Qualcomm, and Farelogix among them) suggest that error costs in a law enforcement regime like that in the U.S. might be substantial and even that case-by-case application of principles that require applying economic understanding to diverse factual circumstances might be beyond the competence of generalist judges.  Default rules applicable in special circumstances reduce application errors but at the expense of specification errors.

Specification errors are more likely with categorical rules, like those suggested by Petit.  The total costs of those specification errors are likely to exceed the costs of mistaken decisions in individual cases because categorical rules guide firm conduct in general, not just in decided cases, and rules that embody specification errors are thus likely to encourage undesirable conduct and to discourage desirable conduct in matters that are not the subject of enforcement proceedings.  Application errors, unless systematic and predictable, are less likely to impose substantial costs beyond the costs of mistaken decisions in the decided cases themselves.  Whether any particular categorical rules are likely to have error costs greater than the error costs of the existing U.S. antitrust law will depend in large part on the specification errors of the rules and on whether their application is likely to be accompanied by substantial application costs.

As discussed above, the particular rules suggested by Petit appear to embody important specification errors.  They are likely also to lead to substantial application errors because they would require determination of difficult factual issues.  These include, for example, whether the market at issue has tipped, whether the merger is horizontal, and whether the platform’s entry into an untipped market is intended to be permanent.  It thus seems unlikely, at least from this casual review, that adoption of the rules suggested by Petit will reduce error costs.

 Petit’s impressive study might therefore be most valuable, not as a roadmap for action, but as a source of insight and understanding of the facts – what Petit calls a “mental model to help decision makers understand the idiosyncrasies of digital markets.”  If viewed, not as a prescription for action, but as a description of the digital world, the Moligopoly Scenario can help address the urgent matter of reducing the costs of application errors in U.S. antitrust law.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.]

To mark the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario”, Truth on the Market and the International Center for Law & Economics (ICLE) are hosting some of the world’s leading scholars and practitioners of competition law and economics to discuss some of the book’s themes.

In his book, Petit offers a “moligopoly” framework for understanding competition between large tech companies that may have significant market shares in their ‘home’ markets but nevertheless compete intensely in adjacent ones. Petit argues that tech giants coexist as both monopolies and oligopolies in markets defined by uncertainty and dynamism, and offers policy tools for dealing with the concerns people have about these markets that avoid crude “big is bad” assumptions and do not try to solve non-economic harms with the tools of antitrust.

This symposium asks contributors to give their thoughts either on the book as a whole or on a selected chapter that relates to their own work. In it we hope to explore some of Petit’s arguments with different perspectives from our contributors.

Confirmed Participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Kelly Fayne, Antitrust Associate, Latham & Watkins
  • Shane Greenstein, Professor of Business Administration; Co-chair of the HBS Digital Initiative, Harvard Business School
  • Peter Klein, Professor of Entrepreneurship and Chair, Department of Entrepreneurship and Corporate Innovation, Baylor University
  • William Kovacic, Global Competition Professor of Law and Policy; Director, Competition Law Center, George Washington University Law School
  • Kai-Uwe Kuhn, Academic Advisor, University of East Anglia
  • Richard Langlois, Professor of Economics, University of Connecticut
  • Doug Melamed, Professor of the Practice of Law, Stanford Law School
  • David Teece, Professor in Global Business, University of California’s Haas School of Business (Berkeley); Director, Center for Global Strategy and Governance; Faculty Director, Institute for Business Innovation

Thank you again to all of the excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting later today, October 12, 2020.