Archives For Google

Others already have noted that the Federal Trade Commission’s (FTC) recently released 6(b) report on the privacy practices of Internet service providers (ISPs) fails to comprehend that widespread adoption of privacy-enabling technology—in particular, Hypertext Transfer Protocol Secure (HTTPS) and DNS over HTTPS (DoH), but also the use of virtual private networks (VPNs)—largely precludes ISPs from seeing what their customers do online.

But a more fundamental problem with the report lies in its underlying assumption that targeted advertising is inherently nefarious. Indeed, much of the report highlights not actual violations of the law by the ISPs, but “concerns” that they could use customer data for targeted advertising much like Google and Facebook already do. The final subheading before the report’s conclusion declares: “Many ISPs in Our Study Can Be At Least As Privacy-Intrusive as Large Advertising Platforms.”

The report does not elaborate on why it would be bad for ISPs to enter the targeted advertising market, which is particularly strange given the spotlight regulators have shone in recent months on the supposed dominance of Google, Facebook, and Amazon in online advertising. As the International Center for Law & Economics (ICLE) has argued in past filings on the issue, there simply is no justification to apply sector-specific regulations to ISPs for the mere possibility that they will use customer data for targeted advertising.

ISPs Could Be Competition for the Digital Advertising Market

It is ironic to witness FTC warnings about ISPs engaging in targeted advertising even as there are open antitrust cases against Google for its alleged dominance of the digital advertising market. In fact, news reports suggest the U.S. Justice Department (DOJ) is preparing to join the antitrust suits against Google brought by state attorneys general. An obvious upshot of ISPs engaging in a larger amount of targeted advertising is that they could serve as a potential source of competition for Google, Facebook, and Amazon.

Despite the fears raised in the 6(b) report of rampant data collection for targeted ads, ISPs are, in fact, just a very small part of the $152.7 billion U.S. digital advertising market. As the report itself notes: “in 2020, the three largest players, Google, Facebook, and Amazon, received almost two-thirds of all U.S. digital advertising,” while Verizon pulled in just 3.4% of U.S. digital advertising revenues in 2018.

If the 6(b) report is correct that ISPs have access to troves of consumer data, it raises the question of why they don’t enjoy a bigger share of the digital advertising market. It could be that ISPs have other reasons not to engage in extensive advertising. Internet service provision is a two-sided market. ISPs could (and, over the years in various markets, some have) rely on advertising to subsidize Internet access. That they instead rely primarily on charging users directly for subscriptions may tell us something about prevailing demand on either side of the market.

Regardless of the reasons, the fact that ISPs have little presence in digital advertising suggests that it would be a misplaced focus for regulators to pursue industry-specific privacy regulation to crack down on ISP data collection for targeted advertising.

What’s the Harm in Targeted Advertising, Anyway?

At the heart of the FTC report is the commission’s contention that “advertising-driven surveillance of consumers’ online activity presents serious risks to the privacy of consumer data.” In Part V.B of the report, five of the six risks the FTC lists as associated with ISP data collection are related to advertising. But the only argument the report puts forth for why targeted advertising would be inherently pernicious is the assertion that it is contrary to user expectations and preferences.

As noted earlier, in a two-sided market, targeted ads could allow one side of the market to subsidize the other side. In other words, ISPs could engage in targeted advertising in order to reduce the price of access to consumers on the other side of the market. This is, indeed, one of the dominant models throughout the Internet ecosystem, so it wouldn’t be terribly unusual.

Taking away ISPs’ ability to engage in targeted advertising—particularly if it is paired with rumored net neutrality regulations from the Federal Communications Commission (FCC)—would necessarily put upward pricing pressure on the sector’s remaining revenue stream: subscriber fees. With bridging the so-called “digital divide” (i.e., building out broadband to rural and other unserved and underserved markets) a major focus of the recently enacted infrastructure spending package, it would be counterproductive to simultaneously take steps that would make Internet access more expensive and less accessible.

Even if the FTC were right that data collection for targeted advertising poses the risk of consumer harm, the report fails to justify why a regulatory scheme should apply solely to ISPs when they are such a small part of the digital advertising marketplace. Sector-specific regulation only makes sense if the FTC believes that ISPs are uniquely opaque among data collectors with respect to their collection practices.

Conclusion

The sector-specific approach implicitly endorsed by the 6(b) report would limit competition in the digital advertising market, even as there are already legal and regulatory inquiries into whether that market is sufficiently competitive. The report also fails to make the case that data collection for targeted advertising is inherently bad, or uniquely bad when done by an ISP.

There may or may not be cause for comprehensive federal privacy legislation, depending on whether it would pass cost-benefit analysis, but there is no reason to focus on ISPs alone. The FTC needs to go back to the drawing board.

The European Commission and its supporters were quick to claim victory following last week’s long-awaited General Court of the European Union ruling in the Google Shopping case. It’s hard to fault them. The judgment is ostensibly an unmitigated win for the Commission, with the court upholding nearly every aspect of its decision. 

However, the broader picture is much less rosy for both the Commission and the plaintiffs. The General Court’s ruling notably provides strong support for maintaining the current remedy package, in which rivals can bid for shopping box placement. This makes the Commission’s earlier rejection of essentially the same remedy in 2014 look increasingly frivolous. It also pours cold water on rivals’ hopes that the remedy might be replaced with something more far-reaching.

More fundamentally, the online world continues to move further from the idealistic conception of an “open internet” that regulators remain determined to foist on consumers. Indeed, users consistently choose convenience over openness, thus rejecting the vision of online markets upon which both the Commission’s decision and the General Court’s ruling are premised. 

The Google Shopping case will ultimately prove to be both a Pyrrhic victory and a monument to the pitfalls of myopic intervention in digital markets.

Google’s big remedy win

The main point of law addressed in the Google Shopping ruling concerns the distinction between self-preferencing and refusals to deal. Contrary to Google’s defense, the court ruled that self-preferencing can constitute a standalone abuse under Article 102 of the Treaty on the Functioning of the European Union (TFEU). The Commission was thus free to dispense with the stringent conditions laid out in the 1998 Bronner ruling.

This undoubtedly represents an important victory for the Commission, as it will enable the agency to launch new proceedings against both Google and other online platforms. However, the ruling will also constrain the Commission’s available remedies, and rightly so.

The origins of the Google Shopping decision are enlightening. Several rivals sought improved access to the top of the Google Search page. The Commission was receptive to those calls, but faced important legal constraints. The natural solution would have been to frame its case as a refusal to deal, which would call for a remedy in which a dominant firm grants rivals access to its infrastructure (be it physical or virtual). But going down this path would notably have required the Commission to show that effective access was “indispensable” for rivals to compete (one of the so-called Bronner conditions)—something that was most likely not the case here. 

Sensing these difficulties, the Commission framed its case in terms of self-preferencing, surmising that this would entail a much softer legal test. The General Court’s ruling vindicates this assessment (at least barring a successful appeal by Google):

240    It must therefore be concluded that the Commission was not required to establish that the conditions set out in the judgment of 26 November 1998, Bronner (C‑7/97, EU:C:1998:569), were satisfied […]. [T]he practices at issue are an independent form of leveraging abuse which involve […] ‘active’ behaviour in the form of positive acts of discrimination in the treatment of the results of Google’s comparison shopping service, which are promoted within its general results pages, and the results of competing comparison shopping services, which are prone to being demoted.

This more expedient approach, however, entails significant limits that will undercut both the Commission and rivals’ future attempts to extract more far-reaching remedies from Google.

Because the underlying harm is no longer the denial of access, but rivals being treated less favorably, the available remedies are much narrower. Google must merely ensure that it does not treat itself more favorably than rivals, regardless of whether those rivals ultimately access its infrastructure and manage to compete. The General Court says as much when it explains the theory of harm in the case at hand:

287. Conversely, even if the results from competing comparison shopping services would be particularly relevant for the internet user, they can never receive the same treatment as results from Google’s comparison shopping service, whether in terms of their positioning, since, owing to their inherent characteristics, they are prone to being demoted by the adjustment algorithms and the boxes are reserved for results from Google’s comparison shopping service, or in terms of their display, since rich characters and images are also reserved to Google’s comparison shopping service. […] they can never be shown in as visible and as eye-catching a way as the results displayed in Product Universals.

Regulation 1/2003 (Art. 7.1) ensures the European Commission can only impose remedies that are “proportionate to the infringement committed and necessary to bring the infringement effectively to an end.” This has obvious ramifications for the Google Shopping remedy.

Under the remedy accepted by the Commission, Google agreed to auction off access to the Google Shopping box. Google and rivals would thus compete on equal footing to display comparison shopping results.

Illustrations taken from Graf & Mostyn, 2020

Rivals and their consultants decried this outcome, and Margrethe Vestager intimated the Commission might review the remedy package. Both camps essentially argued the remedy did not meaningfully boost traffic to rival comparison shopping services (CSSs), because those services were not winning the best auction slots:

All comparison shopping services other than Google’s are hidden in plain sight, on a tab behind Google’s default comparison shopping page. Traffic cannot get to them, but instead goes to Google and on to merchants. As a result, traffic to comparison shopping services has fallen since the remedy—worsening the original abuse.

Or, as Margrethe Vestager put it:

We may see a show of rivals in the shopping box. We may see a pickup when it comes to clicks for merchants. But we still do not see much traffic for viable competitors when it comes to shopping comparison

But these arguments are entirely beside the point. If the infringement had been framed as a refusal to supply, it might be relevant that rivals cannot access the shopping box at what is, for them, a cost-effective price. Because the infringement was framed in terms of self-preferencing, all that matters is whether Google treats itself and its rivals equally.

I am not aware of a credible claim that this is not the case. At best, critics have suggested the auction mechanism favors Google because it essentially pays itself:

The auction mechanism operated by Google to determine the price paid for PLA clicks also disproportionately benefits Google. CSSs are discriminated against per clickthrough, as they are forced to cede most of their profit margin in order to successfully bid […] Google, contrary to rival CSSs, does not, in reality, have to incur the auction costs and bid away a great part of its profit margins.

But this reasoning completely omits Google’s opportunity costs. Imagine a hypothetical (and oversimplified) setting where retailers are willing to pay Google or rival CSSs 13 euros per click-through. Imagine further that rival CSSs can serve these clicks at a cost of 2 euros, compared to 3 euros for Google (excluding the auction fee). Google is less efficient in this hypothetical. In this setting, rivals should be willing to bid up to 11 euros per click (the difference between what they expect to earn and their other costs). Critics claim Google will be willing to bid higher because the money it pays itself during the auction is not really a cost (it ultimately flows back to Google’s pockets). That is clearly false.

To understand this, readers need only consider Google’s point of view. On the one hand, it could pay itself 11 euros (plus some tiny increment) to win the auction. Its payoff per click-through would be 10 euros (the 13 euros it receives from the retailer, minus its serving cost of 3 euros; the auction payment nets out, since Google pays it to itself). On the other hand, it could underbid rivals by a tiny increment, letting them win the slot at a bid of 11 euros, which Google collects as auction revenue. When its critics argue that Google has an advantage because it pays itself, they are ultimately claiming that 10 is larger than 11.
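For readers who want to check the arithmetic, here is a minimal sketch in Python of the hypothetical above. All of the euro figures are the illustrative assumptions from the example, not data from the case.

```python
# Per-click figures from the hypothetical above (illustrative assumptions only).
ADVERTISER_VALUE = 13.0  # what retailers will pay per click-through
RIVAL_COST = 2.0         # a rival CSS's cost of serving a click
GOOGLE_COST = 3.0        # Google's cost of serving a click (less efficient here)

# A rival will bid up to its margin: advertiser value minus its own cost.
rival_max_bid = ADVERTISER_VALUE - RIVAL_COST  # 11 euros

# Option A: Google outbids the rival and wins the slot itself.
# The auction payment flows from Google to Google, so it nets out;
# Google keeps the advertiser payment minus its serving cost.
payoff_if_google_wins = ADVERTISER_VALUE - GOOGLE_COST  # 10 euros

# Option B: Google underbids by a tiny increment and lets the rival win,
# collecting the rival's 11-euro bid as auction revenue.
payoff_if_rival_wins = rival_max_bid  # 11 euros

# The "Google pays itself" critique amounts to claiming 10 > 11.
assert payoff_if_rival_wins > payoff_if_google_wins
print(f"Google wins slot: {payoff_if_google_wins:.0f} euros; "
      f"rival wins slot: {payoff_if_rival_wins:.0f} euros")
```

As the sketch shows, whenever a rival can serve clicks more cheaply, Google earns more by losing the auction than by winning it, so paying itself confers no advantage.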

Google’s remedy could hardly be more neutral. If it wins more auction slots than rival CSSs, the appropriate inference should be that it is simply more efficient. Nothing in the Commission’s decision or the General Court’s ruling precludes that outcome. In short, while Google has (for the time being, at least) lost its battle to appeal the Commission’s decision, the remedy package—the same one it put forward back in 2014—has never looked stronger.

Good news for whom?

The above is mostly good news for both Google and consumers, who will be relieved that the General Court’s ruling preserves Google’s ability to show specialized boxes (of which the shopping unit is but one example). But that should not mask the tremendous downsides of both the Commission’s case and the court’s ruling. 

The Commission and rivals’ misapprehensions surrounding the Google Shopping remedy, as well as the General Court’s strong stance against self-preferencing, are revealing of a broader misunderstanding about online markets that also permeates other digital-regulation initiatives, such as the Digital Markets Act and the American Choice and Innovation Online Act.

Policymakers wrongly imply that platform neutrality is a good in and of itself. They assume incumbent platforms generally have an incentive to favor their own services, and that preventing them from doing so is beneficial to both rivals and consumers. Yet neither of these statements is correct.

Economic research suggests self-preferencing is only harmful in exceptional circumstances. That is true of the traditional literature on platform threats (here and here), where harm is premised on the notion that rivals will use the downstream market, ultimately, to compete with an upstream incumbent. It’s also true in more recent scholarship that compares dual-mode platforms to pure marketplaces and resellers, where harm hinges on a platform being able to immediately imitate rivals’ offerings. Even this ignores the significant efficiencies that might simultaneously arise from self-preferencing and closed platforms, more broadly. In short, rules that categorically prohibit self-preferencing by dominant platforms overshoot the mark, and the General Court’s Google Shopping ruling is a troubling development in that regard.

It is also naïve to think that prohibiting self-preferencing will automatically benefit rivals and consumers (as opposed to harming the latter and leaving the former no better off). If self-preferencing is not anticompetitive, then propping up inefficient firms will at best be a futile exercise in preserving failing businesses. At worst, it would impose significant burdens on consumers by destroying valuable synergies between the platform and its own downstream service.

Finally, if the past years teach us anything about online markets, it is that consumers place a much heavier premium on frictionless user interfaces than on open platforms. TikTok is arguably a much more “closed” experience than other sources of online entertainment, like YouTube or Reddit (i.e. users have less direct control over their experience). Yet many observers have pinned its success, among other things, on its highly intuitive and simple interface. The emergence of Vinted, a European pre-owned goods platform, is another example of competition through a frictionless user experience.

There is a significant risk that, by seeking to boost “choice,” intervention by competition regulators against self-preferencing will ultimately remove one of the benefits users value most. Because non-discrimination remedies increase the amount of information users must process, they risk merely adding pain points to the underlying purchasing process. In short, while Google Shopping is nominally a victory for the Commission and rivals, it is also a testament to the futility and harmfulness of myopic competition intervention in digital markets. Consumer preferences cannot be changed by government fiat, nor can the fact that certain firms are more efficient than others (at least, not without creating significant harm in the process). It is time this simple conclusion made its way into European competition thinking.

[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]

Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads any competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.

Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.

I will highlight five areas of concern with the DMU proposal.

I. Chilling Effects

The DMU’s ability to designate a firm as being of strategic market status—or SMS—will cast a cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms; and that may prove true, but there is nothing in the proposal limiting the DMU’s reach.

Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.

II. SMS Designations Should Not Apply to the Whole Firm

The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.

Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.

Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.

III. The DMU Cannot Be Evidence-Based Given Its Goals and Objectives

The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] The DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.

Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.

Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.

Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.

IV. Speed of Proposals

Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.

As I mentioned earlier, it seems a more cautious standard of proof to Mr. Coscelli is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive, forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than on itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.

But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competition interventions (PCIs), SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.

The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar. Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.

V. Antitrust Is Not Always the Answer

Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.

Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.

Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.

There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.

Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.

In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not suited to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.


[1] See Digital Markets Taskforce, A new pro-competition regime for digital markets, at 22, Dec. 2020, available at: https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice.pdf; Oliver Dowden & Kwasi Kwarteng, A New Pro-competition Regime for Digital Markets, at ¶ 27, July 2021, available at: https://www.gov.uk/government/consultations/a-new-pro-competition-regime-for-digital-markets.

[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.

A bipartisan group of senators unveiled legislation today that would dramatically curtail the ability of online platforms to “self-preference” their own services—for example, when Apple pre-installs its own Weather or Podcasts apps on the iPhone, giving it an advantage that independent apps don’t have. The measure is a companion to a House bill that includes similar provisions, with some changes.

1. The Senate bill closely resembles the House version, and the small improvements will probably not amount to much in practice.

The major substantive changes we have seen between the House bill and the Senate version are:

  1. Violations in Section 2(a) have been modified to refer only to conduct that “unfairly” preferences, limits, or discriminates between the platform’s products and others, and that “materially harm[s] competition on the covered platform,” rather than banning all preferencing, limits, or discrimination.
  2. The evidentiary burden required throughout the bill has been changed from “clear and convincing” evidence to a “preponderance of the evidence” (in other words, greater than 50%).
  3. An affirmative defense has been added to permit a platform to escape liability if it can establish that the challenged conduct “was narrowly tailored, was nonpretextual, and was necessary to… maintain or enhance the core functionality of the covered platform.”
  4. The minimum market capitalization for “covered platforms” has been lowered from $600 billion to $550 billion.
  5. The Senate bill would assess fines of 15% of revenues from the period during which the conduct occurred, in contrast with the House bill, which set fines equal to the greater of either 15% of prior-year revenues or 30% of revenues from the period during which the conduct occurred.
  6. Unlike the House bill, the Senate bill does not create a private right of action. Only the U.S. Justice Department (DOJ), Federal Trade Commission (FTC), and state attorneys general could bring enforcement actions on the basis of the bill.

Item one here certainly mitigates the most extreme risks of the House bill, which was drafted, bizarrely, to ban all “preferencing” or “discrimination” by platforms. If that were made law, it could literally have broken much of the Internet. The softened language reduces that risk somewhat.

However, Section 2(b), which lists types of conduct that would presumptively establish a violation under Section 2(a), is largely unchanged. As outlined here, this would amount to a broad ban on a wide swath of beneficial conduct. And “unfair” and “material” are notoriously slippery concepts. As a practical matter, their inclusion here may not significantly alter the course of enforcement under the Senate legislation from what would ensue under the House version.

Item three, which allows challenged conduct to be defended if it is “necessary to… maintain or enhance the core functionality of the covered platform,” may also protect some conduct. But because the bill requires companies to prove that challenged conduct is not only beneficial, but necessary to realize those benefits, it effectively implements a “guilty until proven innocent” standard that is likely to prove impossible to meet. The threat of permanent injunctions and enormous fines will mean that, in many cases, companies simply won’t be able to justify the expense of endeavoring to improve even the “core functionality” of their platforms in any way that could trigger the bill’s liability provisions. Thus, again, as a practical matter, the difference between the Senate and House bills may be only superficial.

The effect of this will likely be to diminish product innovation in these areas, because companies could not know in advance whether the benefits of doing so would be worth the legal risk. We have previously highlighted existing conduct that may be lost if a bill like this passes, such as pre-installation of apps or embedding maps and other “rich” results in boxes on search engine results pages. But the biggest loss may be things we don’t even know about yet, that just never happen because the reward from experimentation is not worth the risk of being found to be “discriminating” against a competitor.

We dove into the House bill in Breaking Down the American Choice and Innovation Online Act and Breaking Down House Democrats’ Forthcoming Competition Bills.

2. The prohibition on “unfair self-preferencing” is vague and expansive and will make Google, Amazon, Facebook, and Apple’s products worse. Consumers don’t want digital platforms to be dumb pipes, or to act like a telephone network or sewer system. The Internet is filled with a superabundance of information and options, as well as a host of malicious actors. Good digital platforms act as middlemen, sorting information in useful ways and taking on some of the risk that exists when, inevitably, we end up doing business with untrustworthy actors.

When users have the choice, they tend to prefer platforms that do quite a bit of “discrimination”—that is, favoring some sellers over others, or offering their own related products or services through the platform. Most people prefer Amazon to eBay because eBay is chaotic and riskier to use.

Competitors that decry self-preferencing by the largest platforms—integrating two different products with each other, like putting a maps box showing only the search engine’s own maps on a search engine results page—argue that the conduct is enabled only by a platform’s market dominance and does not benefit consumers.

Yet these companies often do exactly the same thing in their own products, regardless of whether they have market power. Yelp includes a map on its search results page, not just restaurant listings. DuckDuckGo does the same. If these companies offer these features, it is presumably because they think their users want such results. It seems perfectly plausible that Google does the same because it thinks its users—literally the same users, in most cases—also want them.

Fundamentally, and as we discuss in Against the Vertical Discrimination Presumption, there is simply no sound basis to enact such a bill (even in a slightly improved version):

The notion that self-preferencing by platforms is harmful to innovation is entirely speculative. Moreover, it is flatly contrary to a range of studies showing that the opposite is likely true. In reality, platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

We discussed self-preferencing further in Platform Self-Preferencing Can Be Good for Consumers and Even Competitors, and showed that platform “discrimination” is often what consumers want from digital platforms in On the Origin of Platforms: An Evolutionary Perspective.

3. The bill massively empowers an FTC that seems intent on using antitrust to achieve political goals. The House bill would enable competitors to pepper covered platforms with frivolous lawsuits. The bill’s sponsors presumably hope that removing the private right of action will help to avoid that. But the bill still leaves intact a much more serious risk to the rule of law: the bill’s provisions are so broad that federal antitrust regulators will have enormous discretion over which cases they take.

This means that whoever is running the FTC and DOJ will be able to threaten covered platforms with a broad array of lawsuits, potentially to influence or control their conduct in other, unrelated areas. While some supporters of the bill regard this as a positive, most antitrust watchers would greet this power with much greater skepticism. Fundamentally, both bills grant antitrust enforcers wildly broad powers to pursue goals unrelated to competition. FTC Chair Lina Khan has, for example, argued that “the dispersion of political and economic control” ought to be antitrust’s goal. Commissioner Rebecca Kelly Slaughter has argued that antitrust should be “antiracist.”

Whatever the desirability of these goals, the broad discretionary authority the bills confer on the antitrust agencies means that individual commissioners may have significantly greater scope to pursue the goals they believe to be right, rather than those set by Congress.

See discussions of this point at What Lina Khan’s Appointment Means for the House Antitrust Bills, Republicans Should Tread Carefully as They Consider ‘Solutions’ to Big Tech, The Illiberal Vision of Neo-Brandeisian Antitrust, and Alden Abbott’s discussion of FTC Antitrust Enforcement and the Rule of Law.

4. The bill adopts European principles of competition regulation. These are, to put it mildly, not obviously conducive to the sort of innovation and business growth that Americans may expect. Europe has no tech giants of its own, a condition that shows little sign of changing. Apple, alone, is worth as much as the top 30 companies in Germany’s DAX index, and the top 40 in France’s CAC index. Landmark European competition cases have seen Google fined for embedding Shopping results in the Search page—not because it hurt consumers, but because it hurt competing price-comparison websites.

A fundamental difference between American and European competition regimes is that the U.S. system is far more friendly to businesses that obtain dominant market positions because they have offered better products more cheaply. Under the American system, successful businesses are normally given broad scope to charge high prices and refuse to deal with competitors. This helps to increase the rewards and incentive to innovate and invest in order to obtain that strong market position. The European model is far more burdensome.

The Senate bill adopts a European approach to refusals to deal—the same approach that led the European Commission to fine Microsoft for including Windows Media Player with Windows—and applies it across Big Tech broadly. Adopting this kind of approach may end up undermining elements of U.S. law that support innovation and growth.

For more, see How US and EU Competition Law Differ.

5. The proposals are based on a misunderstanding of the state of competition in the American economy, and of antitrust enforcement. It is widely believed that the U.S. economy has seen diminished competition. This is mistaken, particularly with respect to digital markets. Apparent rises in market concentration and profit margins disappear when we look more closely: local-level concentration is falling even as national-level concentration is rising, driven by more efficient chains setting up more stores in areas that were previously served by only one or two firms.

And apparent rises in markups largely disappear after accounting for fixed costs like R&D and marketing.

Where profits are rising, in areas like manufacturing, it appears to be mainly driven by increased productivity, not higher prices. Real prices have not risen in line with markups. Where profitability has increased, it has been mainly driven by falling costs.

Nor has the number of antitrust cases brought by federal antitrust agencies fallen. The likelihood of a merger being challenged more than doubled between 1979 and 2017. And there is little reason to believe that the deterrent effect of antitrust has weakened. Many critics of Big Tech have decided that there must be a problem and have worked backwards from that conclusion, selecting whatever evidence supports it and ignoring the evidence that does not. The consequence of such motivated reasoning is bills like this.

See Geoff’s April 2020 written testimony to the House Judiciary Investigation Into Competition in Digital Markets here.

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display-advertising business.

Broadly, the Texas complaint (previously discussed in this TOTM symposium) alleges that Google possesses market power in ad-buying tools and in search.

The complaint also alleges anticompetitive conduct by Google with respect to YouTube in a separate “inline video-advertising market.” According to the complaint, this market power is leveraged to force transactions through Google’s exchange, AdX, and its network, Google Display Network. The leverage is further exercised by forcing publishers to license Google’s ad server, Google Ad Manager.

Although the Texas complaint raises many specific allegations, the key ones constitute four broad claims: 

  1. Google forces publishers to license Google’s ad server and trade in Google’s ad exchange;
  2. Google uses its control over publishers’ inventory to block exchange competition;
  3. Google has disadvantaged technology known as “header bidding” in order to prevent publishers from accessing its competitors; and
  4. Google prevents rival ad-placement services from competing by not allowing them to buy YouTube ad space.

Alleged harms

The Texas complaint alleges Google’s conduct has caused harm to competing networks, exchanges, and ad servers. The complaint also claims that the plaintiff states’ economies have been harmed “by depriving the Plaintiff States and the persons within each Plaintiff State of the benefits of competition.”

In a nod to the widely accepted Consumer Welfare Standard, the Texas complaint alleges harm to three categories of consumers:

  1. Advertisers who pay for their ads to be displayed, but should be paying less;
  2. Publishers who are paid to provide space on their sites to display ads, but should be paid more; and
  3. Users who visit the sites, view the ads, and purchase or use the advertisers’ and publishers’ products and services.

The complaint claims users are harmed by above-competitive prices paid by advertisers, in that these higher costs are passed on in the form of higher prices and lower quality for the products and services they purchase from those advertisers. The complaint simultaneously claims that users are harmed by the below-market prices received by publishers in the form of “less content (lower output of content), lower-quality content, less innovation in content delivery, more paywalls, and higher subscription fees.”

Without saying so explicitly, the complaint insinuates that if intermediaries (e.g., Google and competing services) charged lower fees for their services, advertisers would pay less, publishers would be paid more, and consumers would be better off in the form of lower prices and better products from advertisers, as well as improved content and lower fees on publishers’ sites.

Effective competition is not an antitrust offense

A flawed premise underlies much of the Texas complaint. It asserts that conduct by a dominant incumbent firm that makes competition more difficult for competitors is inherently anticompetitive, even if that conduct confers benefits on users.

This amounts to a claim that Google is acting anti-competitively by innovating and developing products and services to benefit one or more display-advertising constituents (e.g., advertisers, publishers, or consumers) or by doing things that benefit the advertising ecosystem more generally. These include creating new and innovative products, lowering prices, reducing costs through vertical integration, or enhancing interoperability.

The argument, which is made explicitly elsewhere, is that Google must show that it has engineered and implemented its products to minimize obstacles its rivals face, and that any efficiencies created by its products must be shown to outweigh the costs imposed by those improvements on the company’s competitors.

Similarly, claims that Google has acted in an anticompetitive fashion rest on the unsupportable notion that the company acts unfairly when it designs products to benefit itself without considering how those designs would affect competitors. Google could, it is argued, choose alternate arrangements and practices that would possibly confer greater revenue on publishers or lower prices on advertisers without imposing burdens on competitors.

For example, a report published by the Omidyar Network sketching a “roadmap” for a case against Google claims that, if Google’s practices could possibly be reimagined to achieve the same benefits in ways that foster competition from rivals, then the practices should be condemned as anticompetitive:

It is clear even to us as lay people that there are less anticompetitive ways of delivering effective digital advertising—and thereby preserving the substantial benefits from this technology—than those employed by Google.

– Fiona M. Scott Morton & David C. Dinielli, “Roadmap for a Digital Advertising Monopolization Case Against Google”

But that’s not how the law—or the economics—works. This approach converts beneficial aspects of Google’s ad-tech business into anticompetitive defects, essentially arguing that successful competition and innovation create barriers to entry that merit correction through antitrust enforcement.

This approach turns U.S. antitrust law (and basic economics) on its head. As some of the most well-known words of U.S. antitrust jurisprudence have it:

A single producer may be the survivor out of a group of active competitors, merely by virtue of his superior skill, foresight and industry. In such cases a strong argument can be made that, although the result may expose the public to the evils of monopoly, the Act does not mean to condemn the resultant of those very forces which it is its prime object to foster: finis opus coronat. The successful competitor, having been urged to compete, must not be turned upon when he wins.

– United States v. Aluminum Co. of America, 148 F.2d 416 (2d Cir. 1945)

U.S. antitrust law is intended to foster innovation that creates benefits for consumers, including innovation by incumbents. The law does not proscribe efficiency-enhancing unilateral conduct on the grounds that it might also inconvenience competitors, or that there is some other arrangement that could be “even more” competitive. Under U.S. antitrust law, firms are “under no duty to help [competitors] survive or expand.”  

To be sure, the allegations against Google are couched in terms of anticompetitive effect, rather than being described merely as commercial disagreements over the distribution of profits. But these effects are simply inferred, based on assumptions that Google’s vertically integrated business model entails an inherent ability and incentive to harm rivals.

The Texas complaint claims Google can surreptitiously derive benefits from display advertisers by leveraging its search-advertising capabilities, or by “withholding YouTube inventory,” rather than altruistically opening Google Search and YouTube up to rival ad networks. The complaint alleges Google uses its access to advertiser, publisher, and user data to improve its products without sharing this data with competitors.

All these charges may be true, but they do not describe inherently anticompetitive conduct. Under U.S. law, companies are not obliged to deal with rivals and certainly are not obliged to do so on those rivals’ preferred terms.

As long ago as 1919, the U.S. Supreme Court held that:

In the absence of any purpose to create or maintain a monopoly, the [Sherman Act] does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.

– United States v. Colgate & Co.

U.S. antitrust law does not condemn conduct on the basis that an enforcer (or a court) is able to identify or hypothesize alternative conduct that might plausibly provide similar benefits at lower cost. In alleging that there are ostensibly “better” ways that Google could have pursued its product design, pricing, and terms of dealing, both the Texas complaint and Omidyar “roadmap” assert that, had the firm only selected a different path, an alternative could have produced even more benefits or an even more competitive structure.

The purported cure of tinkering with benefit-producing unilateral conduct by applying an “even more competition” benchmark is worse than the supposed disease. The adjudicator is likely to misapply such a benchmark, deterring the very conduct the law seeks to promote.

For example, the Texas complaint alleges: “Google’s ad server passed inside information to Google’s exchange and permitted Google’s exchange to purchase valuable impressions at artificially depressed prices.” The Omidyar Network’s “roadmap” claims that “after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. Low prices for this service can force rivals to depart, thereby directly reducing competition.”

In contrast, as current U.S. Supreme Court Associate Justice Stephen Breyer once explained, in the context of above-cost low pricing, “the consequence of a mistake here is not simply to force a firm to forego legitimate business activity it wishes to pursue; rather, it is to penalize a procompetitive price cut, perhaps the most desirable activity (from an antitrust perspective) that can take place in a concentrated industry where prices typically exceed costs.” That commentators or enforcers may be able to imagine alternative or theoretically more desirable conduct is beside the point.

It has been reported that the U.S. Justice Department (DOJ) may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

Policymakers’ recent focus on how Big Tech should be treated under antitrust law has been accompanied by claims that companies like Facebook and Google hold dominant positions in various “markets.” Notwithstanding the tendency to conflate whether a firm is large with whether it holds a dominant position, we must first answer the question most of these claims tend to ignore: “dominant over what?”

For example, as set out in this earlier Truth on the Market post, a recent lawsuit filed by various states and the U.S. Justice Department outlined five areas related to online display advertising over which Google is alleged by the plaintiffs to hold a dominant position. But crucially, none appear to have been arrived at via the application of economic reasoning.

As that post explained, other forms of advertising (such as online search and offline advertising) might form part of a “relevant market” (i.e., the market in which a product actually competes) over which Google’s alleged dominance should be assessed. The post makes a strong case for the actual relevant market being much broader than that claimed in the lawsuit. Of course, some might disagree with that assessment, so it is useful to step back and examine the principles that underlie and motivate how a relevant market is defined.

In any antitrust case, defining the relevant market should be regarded as a means to an end, not an end in itself. While such definitions provide the basis to calculate market shares, the process of thinking about relevant markets also should provide a framework to consider and highlight important aspects of the case. The process enables one to think about how a particular firm and market operates, the constraints that it and rival firms face, and whether entry by other firms is feasible or likely.

Many naïve attempts to define the relevant market will limit their analysis to a particular industry. But an industry could include too few competitors, or it might even include too many—for example, if some firms in the industry generate products that do not constitute strong competitive constraints. If one were to define all cars as the “relevant” market, that would imply that a Dacia Sandero (a supermini model produced by Renault’s Romanian subsidiary Dacia) constrains the price of Maserati’s Quattroporte luxury sports sedan as much as the Ferrari Portofino grand touring sports car does. This is very unlikely to hold in reality.[1]

The relevant market should be the smallest group of products and services that includes all those capable of providing a reasonable competitive constraint. But that, of course, merely raises the question of what counts as a “reasonable competitive constraint.” Thankfully, by applying economic reasoning, we can answer that question.

More specifically, we have the “hypothetical monopolist” (HM) test. This test asks whether a hypothetical monopolist (i.e., a single firm that controlled all the products considered part of the relevant market) could profitably impose “a small but significant, non-transitory, increase in price” (hence the common shorthand “SSNIP test”).[2]
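The term “profitably” can be made precise with critical-loss analysis, a standard companion to the SSNIP test (the discussion here leaves it implicit, so treat the following as background). For a price increase of $s$ and a gross percentage margin of $m$, the increase is profitable only if the share of sales the hypothetical monopolist actually loses stays below the “critical loss”:

$$
L_{\text{actual}} < L_{\text{critical}} = \frac{s}{s+m}
$$

With the customary $s$ of 5% and an assumed margin of, say, 30%, the hypothetical monopolist could lose at most $0.05/0.35 \approx 14\%$ of its sales before the price increase became unprofitable.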

If the hypothetical monopolist could profitably implement this increase in price, then the group of products under consideration is said to constitute a relevant market. On the other hand, if the hypothetical monopolist could not profitably increase the price of that group of products (due to demand-side or supply-side constraints on its ability to increase prices), then that group of products is not a relevant market, and more products need to be included in the candidate relevant market. The process of widening the group of products continues until the hypothetical monopolist could profitably increase prices over that group.

So how does this test work in practice? Let’s use an example to make things concrete. In particular, let’s focus on Google’s display advertising, which has attracted significant attention. Starting from the narrowest possible market, Google’s own display advertising, the HM test would ask whether a hypothetical monopolist controlling these services (and just these services) could profitably raise their prices permanently by 5% to 10%.

At this initial stage, it is important to avoid the “cellophane fallacy,” in which a monopolist firm could not profitably increase its prices by 5% to 10% because it is already charging the monopoly price. This fallacy usually arises in situations where the product under consideration has very few (if any) substitutes. But as has been shown here, there are already plenty of alternatives to Google’s display-advertising services, so we can be reasonably confident that the fallacy does not apply here.

We would then consider what is likely to happen if Google were to increase the prices of its online display advertising services by 5% to 10%. Given the plethora of other options (such as Microsoft, Facebook, and Simpli.fi) customers have for obtaining online display ads, a sufficiently high number of Google’s customers are likely to switch away, such that the price increase would not be profitable. It is therefore necessary to expand the candidate relevant market to include those closest alternatives to which Google’s customers would switch.

We repeat the exercise, but now with the hypothetical monopolist also increasing the prices of those newly included products. It might be the case that alternatives such as online search ads (as opposed to display ads), print advertising, TV advertising and/or other forms of advertising would sufficiently constrain the hypothetical monopolist in this case that those other alternatives form part of the relevant market.
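To make the widening loop concrete, here is a minimal sketch in Python of the procedure described above, using the critical-loss condition as the profitability check. Every product grouping and every number in it is hypothetical: illustrative stand-ins, not estimates from any actual market or case.

```python
# Illustrative sketch of the iterative hypothetical monopolist (HM) test.
# A price rise of s is profitable iff the share of sales actually lost
# stays below the critical loss s / (s + m), where m is the HM's margin.

def critical_loss(s: float, m: float) -> float:
    """Largest share of sales the HM can lose and still profit from a rise of s."""
    return s / (s + m)

def hm_test(candidates, s=0.05, m=0.30):
    """Widen the candidate market until a price rise of s would be profitable.

    `candidates` lists (product group, estimated share of sales the HM would
    lose over the widened group), ordered from the narrowest market outward.
    """
    market = []
    for product, actual_loss in candidates:
        market.append(product)
        if actual_loss < critical_loss(s, m):
            return market  # the HM could profitably raise prices over this group
    return market  # even the widest candidate group failed the test

# Hypothetical switching estimates, narrowest candidate market first.
candidates = [
    ("Google display ads", 0.40),   # too many customers would switch away
    ("+ other display ads", 0.25),  # still unprofitable
    ("+ search ads", 0.10),         # loss now below the ~14% critical threshold
]
print(hm_test(candidates))
# ['Google display ads', '+ other display ads', '+ search ads']
```

The only substantive content of the sketch is the stopping rule: the relevant market is the first (smallest) group of products over which the hypothetical price increase turns profitable.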

In determining whether an alternative sufficiently constrains our hypothetical monopolist, it is important to consider actual consumer/firm behavior, rather than relying on products having “similar” characteristics. Although constraints can come from either the demand side (i.e., customers switching to another provider) or the supply side (entry/switching by other providers to start producing the products offered by the HM), for market-definition purposes, it is almost always demand-side switching that matters most. Switching by consumers tends to happen much more quickly than does switching by providers, such that it can be a more effective constraint. (Note that supply-side switching is still important when assessing overall competitive constraints, but because such switching can take one or more years, it is usually considered in the overall competitive assessment, rather than at the market-definition stage.)

Identifying which alternatives consumers do and would switch to therefore highlights the rival products and services that constrain the candidate hypothetical monopolist. It is only once the hypothetical monopolist test has been completed and the relevant market has been found that market shares can be calculated.[3]

It is at that point that an assessment of a firm’s alleged market power (or of a proposed merger) can proceed. This is why claims that “Facebook is a monopolist” or that “Google has market power” often fail at the first hurdle (indeed, in the case of Facebook, they recently have).

Indeed, I would go so far as to argue that any antitrust claim that does not first undertake a market-definition exercise with sound economic reasoning akin to that described above should be discounted and ignored.


[1] Some might argue that there is a “chain of substitution” from the Maserati to, for example, an Audi A4, to a Ford Focus, to a Mini, to a Dacia Sandero, such that the latter does, indeed, provide some constraint on the former. However, the size of that constraint is likely to be de minimis, given how many “links” there are in that chain.

[2] The “small but significant” price increase is usually taken to be between 5% and 10%.

[3] Even if a product or group of products ends up excluded from the definition of the relevant market, these products can still form a competitive constraint in the overall assessment and are still considered at that point.

Digital advertising is the economic backbone of the Internet. It allows websites and apps to monetize their userbase without having to charge them fees, while the emergence of targeted ads allows this to be accomplished affordably and with less time wasted.

This advertising is facilitated by intermediaries using the “adtech stack,” through which advertisers and publishers are matched via auctions and ads ultimately are served to relevant users. This intermediation process has advanced enormously over the past three decades. Some now allege, however, that this market is being monopolized by its largest participant: Google.

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display advertising business. Those 10 original state plaintiffs were joined by another four states and the Commonwealth of Puerto Rico in March 2021, while South Carolina and Louisiana have also moved to be added as additional plaintiffs. Google also faces a pending antitrust lawsuit brought by the U.S. Justice Department (DOJ) and 14 states (originally 11) related to the company’s distribution agreements, as well as a separate action by the State of Utah, 35 other states, and the District of Columbia related to its search design.

In recent weeks, it has been reported that the DOJ may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

Relevant market

The Texas complaint identifies at least five relevant markets within the adtech stack that it alleges Google either is currently monopolizing or is attempting to monopolize:

  1. Publisher ad servers;
  2. Display ad exchanges;
  3. Display ad networks;
  4. Ad-buying tools for large advertisers; and
  5. Ad-buying tools for small advertisers.

None of these constitute an economically relevant product market for antitrust purposes, since each “market” is defined according to how superficially similar the products are in function, not how substitutable they are. Nevertheless, the Texas complaint vaguely echoes how markets were conceived in the “Roadmap” for a case against Google’s advertising business, published last year by the Omidyar Network, which may ultimately influence any future DOJ complaint, as well.

The Omidyar Roadmap narrows the market from media advertising to digital advertising, then to the open supply of display ads, which comprises only 9% of total advertising spending and less than 20% of digital advertising. It then further narrows the defined market to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the Roadmap authors conclude that Google’s market share is “perhaps sufficient to confer market power.”

While whittling down the defined market may achieve the purposes of sketching a roadmap to prosecute Google, it also generates a mishmash of more than a dozen relevant markets for digital display and video advertising. In many of these, Google doesn’t have anything approaching market power, while, in some, Facebook is the most dominant player.

The Texas complaint adopts a non-economic approach to market definition. It ignores potential substitutability between different kinds of advertising, both online and offline, which can serve as a competitive constraint on the display advertising market. The complaint considers neither alternative forms of display advertising, such as social media ads, nor alternative forms of advertising, such as search ads or non-digital ads—all of which can and do act as substitutes. It is possible, at the very least, that advertisers who place ads on third-party websites would switch to other forms of advertising if the price of third-party website advertising rose above competitive levels. To ignore this possibility, as the Texas complaint does, is to ignore the entire purpose of defining the relevant antitrust market.

Offline advertising vs. online advertising

The fact that offline and online advertising employ distinct processes does not consign them to economically distinct markets. Indeed, online advertising has manifestly drawn advertisers from offline markets, just as previous technological innovations drew advertisers from other pre-existing channels.

Moreover, there is evidence that, in some cases, offline and online advertising are substitute products. For example, economists Avi Goldfarb and Catherine Tucker demonstrate that display advertising pricing is sensitive to the availability of offline alternatives. They conclude:

We believe our studies refute the hypothesis that online and offline advertising markets operate independently and suggest a default position of substitution. Online and offline advertising markets appear to be closely related. That said, it is important not to draw any firm conclusions based on historical behavior.

Display ads vs. search ads

There is perhaps even more reason to doubt that online display advertising constitutes a distinct, economically relevant market from online search advertising.

Although casual and ill-informed claims are often made to the contrary, various forms of targeted online advertising are significant competitors of each other. Bo Xing and Zhanxi Lin report that firms spread their marketing budgets across these different sources of online marketing, and “search engine optimizers”—firms that help websites to maximize the likelihood of a valuable “top-of-list” organic search placement—attract significant revenue. That is, all of these different channels vie against each other for consumer attention and offer advertisers the ability to target their advertising based on data gleaned from consumers’ interactions with their platforms.

Facebook built a business on par with Google’s thanks in large part to advertising, by taking advantage of users’ more extended engagement with the platform to assess relevance and by enabling richer, more engaging advertising than previously appeared on Google Search. It’s an entirely different model from search, but one that has turned Facebook into a competitive ad platform.

And the market continues to shift. Somewhere between 37% and 56% of product searches start on Amazon, according to one survey, and advertisers have noticed. This is not surprising, given Amazon’s strong ability to match consumers with advertisements, and to do so when and where consumers are more likely to make a purchase.

‘Open’ display advertising vs. ‘owned-and-operated’ display advertising

The United Kingdom’s Competition and Markets Authority (like the Omidyar Roadmap report) has identified two distinct channels of display advertising, which it terms “owned and operated” and “open.” The CMA concludes:

Over half of display expenditure is generated by Facebook, which owns both the Facebook platform and Instagram. YouTube has the second highest share of display advertising and is owned by Google. The open display market, in which advertisers buy inventory from many publishers of smaller scale (for example, newspapers and app providers) comprises around 32% of display expenditure.

The Texas complaint does not directly address the distinction between open and owned and operated, but it does allege anticompetitive conduct by Google with respect to YouTube in a separate “inline video advertising market.” 

The CMA finds that the owned-and-operated channel mostly comprises large social media platforms, which sell their own advertising inventory directly to advertisers or media agencies through self-service interfaces, such as Facebook Ads Manager or Snapchat Ads Manager. In contrast, in the open display channel, publishers such as online newspapers and blogs sell their inventory to advertisers through a “complex chain of intermediaries.” These intermediaries run auctions that match advertisers’ ads to publishers’ inventory of ad space. In both channels, nearly all transactions are run through programmatic technology.

The CMA concludes that advertisers “largely see” the open and the owned-and-operated channels as substitutes. According to the CMA, an advertiser’s choice of one channel over the other is driven by each channel’s ability to meet the key performance metrics the advertising campaign is intended to achieve.

The Omidyar Roadmap argues, instead, that the CMA too narrowly focuses on the perspective of advertisers. The Roadmap authors claim that “most publishers” do not control supply that is “owned and operated.” As a result, they conclude that publishers “such as gardenandgun.com or hotels.com” do not have any owned-and-operated supply and can generate revenues from their supply “only through the Google-dominated adtech stack.” 

But this is simply not true. For example, in addition to inventory in its print media, Garden & Gun’s “Digital Media Kit” indicates that the publisher has several sources of owned-and-operated banner and video supply, including the desktop, mobile, and tablet ads on its website; a “homepage takeover” of its website; branded/sponsored content; its email newsletters; and its social media accounts. Hotels.com, an operating company of Expedia Group, has its own owned-and-operated search inventory, which it sells through its “Travel Ads Sponsored Listing,” as well as an owned-and-operated supply of standard and custom display ads.

Given that both perform the same function and employ similar mechanisms for matching inventory with advertisers, it is unsurprising that both advertisers and publishers appear to consider the owned-and-operated channel and the open channel to be substitutes.

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, and Fahrenheit 451. Though these novels often shed light on the risks of contemporary society and the zeitgeist of the era in which they were written, they also almost always systematically overshoot the mark (intentionally or not) and severely underestimate the radical improvements that stem from the technologies (or other causes) that they fear.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. This is epitomized by influential publications such as The Club of Rome’s 1972 report The Limits to Growth, whose dire predictions of Malthusian catastrophe have largely failed to materialize.

In an article recently published in the George Mason Law Review, we argue that contemporary antitrust scholarship and commentary is similarly afflicted by dystopian thinking. In that respect, today’s antitrust pessimists have set their sights predominantly on the digital economy—”Big Tech” and “Big Data”—in the process of alleging a vast array of potential harms.

Scholars have notably argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and to more concentrated markets (e.g., here and here). In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time.

Some have gone so far as to argue that this threatens the very fabric of western democracy. For instance, parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. The gaming company released a short video clip parodying Apple’s famous “1984” ad (which, upon its release, was itself widely seen as a critique of the tech incumbents of the time). Similarly, a piece in the New Statesman—titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy”—concluded that:

Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?

In our article, we argue that these fears are symptomatic of two different but complementary phenomena, which we refer to as “Antitrust Dystopia” and “Antitrust Nostalgia.”

Antitrust Dystopia is the pessimistic tendency among competition scholars and enforcers to assert that novel business conduct will cause technological advances to have unprecedented, anticompetitive consequences. This is almost always grounded in the belief that “this time is different”—that, despite the benign or positive consequences of previous, similar technological advances, this time those advances will have dire, adverse consequences absent enforcement to stave off abuse.

Antitrust Nostalgia is the biased assumption—often built into antitrust doctrine itself—that change is bad. Antitrust Nostalgia holds that, because a business practice has seemingly benefited competition before, changing it will harm competition going forward. Thus, antitrust enforcement is often skeptical of, and triggered by, various deviations from status quo conduct and relationships (i.e., “nonstandard” business arrangements) when change is, to a first approximation, the hallmark of competition itself.

Our article argues that these two worldviews are premised on particularly questionable assumptions about the way competition unfolds, in this case, in data-intensive markets.

The Case of Big Data Competition

The notion that digital markets are inherently more problematic than their brick-and-mortar counterparts—if there even is a meaningful distinction—is advanced routinely by policymakers, journalists, and other observers. The fear is that, left to their own devices, today’s dominant digital platforms will become all-powerful, protected by an impregnable “data barrier to entry.” Against this alarmist backdrop, nostalgic antitrust scholars have argued for aggressive antitrust intervention against the nonstandard business models and contractual arrangements that characterize these markets.

But as our paper demonstrates, a proper assessment of the attributes of data-intensive digital markets does not support either the dire claims or the proposed interventions.

1. Data is information

One of the most salient features of the data created and consumed by online firms is that, jargon aside, it is just information. As with other types of information, it thus tends to have at least some traits usually associated with public goods (i.e., goods that are non-rivalrous in consumption and not readily excludable). As the National Bureau of Economic Research’s Catherine Tucker argues, data “has near-zero marginal cost of production and distribution even over long distances,” making it very difficult to exclude others from accessing it. Meanwhile, multiple economic agents can simultaneously use the same data, making it non-rivalrous in consumption.

As we explain in our paper, these features make the nature of modern data almost irreconcilable with the alleged hoarding and dominance that critics routinely associate with the tech industry.

2. Data is not scarce; expertise is

Another important feature of data is that it is ubiquitous. The predominant challenge for firms is not so much in obtaining data but, rather, in drawing useful insights from it. This has two important implications for antitrust policy.

First, although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As our survey of the empirical literature shows, data generally entails diminishing marginal returns.
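As a purely numerical illustration of that pattern (a toy example, not a result from the paper or the surveyed literature), consider estimating a single quantity from ever-larger samples. The estimation error shrinks roughly with the square root of the sample size, so each additional tranche of data is worth less than the one before:

```python
# Illustrative only: estimation error falls roughly as 1/sqrt(n), so
# additional data yields progressively smaller improvements.
import numpy as np

rng = np.random.default_rng(0)

for n in [100, 1_000, 10_000, 100_000]:
    sample = rng.normal(loc=1.0, scale=2.0, size=n)  # n noisy observations
    error = abs(sample.mean() - 1.0)                 # distance from the true value
    print(f"n = {n:>7,}  error ~ {error:.4f}")
# Typical run: the error falls by roughly sqrt(10), about 3x, per tenfold
# increase in data, a textbook pattern of diminishing marginal returns.
```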

Second, it is firms’ capabilities, rather than the data they own, that lead to success in the marketplace. Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around.

This dynamic can be seen at play in the early days of the search-engine market. In 2013, The Atlantic ran a piece titled “What the Web Looked Like Before Google.” By comparing the websites of Google and its rivals in 1998 (when Google Search was launched), the article shows how the current champion of search marked a radical departure from the status quo.

Even if it stumbled upon it by chance, Google immediately identified a winning formula for the search-engine market. It ditched the complicated classification schemes favored by its rivals and opted, instead, for a clean page with a single search box. This ensured that users could access the information they desired in the shortest possible amount of time—thanks, in part, to Google’s PageRank algorithm.

It is hardly surprising that Google’s rivals struggled to keep up with this shift in the search-engine industry. The theory of dynamic capabilities tells us that firms that have achieved success by indexing the web will struggle when the market rapidly moves toward a new paradigm (in this case, Google’s single search box and ten blue links). During the time it took these rivals to identify their weaknesses and repurpose their assets, Google kept on making successful decisions: notably, the introduction of Gmail, its acquisitions of YouTube and Android, and the introduction of Google Maps, among others.

Seen from this evolutionary perspective, Google thrived because its capabilities were perfect for the market at that time, while rivals were ill-adapted.

3. Data as a byproduct of, and path to, platform monetization

Policymakers should also bear in mind that platforms often must go to great lengths in order to create data about their users—data that these same users often do not know about themselves. Under this framing, data is a byproduct of firms’ activity, rather than an input necessary for rivals to launch a business.

This is especially clear when one looks at the formative years of numerous online platforms. Most of the time, these businesses were started by entrepreneurs who did not own much data but, instead, had a brilliant idea for a service that consumers would value. Even if data ultimately played a role in the monetization of these platforms, it does not appear that it was necessary for their creation.

Data often becomes significant only at a relatively late stage in these businesses’ development. A quick glance at the digital economy is particularly revealing in this regard. Google and Facebook, in particular, both launched their platforms under the assumption that building a successful product would eventually lead to significant revenues.

It took five years from its launch for Facebook to start making a profit. Even at that point, when the platform had 300 million users, it still was not entirely clear whether it would generate most of its income from app sales or online advertisements. It was another three years before Facebook started to cement its position as one of the world’s leading providers of online ads. During this eight-year timespan, Facebook prioritized user growth over the monetization of its platform. The company appears to have concluded (correctly, it turns out) that once its platform attracted enough users, it would surely find a way to make itself highly profitable.

This might explain how Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace. And Facebook is no outlier. The list of companies that prevailed despite starting with little to no data (and initially lacking a data-dependent monetization strategy) is lengthy. Other examples include TikTok, Airbnb, Amazon, Twitter, PayPal, Snapchat, and Uber.

Those who complain about the unassailable competitive advantages enjoyed by companies with troves of data have it exactly backward. Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

We’ve Been Here Before: The Microsoft Antitrust Saga

Dystopian and nostalgic discussions concerning the power of successful technology firms are nothing new. Throughout recent history, there have been repeated calls for antitrust authorities to rein in these large companies. These calls for regulation have often led to increased antitrust scrutiny of some form. The Microsoft antitrust cases—which ran from the 1990s to the early 2010s on both sides of the Atlantic—offer a good illustration of the misguided “Antitrust Dystopia.”

In the mid-1990s, Microsoft was one of the most successful and vilified companies in America. After it obtained a commanding position in the desktop operating system market, the company sought to establish a foothold in the burgeoning markets that were developing around the Windows platform (many of which were driven by the emergence of the Internet). These included the Internet browser and media-player markets.

The business tactics employed by Microsoft to execute this transition quickly drew the ire of the press and rival firms, ultimately landing Microsoft in hot water with antitrust authorities on both sides of the Atlantic.

However, as we show in our article, though there were numerous calls for authorities to adopt a precautionary principle-type approach to dealing with Microsoft—and antitrust enforcers were more than receptive to these calls—critics’ worst fears never came to be.

This positive outcome is unlikely to be the result of the antitrust cases that were brought against Microsoft. In other words, the markets in which Microsoft operated seem to have self-corrected (or were misapprehended as lacking competitive constraints in the first place) and, today, are generally seen as being unproblematic.

This is not to say that antitrust interventions against Microsoft were necessarily misguided. Instead, our critical point is that commentators and antitrust decisionmakers routinely overlooked or misinterpreted the existing and nonstandard market dynamics that ultimately prevented the worst anticompetitive outcomes from materializing. This is supported by several key factors.

First, the remedies that were imposed against Microsoft by antitrust authorities on both sides of the Atlantic were ultimately quite weak. It is thus unlikely that these remedies, by themselves, prevented Microsoft from dominating its competitors in adjacent markets.

Note that, if this assertion is wrong, and antitrust enforcement did indeed prevent Microsoft from dominating online markets, then there is arguably no need to reform the antitrust laws on either side of the Atlantic, nor even to adopt a particularly aggressive enforcement position. The remedies that were imposed on Microsoft were relatively localized. Accordingly, if antitrust enforcement did indeed prevent Microsoft from dominating other online markets, then it is antitrust enforcement’s deterrent effect that is to thank, and not the remedies actually imposed.

Second, Microsoft lost its bottleneck position. One of the biggest changes that took place in the digital space was the emergence of alternative platforms through which consumers could access the Internet. Indeed, as recently as January 2009, roughly 94% of all Internet traffic came from Windows-based computers. Just over a decade later, this number has fallen to about 31%. Android, iOS, and OS X have shares of roughly 41%, 16%, and 7%, respectively. Consumers can thus access the web via numerous platforms. The emergence of these alternatives reduced the extent to which Microsoft could use its bottleneck position to force its services on consumers in online markets.

Third, it is possible that Microsoft’s own behavior ultimately sowed the seeds of its relative demise. In particular, the alleged barriers to entry (rooted in nostalgic market definitions and skeptical analysis of “ununderstandable” conduct) that were essential to establishing the antitrust case against the company may have been pathways to entry as much as barriers.

Consider this error in the Microsoft court’s analysis of entry barriers: the court pointed out that new entrants faced a barrier that Microsoft didn’t face, in that Microsoft didn’t have to contend with a powerful incumbent impeding its entry by tying up application developers.

But while this may be true, Microsoft did face the absence of any developers at all, and had to essentially create (or encourage the creation of) businesses that didn’t previously exist. Microsoft thus created a huge positive externality for new entrants: existing knowledge and organizations devoted to software development, industry knowledge, reputation, awareness, and incentives for schools to offer courses. It could well be that new entrants, in fact, faced lower barriers with respect to app developers than did Microsoft when it entered.

In short, new entrants may face even more welcoming environments because of incumbents. This enabled Microsoft’s rivals to thrive.

Conclusion

Dystopian antitrust prophecies are generally doomed to fail, just like those belonging to the literary world. The reason is simple. While it is easy to identify what makes dominant firms successful in the present (i.e., what enables them to hold off competitors in the short term), it is almost impossible to conceive of the myriad ways in which the market could adapt. Indeed, it is today’s supra-competitive profits that spur the efforts of competitors.

Surmising that the economy will come to be dominated by a small number of successful firms is thus the same as believing that all market participants can be outsmarted by a few successful ones. This might occur in some cases or for some period of time, but as our article argues, it is bound to happen far less often than pessimists fear.

In short, dystopian scholars have not successfully made the case for precautionary antitrust. Indeed, the economic features of data make it highly unlikely that today’s tech giants could anticompetitively maintain their advantage for an indefinite amount of time, much less leverage this advantage in adjacent markets.

With this in mind, there is one dystopian novel that offers a fitting metaphor to end this Article. The Man in the High Castle tells the story of an alternate present, where Axis forces triumphed over the Allies during the Second World War. This turns the dystopia genre on its head: rather than argue that the world is inevitably sliding towards a dark future, The Man in the High Castle posits that the present could be far worse than it is.

In other words, we should not take any of the luxuries we currently enjoy for granted. In the world of antitrust, critics routinely overlook that the emergence of today’s tech industry might have occurred thanks to, and not in spite of, existing antitrust doctrine. Changes to existing antitrust law should thus be dictated by a rigorous assessment of the various costs and benefits they would entail, rather than a litany of hypothetical concerns. The most recent wave of calls for antitrust reform has so far failed to clear this low bar.

The patent system is too often caricatured as involving the grant of “monopolies” that may be used to delay entry and retard competition in key sectors of the economy. The accumulation of allegedly “poor-quality” patents into thickets and portfolios held by “patent trolls” is said by critics to spawn excessive royalty-licensing demands and threatened “holdups” of firms that produce innovative products and services. These alleged patent abuses have been characterized as a wasteful “tax” on high-tech implementers of patented technologies, which inefficiently raises price and harms consumer welfare.

Fortunately, solid scholarship has debunked these stories and instead pointed to the key role patents play in enhancing competition and driving innovation. See, for example, here, here, here, here, here, here, and here.

Nevertheless, early indications are that the Biden administration may be adopting a patent-skeptical attitude. Such an attitude was revealed, for example, in the president’s July 9 Executive Order on Competition (which suggested an openness to undermining the Bayh-Dole Act by using march-in rights to set prices; to weakening pharmaceutical patent rights; and to weakening standard essential patents) and in the administration’s inexplicable decision to waive patent protection for COVID-19 vaccines (see here and here).

Before it takes further steps that would undermine patent protections, the administration should consider new research that underscores how patents help to spawn dynamic market growth through “design around” competition and through licensing that promotes new technologies and product markets.

Patents Spawn Welfare-Enhancing ‘Design Around’ Competition

Critics sometimes bemoan the fact that patents covering a new product or technology allegedly retard competition by preventing new firms from entering a market. (Never mind the fact that the market might not have existed but for the patent.) This thinking, which confuses a patent with a product-market monopoly, is badly mistaken. It is belied by the fact that the publicly available patented technology itself (1) provides valuable information to third parties; and (2) thereby incentivizes them to innovate and compete by refining technologies that fall outside the scope of the patent. In short, patents on important new technologies stimulate, rather than retard, competition. They do this by leading third parties to “design around” the patented technology and thus generate competition that features a richer set of technological options realized in new products.

The importance of design around is revealed, for example, in the development of the incandescent light bulb market in the late 19th century, in reaction to Edison’s patent on a long-lived light bulb. In a 2021 article in the Journal of Competition Law and Economics, Ron D. Katznelson and John Howells did an empirical study of this important example of product innovation. The article’s synopsis explains:

Designing around patents is prevalent but not often appreciated as a means by which patents promote economic development through competition. We provide a novel empirical study of the extent and timing of designing around patent claims. We study the filing rate of incandescent lamp-related patents during 1878–1898 and find that the enforcement of Edison’s incandescent lamp patent in 1891–1894 stimulated a surge of patenting. We studied the specific design features of the lamps described in these lamp patents and compared them with Edison’s claimed invention to create a count of noninfringing designs by filing date. Most of these noninfringing designs circumvented Edison’s patent claims by creating substitute technologies to enable participation in the market. Our forward citation analysis of these patents shows that some had introduced pioneering prior art for new fields. This indicates that invention around patents is not duplicative research and contributes to dynamic economic efficiency. We show that the Edison lamp patent did not suppress advance in electric lighting and the market power of the Edison patent owner weakened during this patent’s enforcement. We propose that investigation of the effects of design around patents is essential for establishing the degree of market power conferred by patents.

In a recent commentary, Katznelson highlights the procompetitive consumer welfare benefits of the Edison light bulb design around:

GE’s enforcement of the Edison patent by injunctions did not stifle competition nor did it endow GE with undue market power, let alone a “monopoly.” Instead, it resulted in clear and tangible consumer welfare benefits. Investments in design-arounds resulted in tangible and measurable dynamic economic efficiencies by (a) increased competition, (b) lamp price reductions, (c) larger choice of suppliers, (d) acceleration of downstream development of new electric illumination technologies, and (e) collateral creation of new technologies that would not have been developed for some time but for the need to design around Edison’s patent claims. These are all imparted benefits attributable to patent enforcement.

Katznelson further explains that “the mythical harm to innovation inflicted by enforcers of pioneer patents is not unique to the Edison case.” He cites additional research debunking claims that the Wright brothers’ pioneer airplane patent seriously retarded progress in aviation (“[a]ircraft manufacturing and investments grew at an even faster pace after the assertion of the Wright Brothers’ patent than before”) and debunking similar claims made about the early radio industry and the early automobile industry. He also notes strong research refuting the patent holdup conjecture regarding standard essential patents. He concludes by bemoaning “infringers’ rhetoric” that “suppresses information on the positive aspects of patent enforcement, such as the design-around effects that we study in this article.”

The Bayh-Dole Act: Licensing that Promotes New Technologies and Product Markets

The Bayh-Dole Act of 1980 has played an enormously important role in accelerating American technological innovation by creating a property rights-based incentive to use government labs. As this good summary from the Biotechnology Innovation Organization puts it, it “[e]mpowers universities, small businesses and non-profit institutions to take ownership [through patent rights] of inventions made during federally-funded research, so they can license these basic inventions for further applied research and development and broader public use.”

The act has continued to generate many new welfare-enhancing technologies and related high-tech business opportunities even during the “COVID slowdown year” of 2020, according to a newly released survey by a nonprofit organization representing the technology management community (see here):  

• The number of startup companies launched around academic inventions rose from 1,040 in 2019 to 1,117 in 2020. Almost 70% of these companies locate in the same state as the research institution that licensed them—making Bayh-Dole a critical driver of state and regional economic development;
• Invention disclosures went from 25,392 in 2019 to 27,112 in 2020;
• New patent applications increased from 15,972 to 17,738;
• Licenses and options went from 9,751 in 2019 to 10,050 in 2020, with 60% of licenses going to small companies; and
• Most impressive of all—new products introduced to the market based on academic inventions jumped from 711 in 2019 to 933 in 2020.

Despite this continued record of success, the Biden administration has taken actions that create uncertainty about the government’s support for Bayh-Dole.

As explained by the Congressional Research Service, “march-in rights allow the government, in specified circumstances, to require the contractor or successors in title to the patent to grant a ‘nonexclusive, partially exclusive, or exclusive license’ to a ‘responsible applicant or applicants.’ If the patent owner refuses to do so, the government may grant the license itself.” Government march-in rights thus far have not been invoked, but a serious threat of their routine invocation would greatly disincentivize future use of Bayh-Dole, thereby undermining patent-backed innovation.

Despite this, the president’s July 9 Executive Order on Competition (noted above) instructed the U.S. Commerce Department to defer finalizing a regulation (see here) “that would have ensured that march-in rights under Bayh Dole would not be misused to allow the government to set prices, but utilized for its statutory intent of providing oversight so good faith efforts are being made to turn government-funded innovations into products. But that’s all up in the air now.”

What’s more, a new U.S. Energy Department policy that would more closely scrutinize Bayh-Dole patentees’ licensing transactions and acquisitions (apparently to encourage more domestic manufacturing) has raised questions in the Bayh-Dole community and may discourage licensing transactions (see here and here). Added to this is the fact that “prominent Members of Congress are pressing the Biden Administration to misconstrue the march-in rights clause to control prices of products arising from National Institutes of Health and Department of Defense funding.” All told, therefore, the outlook for continued patent-inspired innovation through Bayh-Dole processes appears to be worse than it has been in many years.

Conclusion

The patent system does far more than provide potential rewards to enhance incentives for particular individuals to invent. The system also creates a means to enhance welfare by facilitating the diffusion of technology through market processes (see here).

But it does even more than that. It actually drives new forms of dynamic competition by inducing third parties to design around new patents, to the benefit of consumers and the overall economy. As revealed by the Bayh-Dole Act, it also has facilitated the more efficient use of federal labs to generate innovation and new products and processes that would not otherwise have seen the light of day. Let us hope that the Biden administration pays heed to these benefits to the American economy and thinks again before taking steps that would further weaken our patent system.     

The American Choice and Innovation Online Act (previously called the Platform Anti-Monopoly Act), introduced earlier this summer by U.S. Rep. David Cicilline (D-R.I.), would significantly change the nature of digital platforms and, with them, the internet itself. Taken together, the bill’s provisions would turn platforms into passive intermediaries, undermining many of the features that make them valuable to consumers. This seems likely to remain the case even after potential revisions intended to minimize the bill’s unintended consequences.

In its current form, the bill is split into two parts, each of which is dangerous in its own right. The first, Section 2(a), would prohibit almost any kind of “discrimination” by platforms. Because it is so open-ended, lawmakers might end up removing it in favor of the nominally more focused provisions of Section 2(b), which prohibit certain named conduct. But despite being more specific, this section of the bill is incredibly far-reaching and would effectively ban swaths of essential services.

I will address the potential effects of these sections point-by-point, but both elements of the bill suffer from the same problem: a misguided assumption that “discrimination” by platforms is necessarily bad from a competition and consumer welfare point of view. On the contrary, this conduct is often exactly what consumers want from platforms, since it helps to bring order and legibility to otherwise-unwieldy parts of the Internet. Prohibiting it, as both main parts of the bill do, would make the Internet harder to use and less competitive.

Section 2(a)

Section 2(a) essentially prohibits any behavior by a covered platform that would advantage that platform’s services over those of any other business that also uses that platform; it characterizes this preferencing as “discrimination.”

As we wrote when the House Judiciary Committee’s antitrust bills were first announced, this prohibition on “discrimination” is so broad that, if it made it into law, it would prevent platforms from excluding or disadvantaging any product of another business that uses the platform or advantaging their own products over those of their competitors.

The underlying assumption here is that platforms should be like telephone networks: providing a way for different sides of a market to communicate with each other, but doing little more than that. When platforms do do more—for example, manipulating search results to favor certain businesses or to give their own products prominence—it is seen as exploitative “leveraging.”

But consumers often want platforms to be more than just a telephone network or directory, because digital markets would be very difficult to navigate without some degree of “discrimination” between sellers. The Internet is so vast and sellers are often so anonymous that any assistance which helps you choose among options can serve to make it more navigable. As John Gruber put it:

From what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

Sometimes, this manifests itself as “self-preferencing” of another of the platform’s own services, which reduces the additional time users must spend searching for the information they want. When you search for a restaurant on Google, it can be very useful to get information like user reviews, the restaurant’s phone number, a button on mobile to phone them directly, estimates of how busy it is, and a link to a Maps page to see how to actually get there.

This is, undoubtedly, frustrating for competitors like Yelp, who would like this information not to be there and for users to have to click on either a link to Yelp or a link to Google Maps. But whether it is good or bad for Yelp isn’t relevant to whether it is good for users—and it is at least arguable that it is, which makes a blanket prohibition on this kind of behavior almost inevitably harmful.

If it isn’t obvious why removing this kind of feature would be harmful for users, ask yourself why some users search in Yelp’s app directly for this kind of result. The answer, I think, is that Yelp gives you all the information above that Google does (and sometimes is better, although I tend to trust Google Maps’ reviews over Yelp’s), and it’s really convenient to have all that on the same page. If Google could not provide this kind of “rich” result, many users would probably stop using Google Search to look for restaurant information in the first place, because a new friction would have been added that made the experience meaningfully worse. Removing that option would be good for Yelp, but mainly because it removes a competitor.

If all this feels like stating the obvious, then it should highlight a significant problem with Section 2(a) in the Cicilline bill: it prohibits conduct that is directly value-adding for consumers, and that creates competition for dedicated services like Yelp that object to having to compete with this kind of conduct.

This is true across all the platforms the legislation proposes to regulate. Amazon prioritizes some third-party products over others on the basis of user reviews, rates of returns and complaints, and so on; Amazon provides private label products to fill gaps in certain product lines where existing offerings are expensive or unreliable; Apple pre-installs a Camera app on the iPhone that, obviously, enjoys an advantage over rival apps like Halide.

Some or all of this behavior would be prohibited under Section 2(a) of the Cicilline bill. Combined with the bill’s presumption that conduct must be defended affirmatively—that is, the platform is presumed guilty unless it can prove that the challenged conduct is procompetitive, which may be very difficult to do—the bill could prospectively eliminate a huge range of socially valuable behavior.

Supporters of the bill have already been left arguing that the law simply wouldn’t be enforced in these cases of benign discrimination. But this would hardly be an improvement. It would mean the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) have tremendous control over how these platforms are built, since they could challenge conduct in virtually any case. The regulatory uncertainty alone would complicate the calculus for these firms as they refine, develop, and deploy new products and capabilities. 

So one potential compromise might be to do away with this broad-based rule and proscribe specific kinds of “discriminatory” conduct instead. This approach would involve removing Section 2(a) from the bill but retaining Section 2(b), which enumerates 10 practices it deems to be “other discriminatory conduct.” This may seem appealing, as it would potentially avoid the worst abuses of the broad-based prohibition. In practice, however, it would carry many of the same problems. In fact, many of 2(b)’s provisions appear to go even further than 2(a), and would proscribe even more procompetitive conduct that consumers want.

Sections 2(b)(1) and 2(b)(9)

The wording of these provisions is extremely broad and, as drafted, would seem to challenge even the existence of vertically integrated products. As such, these prohibitions are potentially even more extensive and invasive than Section 2(a) would have been. Even a narrower reading here would seem to preclude safety and privacy features that are valuable to many users. iOS’s sandboxing of apps, for example, serves to limit the damage that a malware app can do on a user’s device precisely because of the limitations it imposes on what other features and hardware the app can access.

Section 2(b)(2)

This provision would preclude a firm from conditioning preferred status on the use of another of its services. This would likely undermine the purpose of platforms, which is to absorb and counter some of the risks involved in doing business online. An example is Amazon’s tying sellers’ eligibility for its Prime program to their use of Amazon’s delivery service (FBA, or Fulfilled By Amazon). The bill seems to presume in an example like this that Amazon is leveraging its power in the market—in the form of the value of the Prime label—to profit from delivery. But Amazon could, and already does, charge directly for listing positions; it’s unclear why it would benefit from charging via FBA when it could just charge for the Prime label.

An alternate, simpler explanation is that FBA improves the quality of the service, by granting customers greater assurance that a Prime product will arrive when Amazon says it will. Platforms add value by setting out rules and providing services that reduce the uncertainties between buyers and sellers they’d otherwise experience if they transacted directly with each other. This section’s prohibition—which, as written, would seem to prevent any kind of quality assurance—likely would bar labelling by a platform, even where customers explicitly want it.

Section 2(b)(3)

As written, this would prohibit platforms from using aggregated data to improve their services at all. If Apple found that 99% of its users uninstalled an app immediately after it was installed, it would be reasonable to conclude that the app may be harmful or broken in some way, and that Apple should investigate. This provision would ban that.

Sections 2(b)(4) and 2(b)(6)

These two provisions effectively prohibit a platform from using information it does not also provide to sellers. Such prohibitions ignore the fact that it is often good for sellers to lack certain information, since withholding information can prevent abuse by malicious users. For example, a seller may sometimes try to bribe their customers to post positive reviews of their products, or even threaten customers who have posted negative ones. Part of the role of a platform is to combat that kind of behavior by acting as a middleman and forcing both consumer users and business users to comply with the platform’s own mechanisms to control that kind of behavior.

If this seems overly generous to platforms—since, obviously, it gives them a lot of leverage over business users—ask yourself why people use platforms at all. It is not a coincidence that people often prefer Amazon to dealing with third-party merchants and having to navigate those merchants’ sites themselves. The assurance that Amazon provides is extremely valuable for users. Much of it comes from the company’s ability to act as a middleman in this way, lowering the transaction costs between buyers and sellers.

Section 2(b)(5)

This provision restricts the treatment of defaults. It is, however, relatively restrained when compared to, for example, the DOJ’s lawsuit against Google, which treats as anticompetitive even payment for defaults that can be changed. Still, many of the arguments that apply in that case also apply here: default status for apps can be a way to recoup income foregone elsewhere (e.g., a browser provided for free that makes its money by selling the right to be the default search engine).

Section 2(b)(7)

This section gets to the heart of why “discrimination” can often be procompetitive: that it facilitates competition between platforms. The kind of self-preferencing that this provision would prohibit can allow firms that have a presence in one market to extend that position into another, increasing competition in the process. Both Apple and Amazon have used their customer bases in smartphones and e-commerce, respectively, to grow their customer bases for video streaming, in competition with Netflix, Google’s YouTube, cable television, and each other. If Apple designed a search engine to compete with Google, it would do exactly the same thing, and we would be better off because of it. Restricting this kind of behavior is, perversely, exactly what you would do if you wanted to shield these incumbents from competition.

Section 2(b)(8)

As with other provisions, this one would preclude one of the mechanisms by which platforms add value: creating assurance for customers about the products they can expect if they visit the platform. Some of this relates to child protection; some of the most frustrating stories involve children being overcharged when they use an iPhone or Android app, effectively ripped off because of poor policing of the app (or insufficiently strict pricing rules from Apple or Google). It may also relate to rules barring a seller from offering a cheaper product elsewhere (Amazon’s “General Pricing Rule” does this, for example). Prohibiting such rules would simply impose a tax on customers who cannot shop around and would prefer to use a platform they trust to have the lowest prices for the items they want.

Section 2(b)(10)

Ostensibly a “whistleblower” provision, this section could leave platforms with no recourse, not even removing a user from its platform, in response to spurious complaints intended purely to extract value for the complaining business rather than to promote competition. On its own, this sort of provision may be fairly harmless, but combined with the provisions above, it allows the bill to add up to a rent-seekers’ charter.

Conclusion

In each case above, it’s vital to remember that a reversed burden of proof applies. There is thus a high chance that the law will side against the defendant business, and a large downside to conduct that is ultimately found to violate these provisions. That means platforms will likely err on the side of caution, avoiding ambiguous conduct, and society will probably lose a good deal of beneficial behavior in the process.

Put together, the provisions undermine much of what has become an Internet platform’s role: to act as an intermediary, de-risk transactions between customers and merchants who don’t know each other, and tweak the rules of the market to maximize its attractiveness as a place to do business. The “discrimination” that the bill would outlaw is, in practice, behavior that makes it easier for consumers to navigate marketplaces of extreme complexity and uncertainty, in which they often know little or nothing about the firms with whom they are trying to transact business.

Customers do not want platforms to be neutral, open utilities. They can choose platforms that are like that already, such as eBay. They generally tend to prefer ones like Amazon, which are not neutral and which carefully cultivate their service to be as streamlined, managed, and “discriminatory” as possible. Indeed, many of people’s biggest complaints with digital platforms relate to their openness: the fake reviews, counterfeit products, malware, and spam that come with letting more unknown businesses use your service. While these may be unavoidable by-products of running a platform, platforms compete on their ability to ferret them out. Customers are unlikely to thank legislators for regulating Amazon into being another eBay.

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about how competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence that they hold in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million with Facebook’s and Google’s acquisitions exceeding that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. That does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.

Finally, the paper offers no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a period of rapidly rising venture capital investment, both in the number of deals and the dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest on several restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must also be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.
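The logic, and its limits, can be seen in a stylized numerical sketch. All payoffs below are hypothetical, chosen only to illustrate the asymmetry described in the preceding paragraphs:

```python
# Stylized payoffs; the numbers are hypothetical, for illustration only.
monopoly_profit = 100
duopoly_profit = 40          # each firm's profit if entry splits the market

# Case 1: entry leads to duopoly. The incumbent will pay up to the
# profit it would lose from entry; the entrant will accept anything
# above the duopoly profit it forgoes by selling.
incumbent_max_offer = monopoly_profit - duopoly_profit   # 100 - 40 = 60
entrant_min_price = duopoly_profit                       # 40
print(incumbent_max_offer > entrant_min_price)           # True: a deal exists

# Case 2: competition is "for the market" and the entrant expects to
# displace the incumbent outright. Both now value the prize identically,
# so monopoly maintenance alone leaves no room for a deal.
entrant_min_price = monopoly_profit                      # 100
incumbent_max_offer = monopoly_profit - 0                # 100
print(incumbent_max_offer > entrant_min_price)           # False: no gains from trade
```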

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so cost-effectively. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost of administering it, and the respective costs of false positives and false negatives. It also critically depends on the prevalence of the conduct that adjudicators seek to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
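This arithmetic is easy to replicate. Below is a minimal Python sketch; the totals, the 75% accuracy rate, and both prevalence scenarios come from the hypothetical above, and the assumption that the test errs at the same 25% rate on benign and anticompetitive deals alike is implicit in those numbers:

```python
def error_counts(total, bad, accuracy):
    """Compare a screening test that classifies each merger correctly
    with probability `accuracy` against clearing all mergers."""
    good = total - bad
    return {
        "test: correct": accuracy * total,
        "test: false positives": (1 - accuracy) * good,   # benign deals blocked
        "test: false negatives": (1 - accuracy) * bad,    # bad deals cleared
        "no test: correct": good,                         # every benign deal clears
        "no test: false negatives": bad,                  # every bad deal clears too
    }

for bad in (1_000, 2_500):
    print(error_counts(total=10_000, bad=bad, accuracy=0.75))
```

Running it confirms the figures above: with 1,000 anticompetitive mergers, the test produces 2,500 errors against 1,000 from clearing everything; at 2,500, the two approaches tie.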

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means it may not be a useful guide for understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, compared with 13.4% when pipelines overlap. The authors treat this gap as evidence of killer acquisitions. But it misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
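A quick back-of-the-envelope check shows how these figures relate to the 23.4% decline quoted earlier, and why the “vast majority” point follows. In the sketch below, the continuation rates are the paper’s own; treating the entire gap as “killed” projects is our deliberately generous upper bound:

```python
# Continuation rates reported by Cunningham, Ederer & Ma
non_overlapping = 0.175   # projects continued after non-overlapping acquisitions
overlapping = 0.134       # projects continued after overlapping acquisitions

# The relative decline reproduces the 23.4% figure quoted above.
relative_decline = (non_overlapping - overlapping) / non_overlapping
print(f"relative decline: {relative_decline:.1%}")        # -> 23.4%

# In absolute terms, the gap is 4.1 percentage points. Even if every
# excess discontinuation were a "killed" project, roughly 96% of
# overlapping acquisitions would involve no killed project at all.
excess = non_overlapping - overlapping
print(f"upper bound on killed projects: {excess:.1%}")    # -> 4.1%
```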

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that the prospect of such acquisitions could increase innovation by boosting its expected returns.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs must pass through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle both to identify their closest rivals and to prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort may suggest that the clearance of certain mergers was not optimal, that is hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Consider the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer’s and BioNTech’s efforts to successfully market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates a vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (who often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy.  As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures.  Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage securely with business partners, and deter imitators, potential innovator-entrepreneurs had little hope of obtaining funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow in transforming breakthroughs in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so. 

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) rejected the view that standard-essential patent owners pose a risk of patent holdup so high as to justify special limitations on their enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.  

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late 19th and early-20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity. 

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the policy bundle of weak-IP/strong antitrust does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents and maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.  

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).