Archives For Monopolization

As a new year dawns, the Biden administration remains fixated on illogical, counterproductive “big is bad” nostrums.

Noted economist and former Clinton Treasury Secretary Larry Summers correctly stressed recently that using antitrust to fight inflation represents “science denial.”

In his extended Twitter thread, Summers notes that labor shortages are the primary cause of inflation over time and that lowering tariffs, paring back import restrictions (such as the Buy America Act), and reducing regulatory delays are vital to combat inflation.

Summers’ points, of course, are right on the mark. Indeed, labor shortages, supply-chain issues, and a dramatic increase in regulatory burdens have been key to the dramatic run-up of prices during the Biden administration’s first year. Reducing the weight of government on the private sector and thereby enhancing incentives for increased investment, labor participation, and supply are the appropriate weapons to slow price rises and incentivize economic growth.

More specifically, administration policies can be pinpointed as the cause, not the potential solution to, rapid price increases in specific sectors, particularly the oil and gas industry. As I recently commented, policies that disincentivize new energy production, and fail to lift excessive regulatory burdens, have been a key factor in sparking rises in gasoline prices. Administration claims that anticompetitive activity is behind these price increases should be discounted. New Federal Trade Commission (FTC) investigations of oil and gas companies would waste resources and increase already large governmental burdens on those firms.

The administration, nevertheless, appears committed to using antitrust as an anti-inflationary “tool” against “big business” (or perhaps, really, as a symbolic hammer to shift blame to the private sector for rising prices). Recent pronouncements about combatting “big meat” are a case in point.

The New ‘Big Meat’ Crusade

Part of the administration’s crusade against “big meat” involves providing direct government financial support for favored firms. A U.S. Department of Agriculture (USDA) plan to spend up to $1 billion to assist smaller meat processors is a subsidy that artificially favors one group of competitors. This misguided policy, which bears the scent of special-interest favoritism, wastes taxpayer dollars and distorts free-market outcomes. It will do nothing to cure supply and regulatory problems that affect rising meat prices. It will, however, misallocate resources.

The other key aspect of the big meat initiative smacks more of problematic, old-style, economics-free antitrust. It centers on: (1) threatening possible antitrust actions against four large meat processors based principally on their size and market share; and (2) initiating a planned rulemaking under the Packers and Stockyards Act. (That rulemaking was foreshadowed by language in the July 2021 Biden Administration Executive Order on Competition.)

The administration’s apparent focus on the “dominance” of four large meatpacking firms (which have the temerity to collectively hold greater than 50% market shares in the hog, cattle, and chicken sectors) and the 120% jump in their gross profits since the pandemic began is troubling. It echoes the structuralist “big is bad” philosophy of the 1950s and 1960s. In and of itself, large market share is not, of course, an antitrust problem, nor are large gross profits. Rather, those metrics typically signal a particular firm’s superior efficiency relative to the competition. (Gross profit “reflects the efficiency of a business in terms of making use of its labor, raw material and other supplies.”) Antitrust investigations of firms merely because they are large would inefficiently bloat those companies’ costs and discourage them from engaging in cost-reducing new capacity and production improvements. This would tend to raise, not lower, prices by major firms. It thus would lower consumer welfare, a result at odds with antitrust’s guiding policy goal of promoting consumer welfare.

The administration’s announcement that the USDA “will also propose rules this year to strengthen enforcement of the Packers and Stockyards Act” is troublesome. That act, dating back to 1921, uses broad terms that extend beyond antitrust law (such as a prohibition on “giv[ing] any undue or unreasonable preference or advantage to any particular person”) and threatens to penalize efficient conduct by individual competitors. “Ratcheting up” enforcement under this act also could undermine business efficiency and paradoxically raise, not lower, prices.

Obviously, the specifics of the forthcoming proposed rules have not yet been revealed. Nevertheless, the administration’s “big is bad” approach to “big meat” strongly signals that one may expect the rules to generate costly and inefficient new restrictions on meat-packer conduct. Such restrictions, of course, would be at odds with vibrant competition and consumer-welfare enhancement.

This is not to say, of course, that meat packing should be immune from antitrust attention. Such scrutiny, however, should not be transfixed by “big is bad” concerns. Rather, it should center on the core antitrust goal of combatting harmful business conduct that unreasonably restrains competition and reduces consumer welfare. A focus on ferreting out collusive agreements among meat processors, such as price-fixing schemes, should have pride of place. The U.S. Justice Department’s already successful ongoing investigation into price fixing in the broiler-chicken industry is precisely the sort of antitrust initiative on which the administration should expend its scarce enforcement resources.

Conclusion

In sum, the Biden administration could do a lot of good in antitrust land if it would only set aside its nostalgic “big is bad” philosophy. It should return to the bipartisan enlightened understanding that antitrust is a consumer-welfare prescription that is based on sound and empirically based economics and is concerned with economically inefficient conduct that softens or destroys competition.

If it wants to stray beyond mere enforcement, the administration could turn its focus toward dismantling welfare-reducing anticompetitive federal regulatory schemes, rather than adding to private-sector regulatory burdens. For more about how to do this, we recommend that the administration consult a just-released Mercatus Center policy brief that Andrew Mercado and I co-authored.

Others already have noted that the Federal Trade Commission’s (FTC) recently released 6(b) report on the privacy practices of Internet service providers (ISPs) fails to comprehend that widespread adoption of privacy-enabling technology—in particular, Hypertext Transfer Protocol Secure (HTTPS) and DNS over HTTPS (DoH), but also the use of virtual private networks (VPNs)—largely precludes ISPs from seeing what their customers do online.

But a more fundamental problem with the report lies in its underlying assumption that targeted advertising is inherently nefarious. Indeed, much of the report highlights not actual violations of the law by the ISPs, but “concerns” that they could use customer data for targeted advertising much like Google and Facebook already do. The final subheading before the report’s conclusion declares: “Many ISPs in Our Study Can Be At Least As Privacy-Intrusive as Large Advertising Platforms.”

The report does not elaborate on why it would be bad for ISPs to enter the targeted advertising market, which is particularly strange given the public focus regulators have shone in recent months on the supposed dominance of Google, Facebook, and Amazon in online advertising. As the International Center for Law & Economics (ICLE) has argued in past filings on the issue, there simply is no justification to apply sector-specific regulations to ISPs for the mere possibility that they will use customer data for targeted advertising.

ISPs Could Be Competition for the Digital Advertising Market

It is ironic to witness FTC warnings about ISPs engaging in targeted advertising even as there are open antitrust cases against Google for its alleged dominance of the digital advertising market. In fact, news reports suggest the U.S. Justice Department (DOJ) is preparing to join the antitrust suits against Google brought by state attorneys general. An obvious upshot of ISPs engaging in more targeted advertising is that they could serve as a potential source of competition for Google, Facebook, and Amazon.

Despite the fears raised in the 6(b) report of rampant data collection for targeted ads, ISPs are, in fact, just a very small part of the $152.7 billion U.S. digital advertising market. As the report itself notes: “in 2020, the three largest players, Google, Facebook, and Amazon, received almost two-thirds of all U.S. digital advertising,” while Verizon pulled in just 3.4% of U.S. digital advertising revenues in 2018.

If the 6(b) report is correct that ISPs have access to troves of consumer data, it raises the question of why they don’t enjoy a bigger share of the digital advertising market. It could be that ISPs have other reasons not to engage in extensive advertising. Internet service provision is a two-sided market. ISPs could (and, over the years in various markets, some have) rely on advertising to subsidize Internet access. That they instead rely primarily on charging users directly for subscriptions may tell us something about prevailing demand on either side of the market.

Regardless of the reasons, the fact that ISPs have little presence in digital advertising suggests that it would be a misplaced focus for regulators to pursue industry-specific privacy regulation to crack down on ISP data collection for targeted advertising.

What’s the Harm in Targeted Advertising, Anyway?

At the heart of the FTC report is the commission’s contention that “advertising-driven surveillance of consumers’ online activity presents serious risks to the privacy of consumer data.” In Part V.B of the report, five of the six risks the FTC lists as associated with ISP data collection are related to advertising. But the only argument the report puts forth for why targeted advertising would be inherently pernicious is the assertion that it is contrary to user expectations and preferences.

As noted earlier, in a two-sided market, targeted ads could allow one side of the market to subsidize the other side. In other words, ISPs could engage in targeted advertising in order to reduce the price of access to consumers on the other side of the market. This is, indeed, one of the dominant models throughout the Internet ecosystem, so it wouldn’t be terribly unusual.

Taking away ISPs’ ability to engage in targeted advertising—particularly if it is paired with rumored net neutrality regulations from the Federal Communications Commission (FCC)—would necessarily put upward pricing pressure on the sector’s remaining revenue stream: subscriber fees. With bridging the so-called “digital divide” (i.e., building out broadband to rural and other unserved and underserved markets) a major focus of the recently enacted infrastructure spending package, it would be counterproductive to simultaneously take steps that would make Internet access more expensive and less accessible.

Even if the FTC were right that data collection for targeted advertising poses the risk of consumer harm, the report fails to justify why a regulatory scheme should apply solely to ISPs when they are such a small part of the digital advertising marketplace. Sector-specific regulation only makes sense if the FTC believes that ISPs are uniquely opaque among data collectors with respect to their collection practices.

Conclusion

The sector-specific approach implicitly endorsed by the 6(b) report would limit competition in the digital advertising market, even as there are already legal and regulatory inquiries into whether that market is sufficiently competitive. The report also fails to make the case that data collection for targeted advertising is inherently bad, or uniquely bad when done by an ISP.

There may or may not be cause for comprehensive federal privacy legislation, depending on whether it would pass cost-benefit analysis, but there is no reason to focus on ISPs alone. The FTC needs to go back to the drawing board.

Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.

This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.

But are network effects and the like the only ways to explain why these markets look like this? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.

The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.

The Bertrand Paradox

In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous Principles of Economics).

Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.

By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal cost pricing, and one seller potentially capturing the entire market:

There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.

This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):

If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.

This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
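The undercutting dynamic behind the paradox is easy to see in a toy numerical sketch. The simulation below (my own illustrative numbers, not drawn from Bertrand’s paper) has two identical firms repeatedly undercutting each other by the smallest feasible price cut; prices converge to marginal cost, just as the paradox predicts.

```python
# Toy Bertrand duopoly: two firms with identical constant marginal cost
# take turns undercutting each other's price by the smallest feasible
# step. Prices converge to marginal cost (P = MC). Illustrative only.

MARGINAL_COST = 10.0   # each firm's constant unit cost (hypothetical)
STEP = 0.01            # smallest feasible price cut

def best_response(rival_price: float) -> float:
    """Undercut the rival by one step, but never price below cost."""
    return max(MARGINAL_COST, rival_price - STEP)

p1, p2 = 50.0, 50.0    # both firms start well above cost
for _ in range(10_000):
    p1 = best_response(p2)
    p2 = best_response(p1)
    if p1 == MARGINAL_COST and p2 == MARGINAL_COST:
        break

print(p1, p2)  # both prices end up at marginal cost: 10.0 10.0
```

The number of firms never enters the logic: two sellers with no capacity constraints are enough to drive the price to cost, which is exactly why the result is called a paradox.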

But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:

On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.

All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).

The Theory of Contestable Markets

Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.

Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:

In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.

For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.

In other words, numerous competitors are a sufficient, but not necessary condition for competitive pricing. Monopolies can produce the same outcome when there is a credible threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when there are extremely low barriers to entry.
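Baumol’s point can be made concrete with a small sketch (my own illustrative numbers, not Baumol’s). If a hit-and-run entrant can profitably undercut any price exceeding its cost plus a per-unit entry cost, that sum caps the incumbent’s sustainable price; as entry costs approach zero, the sustainable price approaches marginal cost even with a single firm in the market.

```python
# Toy contestable-market sketch: the highest price a lone incumbent can
# sustain without inviting hit-and-run entry. An entrant would profitably
# undercut any price above (marginal cost + its per-unit entry cost), so
# that sum caps the incumbent's price. Illustrative numbers only.

MARGINAL_COST = 10.0  # unit cost for incumbent and entrant alike

def sustainable_price(entry_cost_per_unit: float) -> float:
    """Highest price at which entry remains unprofitable."""
    return MARGINAL_COST + entry_cost_per_unit

for entry_cost in (5.0, 1.0, 0.1, 0.0):
    print(f"entry cost {entry_cost}: sustainable price "
          f"{sustainable_price(entry_cost)}")

# As entry costs vanish, a lone incumbent must price at marginal cost:
# sustainable_price(0.0) == MARGINAL_COST
```

Nothing here depends on the number of incumbents: it is the negligible entry cost, not the firm count, that pins the price to marginal cost, which is precisely why Baumol reads concentration as potentially a sign of virtue rather than vice.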

Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to users whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that there is at least one exchange that meets their needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few (or even one) exchange that does exist to meet those needs would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.

This has far-reaching implications for antitrust policy, as Baumol was quick to point out:

This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.

Given what precedes, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than the intensity of competition that they face. For instance, scale economies might make monopoly (or another structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.

To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration. 

How Contestable Are Digital Markets?

The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.

The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.

Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.

First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.

These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.

Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that costs to learn how to use a new app are mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.

A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).

Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet this new demand from a more than 30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.

Conclusion

Of course, none of this should be construed to declare that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.

Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, this alone will discipline the behavior of incumbents.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

In short, critics’ failure to meaningfully grapple with these issues serves to shape the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time those same standards were applied to tech-policy debates.

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display-advertising business.

Broadly, the Texas complaint (previously discussed in this TOTM symposium) alleges that Google possesses market power in ad-buying tools and in search, illustrated in the figure below.

The complaint also alleges anticompetitive conduct by Google with respect to YouTube in a separate “inline video-advertising market.” According to the complaint, this market power is leveraged to force transactions through Google’s exchange, AdX, and its network, Google Display Network. The leverage is further exercised by forcing publishers to license Google’s ad server, Google Ad Manager.

Although the Texas complaint raises many specific allegations, the key ones constitute four broad claims: 

  1. Google forces publishers to license Google’s ad server and trade in Google’s ad exchange;
  2. Google uses its control over publishers’ inventory to block exchange competition;
  3. Google has disadvantaged technology known as “header bidding” in order to prevent publishers from accessing its competitors; and
  4. Google prevents rival ad-placement services from competing by not allowing them to buy YouTube ad space.

Alleged harms

The Texas complaint alleges Google’s conduct has caused harm to competing networks, exchanges, and ad servers. The complaint also claims that the plaintiff states’ economies have been harmed “by depriving the Plaintiff States and the persons within each Plaintiff State of the benefits of competition.”

In a nod to the widely accepted Consumer Welfare Standard, the Texas complaint alleges harm to three categories of consumers:

  1. Advertisers who pay for their ads to be displayed, but should be paying less;
  2. Publishers who are paid to provide space on their sites to display ads, but should be paid more; and
  3. Users who visit the sites, view the ads, and purchase or use the advertisers’ and publishers’ products and services.

The complaint claims users are harmed by above-competitive prices paid by advertisers, in that these higher costs are passed on in the form of higher prices and lower quality for the products and services they purchase from those advertisers. The complaint simultaneously claims that users are harmed by the below-market prices received by publishers in the form of “less content (lower output of content), lower-quality content, less innovation in content delivery, more paywalls, and higher subscription fees.”

Without saying so explicitly, the complaint insinuates that if intermediaries (e.g., Google and competing services) charged lower fees for their services, advertisers would pay less, publishers would be paid more, and consumers would be better off in the form of lower prices and better products from advertisers, as well as improved content and lower fees on publishers’ sites.

Effective competition is not an antitrust offense

A flawed premise underlies much of the Texas complaint. It asserts that conduct by a dominant incumbent firm that makes competition more difficult for competitors is inherently anticompetitive, even if that conduct confers benefits on users.

This amounts to a claim that Google is acting anti-competitively by innovating and developing products and services to benefit one or more display-advertising constituents (e.g., advertisers, publishers, or consumers) or by doing things that benefit the advertising ecosystem more generally. These include creating new and innovative products, lowering prices, reducing costs through vertical integration, or enhancing interoperability.

The argument, which is made explicitly elsewhere, is that Google must show that it has engineered and implemented its products to minimize obstacles its rivals face, and that any efficiencies created by its products must be shown to outweigh the costs imposed by those improvements on the company’s competitors.

Similarly, claims that Google has acted in an anticompetitive fashion rest on the unsupportable notion that the company acts unfairly when it designs products to benefit itself without considering how those designs would affect competitors. Google could, it is argued, choose alternate arrangements and practices that would possibly confer greater revenue on publishers or lower prices on advertisers without imposing burdens on competitors.

For example, a report published by the Omidyar Network sketching a “roadmap” for a case against Google claims that, if Google’s practices could possibly be reimagined to achieve the same benefits in ways that foster competition from rivals, then the practices should be condemned as anticompetitive:

It is clear even to us as lay people that there are less anticompetitive ways of delivering effective digital advertising—and thereby preserving the substantial benefits from this technology—than those employed by Google.

– Fiona M. Scott Morton & David C. Dinielli, “Roadmap for a Digital Advertising Monopolization Case Against Google”

But that’s not how the law—or the economics—works. This approach converts beneficial aspects of Google’s ad-tech business into anticompetitive defects, essentially arguing that successful competition and innovation create barriers to entry that merit correction through antitrust enforcement.

This approach turns U.S. antitrust law (and basic economics) on its head. As some of the most well-known words of U.S. antitrust jurisprudence have it:

A single producer may be the survivor out of a group of active competitors, merely by virtue of his superior skill, foresight and industry. In such cases a strong argument can be made that, although, the result may expose the public to the evils of monopoly, the Act does not mean to condemn the resultant of those very forces which it is its prime object to foster: finis opus coronat. The successful competitor, having been urged to compete, must not be turned upon when he wins.

– United States v. Aluminum Co. of America, 148 F.2d 416 (2d Cir. 1945)

U.S. antitrust law is intended to foster innovation that creates benefits for consumers, including innovation by incumbents. The law does not proscribe efficiency-enhancing unilateral conduct on the grounds that it might also inconvenience competitors, or that there is some other arrangement that could be “even more” competitive. Under U.S. antitrust law, firms are “under no duty to help [competitors] survive or expand.”  

To be sure, the allegations against Google are couched in terms of anticompetitive effect, rather than being described merely as commercial disagreements over the distribution of profits. But these effects are simply inferred, based on assumptions that Google’s vertically integrated business model entails an inherent ability and incentive to harm rivals.

The Texas complaint claims Google can surreptitiously derive benefits from display advertisers by leveraging its search-advertising capabilities, or by “withholding YouTube inventory,” rather than altruistically opening Google Search and YouTube up to rival ad networks. The complaint alleges Google uses its access to advertiser, publisher, and user data to improve its products without sharing this data with competitors.

All these charges may be true, but they do not describe inherently anticompetitive conduct. Under U.S. law, companies are not obliged to deal with rivals, and they certainly are not obliged to do so on those rivals’ preferred terms.

As long ago as 1919, the U.S. Supreme Court held that:

In the absence of any purpose to create or maintain a monopoly, the [Sherman Act] does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.

– United States v. Colgate & Co., 250 U.S. 300 (1919)

U.S. antitrust law does not condemn conduct on the basis that an enforcer (or a court) is able to identify or hypothesize alternative conduct that might plausibly provide similar benefits at lower cost. In alleging that there are ostensibly “better” ways that Google could have pursued its product design, pricing, and terms of dealing, both the Texas complaint and the Omidyar “roadmap” assert that, had the firm only selected a different path, it could have produced even more benefits or an even more competitive structure.

The purported cure of tinkering with benefit-producing unilateral conduct by applying an “even more competition” benchmark is worse than the supposed disease. The adjudicator is likely to misapply such a benchmark, deterring the very conduct the law seeks to promote.

For example, the Texas complaint alleges: “Google’s ad server passed inside information to Google’s exchange and permitted Google’s exchange to purchase valuable impressions at artificially depressed prices.” The Omidyar Network’s “roadmap” claims that “after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. Low prices for this service can force rivals to depart, thereby directly reducing competition.”

In contrast, as current U.S. Supreme Court Associate Justice Stephen Breyer once explained, in the context of above-cost low pricing, “the consequence of a mistake here is not simply to force a firm to forego legitimate business activity it wishes to pursue; rather, it is to penalize a procompetitive price cut, perhaps the most desirable activity (from an antitrust perspective) that can take place in a concentrated industry where prices typically exceed costs.”  That commentators or enforcers may be able to imagine alternative or theoretically more desirable conduct is beside the point.

It has been reported that the U.S. Justice Department (DOJ) may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

Digital advertising is the economic backbone of the Internet. It allows websites and apps to monetize their userbase without having to charge them fees, while the emergence of targeted ads allows this to be accomplished affordably and with less wasted time.

This advertising is facilitated by intermediaries using the “adtech stack,” through which advertisers and publishers are matched via auctions and ads ultimately are served to relevant users. This intermediation process has advanced enormously over the past three decades. Some now allege, however, that this market is being monopolized by its largest participant: Google.

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display advertising business. Those 10 original state plaintiffs were joined by another four states and the Commonwealth of Puerto Rico in March 2021, while South Carolina and Louisiana have also moved to be added as additional plaintiffs. Google also faces a pending antitrust lawsuit brought by the U.S. Justice Department (DOJ) and 14 states (originally 11) related to the company’s distribution agreements, as well as a separate action by the State of Utah, 35 other states, and the District of Columbia related to its search design.

In recent weeks, it has been reported that the DOJ may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

Relevant market

The Texas complaint identifies at least five relevant markets within the adtech stack that it alleges Google either is currently monopolizing or is attempting to monopolize:

  1. Publisher ad servers;
  2. Display ad exchanges;
  3. Display ad networks;
  4. Ad-buying tools for large advertisers; and
  5. Ad-buying tools for small advertisers.

None of these constitute an economically relevant product market for antitrust purposes, since each “market” is defined according to how superficially similar the products are in function, not how substitutable they are. Nevertheless, the Texas complaint vaguely echoes how markets were conceived in the “Roadmap” for a case against Google’s advertising business, published last year by the Omidyar Network, which may ultimately influence any future DOJ complaint, as well.

The Omidyar Roadmap narrows the market from media advertising to digital advertising, then to the open supply of display ads, which comprises only 9% of total advertising spending and less than 20% of digital advertising, as shown in the figure below. It then further narrows the defined market to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the Roadmap authors conclude that Google’s market share is “perhaps sufficient to confer market power.”
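A back-of-the-envelope calculation shows just how aggressive this narrowing is. Only the two percentages quoted above come from the Roadmap; the variable names below are illustrative:

```python
# Back-of-the-envelope check on the Roadmap's market-narrowing, using only
# the two percentages quoted in the text (all names here are illustrative).

open_display_share_of_total = 0.09    # open display = 9% of all ad spend
open_display_share_of_digital = 0.20  # open display < 20% of digital ad spend

# If 9% of total ad spend is less than 20% of digital ad spend, then
# digital ad spend must be at least 9% / 20% = 45% of total ad spend.
implied_min_digital_share = (
    open_display_share_of_total / open_display_share_of_digital
)
print(f"Digital advertising is at least {implied_min_digital_share:.0%} of total ad spend")
# → Digital advertising is at least 45% of total ad spend
```

In other words, the proposed market (intermediation of the open supply of display ads) is a narrow slice of a far broader advertising ecosystem, most of which is excluded from the analysis by definition.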

While whittling down the defined market may achieve the purposes of sketching a roadmap to prosecute Google, it also generates a mishmash of more than a dozen relevant markets for digital display and video advertising. In many of these, Google doesn’t have anything approaching market power, while, in some, Facebook is the most dominant player.

The Texas complaint adopts a non-economic approach to market definition. It ignores potential substitutability between different kinds of advertising, both online and offline, which can serve as a competitive constraint on the display advertising market. The complaint considers neither alternative forms of display advertising, such as social media ads, nor alternative forms of advertising, such as search ads or non-digital ads—all of which can and do act as substitutes. It is possible, at the very least, that advertisers who choose to place ads on third-party websites may switch to other forms of advertising if the price of third-party website advertising were above competitive levels. To ignore this possibility, as the Texas complaint does, is to ignore the very purpose of defining a relevant antitrust market.

Offline advertising vs. online advertising

The fact that offline and online advertising employ distinct processes does not consign them to economically distinct markets. Indeed, online advertising has manifestly drawn advertisers from offline markets, just as previous technological innovations drew advertisers from other pre-existing channels.

Moreover, there is evidence that, in some cases, offline and online advertising are substitute products. For example, economists Avi Goldfarb and Catherine Tucker demonstrate that display advertising pricing is sensitive to the availability of offline alternatives. They conclude:

We believe our studies refute the hypothesis that online and offline advertising markets operate independently and suggest a default position of substitution. Online and offline advertising markets appear to be closely related. That said, it is important not to draw any firm conclusions based on historical behavior.

Display ads vs. search ads

There is perhaps even more reason to doubt that online display advertising constitutes a distinct, economically relevant market from online search advertising.

Although casual and ill-informed claims are often made to the contrary, various forms of targeted online advertising are significant competitors of each other. Bo Xing and Zhanxi Lin report that firms spread their marketing budgets across these different sources of online marketing, and that “search engine optimizers”—firms that help websites maximize the likelihood of a valuable “top-of-list” organic search placement—attract significant revenue. That is, all of these different channels vie against each other for consumer attention and offer advertisers the ability to target their advertising based on data gleaned from consumers’ interactions with their platforms.

Facebook built a business on par with Google’s thanks in large part to advertising, by taking advantage of users’ more extended engagement with the platform to assess relevance and by enabling richer, more engaged advertising than previously appeared on Google Search. It’s an entirely different model from search, but one that has turned Facebook into a competitive ad platform.

And the market continues to shift. Somewhere between 37% and 56% of product searches start on Amazon, according to one survey, and advertisers have noticed. This is not surprising, given Amazon’s strong ability to match consumers with advertisements, and to do so when and where consumers are more likely to make a purchase.

‘Open’ display advertising vs. ‘owned-and-operated’ display advertising

The United Kingdom’s Competition and Markets Authority (CMA), like the Omidyar Roadmap report, has identified two distinct channels of display advertising, which it terms “owned and operated” and “open.” The CMA concludes:

Over half of display expenditure is generated by Facebook, which owns both the Facebook platform and Instagram. YouTube has the second highest share of display advertising and is owned by Google. The open display market, in which advertisers buy inventory from many publishers of smaller scale (for example, newspapers and app providers) comprises around 32% of display expenditure.

The Texas complaint does not directly address the distinction between the open and owned-and-operated channels, but it does allege anticompetitive conduct by Google with respect to YouTube in a separate “inline video advertising market.”

The CMA finds that the owned-and-operated channel mostly comprises large social media platforms, which sell their own advertising inventory directly to advertisers or media agencies through self-service interfaces, such as Facebook Ads Manager or Snapchat Ads Manager. In contrast, in the open display channel, publishers such as online newspapers and blogs sell their inventory to advertisers through a “complex chain of intermediaries.” These intermediaries run auctions that match advertisers’ ads to publishers’ inventory of ad space. In both channels, nearly all transactions are run through programmatic technology.
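The “complex chain of intermediaries” the CMA describes is, at bottom, a matching mechanism. As a rough illustration only—real exchanges involve bid requests across multiple exchanges, sub-second timing, and (increasingly) first-price rules; the bidder names and prices below are entirely hypothetical—the core matching step can be sketched as the second-price auction historically used by ad exchanges:

```python
# Highly simplified sketch of the auction an ad-tech intermediary runs to
# match advertiser bids to a single publisher ad impression. Illustrative
# only: all names and numbers are hypothetical.

def run_auction(bids):
    """Second-price rules: the highest bidder wins the impression but
    pays the second-highest bid (or its own bid if it is the only bidder)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = {"advertiser_a": 2.50, "advertiser_b": 1.75, "advertiser_c": 1.10}
winner, price = run_auction(bids)
# advertiser_a wins the impression but pays advertiser_b's bid of 1.75
```

The economic point is that the same matching logic operates in both channels; what differs is who runs it (the platform itself in the owned-and-operated channel, a chain of intermediaries in the open channel), which is why the two channels can substitute for one another.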

The CMA concludes that advertisers “largely see” the open and the owned-and-operated channels as substitutes. According to the CMA, an advertiser’s choice of one channel over the other is driven by each channel’s ability to meet the key performance metrics the advertising campaign is intended to achieve.

The Omidyar Roadmap argues, instead, that the CMA too narrowly focuses on the perspective of advertisers. The Roadmap authors claim that “most publishers” do not control supply that is “owned and operated.” As a result, they conclude that publishers “such as gardenandgun.com or hotels.com” do not have any owned-and-operated supply and can generate revenues from their supply “only through the Google-dominated adtech stack.” 

But this is simply not true. For example, in addition to inventory in its print media, Garden & Gun’s “Digital Media Kit” indicates that the publisher has several sources of owned-and-operated banner and video supply, including the desktop, mobile, and tablet ads on its website; a “homepage takeover” of its website; branded/sponsored content; its email newsletters; and its social media accounts. Hotels.com, an operating company of Expedia Group, has its own owned-and-operated search inventory, which it sells through its “Travel Ads Sponsored Listing,” as well as owned-and-operated supply of standard and custom display ads.

Given that both perform the same function and employ similar mechanisms for matching inventory with advertisers, it is unsurprising that both advertisers and publishers appear to consider the owned-and-operated channel and the open channel to be substitutes.

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, and Fahrenheit 451. Though these novels often shed light on the risks of contemporary society and the zeitgeist of the era in which they were written, they also almost always systematically overshoot the mark (intentionally or not) and severely underestimate the radical improvements that stem from the technologies (or other causes) that they fear.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. This is epitomized by influential publications such as the Club of Rome’s 1972 report The Limits to Growth, whose dire predictions of Malthusian catastrophe have largely failed to materialize.

In an article recently published in the George Mason Law Review, we argue that contemporary antitrust scholarship and commentary is similarly afflicted by dystopian thinking. In that respect, today’s antitrust pessimists have set their sights predominantly on the digital economy—”Big Tech” and “Big Data”—in the process of alleging a vast array of potential harms.

Scholars have notably argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and to more concentrated markets (e.g., here and here). In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time.

Some have gone so far as to argue that this threatens the very fabric of western democracy. For instance, parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. The gaming company released a short video clip parodying Apple’s famous “1984” ad (which, upon its release, was itself widely seen as a critique of the tech incumbents of the time). Similarly, a piece in the New Statesman—titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy”—concluded that:

Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?

In our article, we argue that these fears are symptomatic of two different but complementary phenomena, which we refer to as “Antitrust Dystopia” and “Antitrust Nostalgia.”

Antitrust Dystopia is the pessimistic tendency among competition scholars and enforcers to assert that novel business conduct will cause technological advances to have unprecedented, anticompetitive consequences. This is almost always grounded in the belief that “this time is different”—that, despite the benign or positive consequences of previous, similar technological advances, this time those advances will have dire, adverse consequences absent enforcement to stave off abuse.

Antitrust Nostalgia is the biased assumption—often built into antitrust doctrine itself—that change is bad. Antitrust Nostalgia holds that, because a business practice has seemingly benefited competition before, changing it will harm competition going forward. Thus, antitrust enforcement is often skeptical of, and triggered by, various deviations from status quo conduct and relationships (i.e., “nonstandard” business arrangements) when change is, to a first approximation, the hallmark of competition itself.

Our article argues that these two worldviews are premised on particularly questionable assumptions about the way competition unfolds, in this case, in data-intensive markets.

The Case of Big Data Competition

The notion that digital markets are inherently more problematic than their brick-and-mortar counterparts—if there even is a meaningful distinction—is advanced routinely by policymakers, journalists, and other observers. The fear is that, left to their own devices, today’s dominant digital platforms will become all-powerful, protected by an impregnable “data barrier to entry.” Against this alarmist backdrop, nostalgic antitrust scholars have argued for aggressive antitrust intervention against the nonstandard business models and contractual arrangements that characterize these markets.

But as our paper demonstrates, a proper assessment of the attributes of data-intensive digital markets does not support either the dire claims or the proposed interventions.

1. Data is information

One of the most salient features of the data created and consumed by online firms is that, jargon aside, it is just information. As with other types of information, it thus tends to have at least some traits usually associated with public goods (i.e., goods that are non-rivalrous in consumption and not readily excludable). As the National Bureau of Economic Research’s Catherine Tucker argues, data “has near-zero marginal cost of production and distribution even over long distances,” making it very difficult to exclude others from accessing it. Meanwhile, multiple economic agents can simultaneously use the same data, making it non-rivalrous in consumption.

As we explain in our paper, these features make the nature of modern data almost irreconcilable with the alleged hoarding and dominance that critics routinely associate with the tech industry.

2. Data is not scarce; expertise is

Another important feature of data is that it is ubiquitous. The predominant challenge for firms is not so much in obtaining data but, rather, in drawing useful insights from it. This has two important implications for antitrust policy.

First, although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As our survey of the empirical literature shows, data generally entails diminishing marginal returns.

Second, it is firms’ capabilities, rather than the data they own, that lead to success in the marketplace. Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around.

This dynamic can be seen at play in the early days of the search-engine market. In 2013, The Atlantic ran a piece titled “What the Web Looked Like Before Google.” By comparing the websites of Google and its rivals in 1998 (when Google Search was launched), the article shows how the current champion of search marked a radical departure from the status quo.

Even if it stumbled upon it by chance, Google immediately identified a winning formula for the search-engine market. It ditched the complicated classification schemes favored by its rivals and opted, instead, for a clean page with a single search box. This ensured that users could access the information they desired in the shortest possible amount of time—thanks, in part, to Google’s PageRank algorithm.

It is hardly surprising that Google’s rivals struggled to keep up with this shift in the search-engine industry. The theory of dynamic capabilities tells us that firms that have achieved success by indexing the web will struggle when the market rapidly moves toward a new paradigm (in this case, Google’s single search box and ten blue links). During the time it took these rivals to identify their weaknesses and repurpose their assets, Google kept on making successful decisions: notably, the introduction of Gmail, its acquisitions of YouTube and Android, and the introduction of Google Maps, among others.

Seen from this evolutionary perspective, Google thrived because its capabilities were perfect for the market at that time, while rivals were ill-adapted.

3. Data as a byproduct of, and path to, platform monetization

Policymakers should also bear in mind that platforms often must go to great lengths in order to create data about their users—data that these same users often do not know about themselves. Under this framing, data is a byproduct of firms’ activity, rather than an input necessary for rivals to launch a business.

This is especially clear when one looks at the formative years of numerous online platforms. Most of the time, these businesses were started by entrepreneurs who did not own much data but, instead, had a brilliant idea for a service that consumers would value. Even if data ultimately played a role in the monetization of these platforms, it does not appear that it was necessary for their creation.

Data often becomes significant only at a relatively late stage in these businesses’ development. A quick glance at the digital economy is particularly revealing in this regard. Google and Facebook, in particular, both launched their platforms under the assumption that building a successful product would eventually lead to significant revenues.

It took five years from its launch for Facebook to start making a profit. Even at that point, when the platform had 300 million users, it still was not entirely clear whether it would generate most of its income from app sales or online advertisements. It was another three years before Facebook started to cement its position as one of the world’s leading providers of online ads. During this eight-year timespan, Facebook prioritized user growth over the monetization of its platform. The company appears to have concluded (correctly, it turns out) that once its platform attracted enough users, it would surely find a way to make itself highly profitable.

This might explain how Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace. And Facebook is no outlier. The list of companies that prevailed despite starting with little to no data (and initially lacking a data-dependent monetization strategy) is lengthy. Other examples include TikTok, Airbnb, Amazon, Twitter, PayPal, Snapchat, and Uber.

Those who complain about the unassailable competitive advantages enjoyed by companies with troves of data have it exactly backward. Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

We’ve Been Here Before: The Microsoft Antitrust Saga

Dystopian and nostalgic discussions concerning the power of successful technology firms are nothing new. Throughout recent history, there have been repeated calls for antitrust authorities to rein in these large companies. These calls for regulation have often led to increased antitrust scrutiny of some form. The Microsoft antitrust cases—which ran from the 1990s to the early 2010s on both sides of the Atlantic—offer a good illustration of the misguided “Antitrust Dystopia.”

In the mid-1990s, Microsoft was one of the most successful and vilified companies in America. After it obtained a commanding position in the desktop operating system market, the company sought to establish a foothold in the burgeoning markets that were developing around the Windows platform (many of which were driven by the emergence of the Internet). These included the Internet browser and media-player markets.

The business tactics employed by Microsoft to execute this transition quickly drew the ire of the press and rival firms, ultimately landing Microsoft in hot water with antitrust authorities on both sides of the Atlantic.

However, as we show in our article, though there were numerous calls for authorities to adopt a precautionary principle-type approach to dealing with Microsoft—and antitrust enforcers were more than receptive to these calls—critics’ worst fears never came to be.

This positive outcome is unlikely to have been the result of the antitrust cases that were brought against Microsoft. In other words, the markets in which Microsoft operated seem to have self-corrected (or were misapprehended as lacking competitive constraints) and, today, are generally seen as being unproblematic.

This is not to say that antitrust interventions against Microsoft were necessarily misguided. Instead, our critical point is that commentators and antitrust decisionmakers routinely overlooked or misinterpreted the existing and nonstandard market dynamics that ultimately prevented the worst anticompetitive outcomes from materializing. This is supported by several key factors.

First, the remedies that were imposed against Microsoft by antitrust authorities on both sides of the Atlantic were ultimately quite weak. It is thus unlikely that these remedies, by themselves, prevented Microsoft from dominating its competitors in adjacent markets.

Note that, if this assertion is wrong, and antitrust enforcement did indeed prevent Microsoft from dominating online markets, then there is arguably no need to reform the antitrust laws on either side of the Atlantic, nor even to adopt a particularly aggressive enforcement position. The remedies that were imposed on Microsoft were relatively localized. Accordingly, if antitrust enforcement did indeed prevent Microsoft from dominating other online markets, then it is antitrust enforcement’s deterrent effect that is to thank, and not the remedies actually imposed.

Second, Microsoft lost its bottleneck position. One of the biggest changes that took place in the digital space was the emergence of alternative platforms through which consumers could access the Internet. Indeed, as recently as January 2009, roughly 94% of all Internet traffic came from Windows-based computers. Just over a decade later, this number has fallen to about 31%. Android, iOS, and OS X have shares of roughly 41%, 16%, and 7%, respectively. Consumers can thus access the web via numerous platforms. The emergence of these alternatives reduced the extent to which Microsoft could use its bottleneck position to force its services on consumers in online markets.

Third, it is possible that Microsoft’s own behavior ultimately sowed the seeds of its relative demise. In particular, the alleged barriers to entry (rooted in nostalgic market definitions and skeptical analysis of “ununderstandable” conduct) that were essential to establishing the antitrust case against the company may have been pathways to entry as much as barriers.

Consider this error in the Microsoft court’s analysis of entry barriers: the court pointed out that new entrants faced a barrier that Microsoft didn’t face, in that Microsoft didn’t have to contend with a powerful incumbent impeding its entry by tying up application developers.

But while this may be true, Microsoft did face the absence of any developers at all, and had to essentially create (or encourage the creation of) businesses that didn’t previously exist. Microsoft thus created a huge positive externality for new entrants: existing knowledge and organizations devoted to software development, industry knowledge, reputation, awareness, and incentives for schools to offer courses. It could well be that new entrants, in fact, faced lower barriers with respect to app developers than did Microsoft when it entered.

In short, new entrants may face even more welcoming environments because of incumbents. This enabled Microsoft’s rivals to thrive.

Conclusion

Dystopian antitrust prophecies are generally doomed to fail, just like those belonging to the literary world. The reason is simple. While it is easy to identify what makes dominant firms successful in the present (i.e., what enables them to hold off competitors in the short term), it is almost impossible to conceive of the myriad ways in which the market could adapt. Indeed, it is today’s supra-competitive profits that spur the efforts of competitors.

Surmising that the economy will come to be dominated by a small number of successful firms is thus the same as believing that all market participants can be outsmarted by a few successful ones. This might occur in some cases or for some period of time, but as our article argues, it is bound to happen far less often than pessimists fear.

In short, dystopian scholars have not successfully made the case for precautionary antitrust. Indeed, the economic features of data make it highly unlikely that today’s tech giants could anticompetitively maintain their advantage for an indefinite amount of time, much less leverage this advantage in adjacent markets.

With this in mind, there is one dystopian novel that offers a fitting metaphor to end this article. The Man in the High Castle tells the story of an alternate present, in which the Axis powers triumphed over the Allies in the Second World War. This turns the dystopian genre on its head: rather than arguing that the world is inevitably sliding toward a dark future, The Man in the High Castle posits that the present could be far worse than it is.

In other words, we should not take any of the luxuries we currently enjoy for granted. In the world of antitrust, critics routinely overlook that the emergence of today’s tech industry might have occurred thanks to, and not in spite of, existing antitrust doctrine. Changes to existing antitrust law should thus be dictated by a rigorous assessment of the various costs and benefits they would entail, rather than a litany of hypothetical concerns. The most recent wave of calls for antitrust reform has so far failed to clear this low bar.

The patent system is too often caricatured as involving the grant of “monopolies” that may be used to delay entry and retard competition in key sectors of the economy. The accumulation of allegedly “poor-quality” patents into thickets and portfolios held by “patent trolls” is said by critics to spawn excessive royalty-licensing demands and threatened “holdups” of firms that produce innovative products and services. These alleged patent abuses have been characterized as a wasteful “tax” on high-tech implementers of patented technologies, which inefficiently raises price and harms consumer welfare.

Fortunately, solid scholarship has debunked these stories and instead pointed to the key role patents play in enhancing competition and driving innovation. See, for example, here, here, here, here, here, here, and here.

Nevertheless, early indications are that the Biden administration may be adopting a patent-skeptical attitude. Such an attitude was revealed, for example, in the president’s July 9 Executive Order on Competition (which suggested an openness to undermining the Bayh-Dole Act by using march-in rights to set prices; to weakening pharmaceutical patent rights; and to weakening standard essential patents) and in the administration’s inexplicable decision to waive patent protection for COVID-19 vaccines (see here and here).

Before it takes further steps that would undermine patent protections, the administration should consider new research that underscores how patents help to spawn dynamic market growth through “design around” competition and through licensing that promotes new technologies and product markets.

Patents Spawn Welfare-Enhancing ‘Design Around’ Competition

Critics sometimes bemoan the fact that patents covering a new product or technology allegedly retard competition by preventing new firms from entering a market. (Never mind the fact that the market might not have existed but for the patent.) This thinking, which confuses a patent with a product-market monopoly, is badly mistaken. It is belied by the fact that the publicly available patented technology itself (1) provides valuable information to third parties; and (2) thereby incentivizes them to innovate and compete by refining technologies that fall outside the scope of the patent. In short, patents on important new technologies stimulate, rather than retard, competition. They do this by leading third parties to “design around” the patented technology and thus generate competition that features a richer set of technological options realized in new products.

The importance of design around is revealed, for example, in the development of the incandescent light bulb market in the late 19th century, in reaction to Edison’s patent on a long-lived light bulb. In a 2021 article in the Journal of Competition Law and Economics, Ron D. Katznelson and John Howells did an empirical study of this important example of product innovation. The article’s synopsis explains:

Designing around patents is prevalent but not often appreciated as a means by which patents promote economic development through competition. We provide a novel empirical study of the extent and timing of designing around patent claims. We study the filing rate of incandescent lamp-related patents during 1878–1898 and find that the enforcement of Edison’s incandescent lamp patent in 1891–1894 stimulated a surge of patenting. We studied the specific design features of the lamps described in these lamp patents and compared them with Edison’s claimed invention to create a count of noninfringing designs by filing date. Most of these noninfringing designs circumvented Edison’s patent claims by creating substitute technologies to enable participation in the market. Our forward citation analysis of these patents shows that some had introduced pioneering prior art for new fields. This indicates that invention around patents is not duplicative research and contributes to dynamic economic efficiency. We show that the Edison lamp patent did not suppress advance in electric lighting and the market power of the Edison patent owner weakened during this patent’s enforcement. We propose that investigation of the effects of design around patents is essential for establishing the degree of market power conferred by patents.

In a recent commentary, Katznelson highlights the procompetitive consumer welfare benefits of the Edison light bulb design around:

GE’s enforcement of the Edison patent by injunctions did not stifle competition nor did it endow GE with undue market power, let alone a “monopoly.” Instead, it resulted in clear and tangible consumer welfare benefits. Investments in design-arounds resulted in tangible and measurable dynamic economic efficiencies by (a) increased competition, (b) lamp price reductions, (c) larger choice of suppliers, (d) acceleration of downstream development of new electric illumination technologies, and (e) collateral creation of new technologies that would not have been developed for some time but for the need to design around Edison’s patent claims. These are all imparted benefits attributable to patent enforcement.

Katznelson further explains that “the mythical harm to innovation inflicted by enforcers of pioneer patents is not unique to the Edison case.” He cites additional research debunking claims that the Wright brothers’ pioneer airplane patent seriously retarded progress in aviation (“[a]ircraft manufacturing and investments grew at an even faster pace after the assertion of the Wright Brothers’ patent than before”) and debunking similar claims made about the early radio industry and the early automobile industry. He also notes strong research refuting the patent holdup conjecture regarding standard essential patents. He concludes by bemoaning “infringers’ rhetoric” that “suppresses information on the positive aspects of patent enforcement, such as the design-around effects that we study in this article.”

The Bayh-Dole Act: Licensing that Promotes New Technologies and Product Markets

The Bayh-Dole Act of 1980 has played an enormously important role in accelerating American technological innovation by creating a property rights-based incentive to use government labs. As this good summary from the Biotechnology Innovation Organization puts it, it “[e]mpowers universities, small businesses and non-profit institutions to take ownership [through patent rights] of inventions made during federally-funded research, so they can license these basic inventions for further applied research and development and broader public use.”

The act has continued to generate many new welfare-enhancing technologies and related high-tech business opportunities even during the “COVID slowdown year” of 2020, according to a newly released survey by a nonprofit organization representing the technology management community (see here):  

- The number of startup companies launched around academic inventions rose from 1,040 in 2019 to 1,117 in 2020. Almost 70% of these companies locate in the same state as the research institution that licensed them—making Bayh-Dole a critical driver of state and regional economic development;
- Invention disclosures went from 25,392 to 27,112 in 2020;
- New patent applications increased from 15,972 to 17,738;
- Licenses and options went from 9,751 in ’19 to 10,050 in ’20, with 60% of licenses going to small companies; and
- Most impressive of all—new products introduced to the market based on academic inventions jumped from 711 in 2019 to 933 in 2020.

Despite this continued record of success, the Biden administration has taken actions that create uncertainty about the government’s support for Bayh-Dole.

As explained by the Congressional Research Service, “march-in rights allow the government, in specified circumstances, to require the contractor or successors in title to the patent to grant a ‘nonexclusive, partially exclusive, or exclusive license’ to a ‘responsible applicant or applicants.’ If the patent owner refuses to do so, the government may grant the license itself.” Government march-in rights thus far have not been invoked, but a serious threat of their routine invocation would greatly disincentivize future use of Bayh-Dole, thereby undermining patent-backed innovation.

Despite this, the president’s July 9 Executive Order on Competition (noted above) instructed the U.S. Commerce Department to defer finalizing a regulation (see here) “that would have ensured that march-in rights under Bayh Dole would not be misused to allow the government to set prices, but utilized for its statutory intent of providing oversight so good faith efforts are being made to turn government-funded innovations into products. But that’s all up in the air now.”

What’s more, a new U.S. Energy Department policy that would more closely scrutinize Bayh-Dole patentees’ licensing transactions and acquisitions (apparently to encourage more domestic manufacturing) has raised questions in the Bayh-Dole community and may discourage licensing transactions (see here and here). Added to this is the fact that “prominent Members of Congress are pressing the Biden Administration to misconstrue the march-in rights clause to control prices of products arising from National Institutes of Health and Department of Defense funding.” All told, therefore, the outlook for continued patent-inspired innovation through Bayh-Dole processes appears to be worse than it has been in many years.

Conclusion

The patent system does far more than provide potential rewards to enhance incentives for particular individuals to invent. The system also creates a means to enhance welfare by facilitating the diffusion of technology through market processes (see here).

But it does even more than that. It actually drives new forms of dynamic competition by inducing third parties to design around new patents, to the benefit of consumers and the overall economy. As revealed by the Bayh-Dole Act, it also has facilitated the more efficient use of federal labs to generate innovation and new products and processes that would not otherwise have seen the light of day. Let us hope that the Biden administration pays heed to these benefits to the American economy and thinks again before taking steps that would further weaken our patent system.     

The American Choice and Innovation Online Act (previously called the Platform Anti-Monopoly Act), introduced earlier this summer by U.S. Rep. David Cicilline (D-R.I.), would significantly change the nature of digital platforms and, with them, the Internet itself. Taken together, the bill’s provisions would turn platforms into passive intermediaries, undermining many of the features that make them valuable to consumers. This seems likely to remain the case even after potential revisions intended to minimize the bill’s unintended consequences.

In its current form, the bill is split into two parts, each of which is dangerous in its own right. The first, Section 2(a), would prohibit almost any kind of “discrimination” by platforms. Because it is so open-ended, lawmakers might end up removing it in favor of the nominally more focused provisions of Section 2(b), which prohibit certain named conduct. But despite being more specific, this section of the bill is incredibly far-reaching and would effectively ban swaths of essential services.

I will address the potential effects of these sections point-by-point, but both elements of the bill suffer from the same problem: a misguided assumption that “discrimination” by platforms is necessarily bad from a competition and consumer welfare point of view. On the contrary, this conduct is often exactly what consumers want from platforms, since it helps to bring order and legibility to otherwise-unwieldy parts of the Internet. Prohibiting it, as both main parts of the bill do, would make the Internet harder to use and less competitive.

Section 2(a)

Section 2(a) essentially prohibits any behavior by a covered platform that would advantage that platform’s services over any others that also use that platform; it characterizes this preferencing as “discrimination.”

As we wrote when the House Judiciary Committee’s antitrust bills were first announced, this prohibition on “discrimination” is so broad that, if it made it into law, it would prevent platforms from excluding or disadvantaging any product of another business that uses the platform or advantaging their own products over those of their competitors.

The underlying assumption here is that platforms should be like telephone networks: providing a way for different sides of a market to communicate with each other, but doing little more than that. When platforms do do more—for example, manipulating search results to favor certain businesses or to give their own products prominence—it is seen as exploitative “leveraging.”

But consumers often want platforms to be more than just a telephone network or directory, because digital markets would be very difficult to navigate without some degree of “discrimination” between sellers. The Internet is so vast and sellers are often so anonymous that any assistance which helps you choose among options can serve to make it more navigable. As John Gruber put it:

From what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

Sometimes, this manifests itself as “self-preferencing” of another service, to reduce additional time spent searching for the information you want. When you search for a restaurant on Google, it can be very useful to get information like user reviews, the restaurant’s phone number, a button on mobile to phone them directly, estimates of how busy it is, and a link to a Maps page to see how to actually get there.

This is, undoubtedly, frustrating for competitors like Yelp, who would like this information not to be there and for users to have to click on either a link to Yelp or a link to Google Maps. But whether it is good or bad for Yelp isn’t relevant to whether it is good for users—and it is at least arguable that it is, which makes a blanket prohibition on this kind of behavior almost inevitably harmful.

If it isn’t obvious why removing this kind of feature would be harmful for users, ask yourself why some users search in Yelp’s app directly for this kind of result. The answer, I think, is that Yelp gives you all the information above that Google does (and sometimes is better, although I tend to trust Google Maps’ reviews over Yelp’s), and it’s really convenient to have all that on the same page. If Google could not provide this kind of “rich” result, many users would probably stop using Google Search to look for restaurant information in the first place, because a new friction would have been added that made the experience meaningfully worse. Removing that option would be good for Yelp, but mainly because it removes a competitor.

If all this feels like stating the obvious, then it should highlight a significant problem with Section 2(a) in the Cicilline bill: it prohibits conduct that is directly value-adding for consumers, and that creates competition for dedicated services like Yelp that object to having to compete with this kind of conduct.

This is true across all the platforms the legislation proposes to regulate. Amazon prioritizes some third-party products over others on the basis of user reviews, rates of returns and complaints, and so on; Amazon provides private label products to fill gaps in certain product lines where existing offerings are expensive or unreliable; Apple pre-installs a Camera app on the iPhone that, obviously, enjoys an advantage over rival apps like Halide.

Some or all of this behavior would be prohibited under Section 2(a) of the Cicilline bill. Combined with the bill’s presumption that conduct must be defended affirmatively—that is, the platform is presumed guilty unless it can prove that the challenged conduct is pro-competitive, which may be very difficult to do—this means the bill could prospectively eliminate a huge range of socially valuable behavior.

Supporters of the bill have already been left arguing that the law simply wouldn’t be enforced in these cases of benign discrimination. But this would hardly be an improvement. It would mean the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) have tremendous control over how these platforms are built, since they could challenge conduct in virtually any case. The regulatory uncertainty alone would complicate the calculus for these firms as they refine, develop, and deploy new products and capabilities. 

So one potential compromise might be to do away with this broad-based rule and proscribe specific kinds of “discriminatory” conduct instead. This approach would involve removing Section 2(a) from the bill but retaining Section 2(b), which enumerates 10 practices it deems to be “other discriminatory conduct.” This may seem appealing, as it would potentially avoid the worst abuses of the broad-based prohibition. In practice, however, it would carry many of the same problems. In fact, many of 2(b)’s provisions appear to go even further than 2(a), and would proscribe even more procompetitive conduct that consumers want.

Sections 2(b)(1) and 2(b)(9)

The wording of these provisions is extremely broad and, as drafted, would seem to challenge even the existence of vertically integrated products. As such, these prohibitions are potentially even more extensive and invasive than Section 2(a) would have been. Even a narrower reading here would seem to preclude safety and privacy features that are valuable to many users. iOS’s sandboxing of apps, for example, serves to limit the damage that a malware app can do on a user’s device precisely because of the limitations it imposes on what other features and hardware the app can access.

Section 2(b)(2)

This provision would preclude a firm from conditioning preferred status on use of another service from that firm. This would likely undermine the purpose of platforms, which is to absorb and counter some of the risks involved in doing business online. An example of this is Amazon’s tying eligibility for its Prime program to sellers that use Amazon’s delivery service (FBA, or Fulfillment by Amazon). The bill seems to presume in an example like this that Amazon is leveraging its power in the market—in the form of the value of the Prime label—to profit from delivery. But Amazon could, and already does, charge directly for listing positions; it’s unclear why it would benefit from charging via FBA when it could just charge for the Prime label.

An alternate, simpler explanation is that FBA improves the quality of the service, by granting customers greater assurance that a Prime product will arrive when Amazon says it will. Platforms add value by setting out rules and providing services that reduce the uncertainties between buyers and sellers they’d otherwise experience if they transacted directly with each other. This section’s prohibition—which, as written, would seem to prevent any kind of quality assurance—likely would bar labelling by a platform, even where customers explicitly want it.

Section 2(b)(3)

As written, this would prohibit platforms from using aggregated data to improve their services at all. If Apple found that 99% of its users uninstalled an app immediately after it was installed, it would be reasonable to conclude that the app may be harmful or broken in some way, and that Apple should investigate. This provision would ban that.

Sections 2(b)(4) and 2(b)(6)

These two provisions effectively prohibit a platform from using information it does not also provide to sellers. Such prohibitions ignore the fact that it is often good for sellers to lack certain information, since withholding information can prevent abuse by malicious users. For example, a seller may sometimes try to bribe their customers to post positive reviews of their products, or even threaten customers who have posted negative ones. Part of the role of a platform is to combat that kind of behavior by acting as a middleman and forcing both consumer users and business users to comply with the platform’s own mechanisms to control that kind of behavior.

If this seems overly generous to platforms—since, obviously, it gives them a lot of leverage over business users—ask yourself why people use platforms at all. It is not a coincidence that people often prefer Amazon to dealing with third-party merchants and having to navigate those merchants’ sites themselves. The assurance that Amazon provides is extremely valuable for users. Much of it comes from the company’s ability to act as a middleman in this way, lowering the transaction costs between buyers and sellers.

Section 2(b)(5)

This provision restricts the treatment of defaults. It is, however, relatively restrained when compared to, for example, the DOJ’s lawsuit against Google, which treats as anticompetitive even payment for defaults that can be changed. Still, many of the arguments that apply in that case also apply here: default status for apps can be a way to recoup income foregone elsewhere (e.g., a browser provided for free that makes its money by selling the right to be the default search engine).

Section 2(b)(7)

This section gets to the heart of why “discrimination” can often be procompetitive: that it facilitates competition between platforms. The kind of self-preferencing that this provision would prohibit can allow firms that have a presence in one market to extend that position into another, increasing competition in the process. Both Apple and Amazon have used their customer bases in smartphones and e-commerce, respectively, to grow their customer bases for video streaming, in competition with Netflix, Google’s YouTube, cable television, and each other. If Apple designed a search engine to compete with Google, it would do exactly the same thing, and we would be better off because of it. Restricting this kind of behavior is, perversely, exactly what you would do if you wanted to shield these incumbents from competition.

Section 2(b)(8)

As with other provisions, this one would preclude one of the mechanisms by which platforms add value: creating assurance for customers about the products they can expect if they visit the platform. Some of this relates to child protection; some of the most frustrating stories involve children being overcharged when they use an iPhone or Android app, and effectively being ripped off because of poor policing of the app (or insufficiently strict pricing rules by Apple or Google). This may also relate to rules that state that the seller cannot offer a cheaper product elsewhere (Amazon’s “General Pricing Rule” does this, for example). Prohibiting this would simply impose a tax on customers who cannot shop around and would prefer to use a platform that they trust has the lowest prices for the item they want.

Section 2(b)(10)

Ostensibly a “whistleblower” provision, this section could leave platforms with no recourse, not even removing a user from its platform, in response to spurious complaints intended purely to extract value for the complaining business rather than to promote competition. On its own, this sort of provision may be fairly harmless, but combined with the provisions above, it allows the bill to add up to a rent-seekers’ charter.

Conclusion

In each case above, it’s vital to remember that a reversed burden of proof applies. So, there is a high chance that the law will side against the defendant business, and a large downside for conduct that ends up being found to violate these provisions. That means that platforms will likely err on the side of caution in many cases, avoiding conduct that is ambiguous, and society will probably lose a lot of beneficial behavior in the process.

Put together, the provisions undermine much of what has become an Internet platform’s role: to act as an intermediary, de-risk transactions between customers and merchants who don’t know each other, and tweak the rules of the market to maximize its attractiveness as a place to do business. The “discrimination” that the bill would outlaw is, in practice, behavior that makes it easier for consumers to navigate marketplaces of extreme complexity and uncertainty, in which they often know little or nothing about the firms with whom they are trying to transact business.

Customers do not want platforms to be neutral, open utilities. They can choose platforms that are like that already, such as eBay. They generally tend to prefer ones like Amazon, which are not neutral and which carefully cultivate their service to be as streamlined, managed, and “discriminatory” as possible. Indeed, many of people’s biggest complaints with digital platforms relate to their openness: the fake reviews, counterfeit products, malware, and spam that come with letting more unknown businesses use your service. While these may be unavoidable by-products of running a platform, platforms compete on their ability to ferret them out. Customers are unlikely to thank legislators for regulating Amazon into being another eBay.

For a potential entrepreneur, just how much time it will take to compete, and the barrier to entry that time represents, will vary greatly depending on the market he or she wishes to enter. A would-be competitor to the likes of Subway, for example, might not find the time needed to open a sandwich shop to be a substantial hurdle. Even where it does take a long time to bring a product to market, it may be possible to accelerate the timeline if the potential profits are sufficiently high. 

As Steven Salop notes in a recent paper, however, there may be cases where long periods of production time are intrinsic to a product: 

If entry takes a long time, then the fear of entry may not provide a substantial constraint on conduct. The firm can enjoy higher prices and profits until the entry occurs. Even if a strong entrant into the 12-year-old scotch market begins the entry process immediately upon announcement of the merger of its rivals, it will not be able to constrain prices for a long time. [emphasis added]

Salop’s point relates to the supply-side substitutability of Scotch whisky (sic — Scotch whisky is spelt without an “e”). That is, to borrow from the European Commission’s definition, whether “suppliers are able to switch production to the relevant products and market them in the short term.” Scotch is aged in wooden barrels for a number of years (at least three, but often longer) before being bottled and sold, and the value of Scotch usually increases with age. 

Due to this protracted manufacturing process, Salop argues, an entrant cannot compete with an incumbent dominant firm for however many years it would take to age the Scotch; they cannot produce the relevant product in the short term, no matter how high the profits collected by a monopolist are, and hence no matter how strong the incentive to enter the market. If I wanted to sell 12-year-old Scotch, to use Salop’s example, it would take me 12 years to enter the market. In the meantime, a dominant firm could extract monopoly rents, leading to higher prices for consumers. 

But can a whisky producer “enjoy higher prices and profits until … entry occurs”? A dominant firm in the 12-year-old Scotch market will not necessarily be immune to competition for the entire 12-year period it would take to produce a Scotch of the same vintage. There are various ways, both on the demand and supply side, that pressure could be brought to bear on a monopolist in the Scotch market.

One way could be to bring whiskies that are being matured for longer-maturity bottles (like 16- or 18-year-old Scotches) into service at the 12-year maturity point, shifting this supply to a market in which profits are now relatively higher. 

Alternatively, distilleries may try to use younger batches to produce whiskies that resemble 12-year-old whiskies in flavor. A 2013 article from The Scotsman discusses this possibility in relation to major Scottish whisky brand Macallan’s decision to switch to selling exclusively No-Age Statement (NAS — they do not bear an age on the bottle) whiskies:

Experts explained that, for example, nine and 11-year-old whiskies—not yet ready for release under the ten and 12-year brands—could now be blended together to produce the “entry-level” Gold whisky immediately.

An aged Scotch cannot contain any whisky younger than the age stated on the bottle, but an NAS alternative can contain anything over three years (though older whiskies are often used to capture a flavor more akin to a 12-year dram). For many drinkers, NAS whiskies are a close substitute for 12-year-old whiskies. They often compete with aged equivalents on quality and flavor and can command similar prices to aged bottles in the 12-year category. More than 80% of bottles sold bear no age statement. While this figure includes non-premium bottles, the share of NAS whiskies traded at auction on the secondary market, presumably more likely to be premium, increased from 20% to 30% in the years between 2013 and 2018.

There are also whiskies matured outside of Scotland, in regions such as Taiwan and India, that can achieve flavor profiles akin to those of older whiskies more quickly, thanks to warmer climates that speed the chemical reactions inside the barrels. Maturation can be accelerated further by using smaller barrels, which have a higher surface-area-to-volume ratio. Whiskies matured in hotter climates and smaller barrels can be brought to market even more quickly than NAS Scotch matured in the cooler Scottish climate, and may well replicate an older whisky more authentically.

“Whiskies” that can be manufactured even more quickly may also be on the horizon. Some startups in the United States are experimenting with rapid-aging technology which would allow them to produce a whisky-like spirit in a very short amount of time. As detailed in a recent article in The Economist, Endless West in California is using technology that ages spirits within 24 hours, with the resulting bottles selling for $40 – a bit less than many 12-year-old Scotches. Although attempts to break the conventional maturation process are nothing new, recent attempts have won awards in blind taste-test competitions.

None of this is to dismiss Salop’s underlying point. But it may suggest that, even for a product where time appears to be an insurmountable barrier to entry, there may be more ways to compete than we initially assume.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that it will be more than 80 companies, and likely far more. The Klobuchar bill does not explicitly outlaw such mergers; rather, under certain circumstances, it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
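To make the trigger concrete, here is a minimal sketch of the burden-shifting condition as described above. The function name and all figures are purely illustrative assumptions for this post, not real company data or language from the bill:

```python
def burden_shifts(market_cap_b: float, assets_b: float,
                  revenue_b: float, deal_value_m: float) -> bool:
    """Sketch of the described thresholds: the burden of proof shifts if
    the acquirer exceeds $100 billion in market cap, assets, OR annual
    net revenue, and the deal is valued at $50 million or more.
    Acquirer figures are in billions; deal value is in millions."""
    big_acquirer = max(market_cap_b, assets_b, revenue_b) > 100
    return big_acquirer and deal_value_m >= 50

# Illustrative (made-up) figures:
print(burden_shifts(120, 40, 30, 75))    # $120B market cap, $75M deal -> True
print(burden_shifts(90, 50, 60, 500))    # below every $100B threshold -> False
```

As the second call shows, even a very large deal escapes the presumption if the acquirer clears none of the three $100 billion thresholds, which is why the screen turns on firm size rather than deal size alone.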

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies (Facebook, Amazon, Apple, Netflix, and Alphabet, Google's parent company) satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms would be subject to the shift in burden of proof. Zoom and Square have market caps that would trip the bill's threshold, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately held Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what market shares Walmart, Costco, Kroger, or Nike hold, or even what comprises “the” market in which these companies compete. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at crafting market definitions so narrow that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application of the burden of proof. If the bill passes, we will soon face a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Abbott Laboratories, AbbVie, Adobe Inc., Advanced Micro Devices, Alphabet Inc., Amazon, American Express, American Tower, Amgen, Apple Inc., Applied Materials, AT&T, Bank of America, Berkshire Hathaway, BlackRock, Boeing, Bristol Myers Squibb, Broadcom Inc., Caterpillar Inc., Charles Schwab Corp., Charter Communications, Chevron Corp., Cisco Systems, Citigroup, Comcast, Costco, CVS Health, Danaher Corp., Deere & Co., Eli Lilly and Co., ExxonMobil, Facebook Inc., General Electric Co., Goldman Sachs, Honeywell, IBM, Intel, Intuit, Intuitive Surgical, Johnson & Johnson, JPMorgan Chase, Lockheed Martin, Lowe’s, Mastercard, McDonald’s, Medtronic, Merck & Co., Microsoft, Morgan Stanley, Netflix, NextEra Energy, Nike Inc., Nvidia, Oracle Corp., PayPal, PepsiCo, Pfizer, Philip Morris International, Procter & Gamble, Qualcomm, Raytheon Technologies, Salesforce, ServiceNow, Square Inc., Starbucks, Target Corp., Tesla Inc., Texas Instruments, The Coca-Cola Co., The Estée Lauder Cos., The Home Depot, The Walt Disney Co., Thermo Fisher Scientific, T-Mobile US, Union Pacific Corp., United Parcel Service, UnitedHealth Group, Verizon Communications, Visa Inc., Walmart, Wells Fargo, Zoom Video Communications

Publicly traded companies with more than $100 billion in current assets

Ally Financial, American International Group, BNY Mellon, Capital One, Citizens Financial Group, Fannie Mae, Fifth Third Bank, First Republic Bank, Ford Motor Co., Freddie Mac, KeyBank, M&T Bank, Northern Trust, PNC Financial Services, Regions Financial Corp., State Street Corp., Truist Financial, U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Anthem, Cardinal Health, Centene Corp., Cigna, Dell Technologies, General Motors, Kroger, McKesson Corp., Walgreens Boots Alliance

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and imposing new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will create unnecessary business uncertainty, on top of the public and private resources wasted on litigation.

Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication. 

The Risks of Antitrust by Anecdote

The failure of a well-resourced federal regulator, joined by more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers of a conclusory approach to antitrust enforcement, one that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates that would support such far-reaching interventions. The dangers to the rule of law are self-evident. Irrespective of one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of potential evidence of competitive harm.

The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s brief does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.  

While the point may seem uncontroversial, it is important to re-appreciate why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that provides the market with a basis for reasonably anticipating the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and can often yield cost savings that ultimately redound to consumers’ benefit. That is: firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that cannot sufficiently define the relevant market and plausibly show market power, antitrust litigation could easily be used as a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words: antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.

The Specter of No-Fault Antitrust Liability

The absence of any specific demonstration of market power suggests deficient lawyering or the inability to gather supporting evidence. Giving the FTC litigation team the benefit of the doubt, the latter becomes the stronger possibility. If that is the case, this implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions for firms that cross a certain revenue or market capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it has market power and has secured or maintained such power due to anticompetitive practices, rather than business prowess. This would effectively convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention. Such an approach runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.  

Even at the high-water point of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected. This was for good reason. Any such rule would likely injure consumers by precluding them from enjoying the cost savings that result from the “sweet spot” scenario in which the scale and scope economies of large firms are combined with sufficiently competitive conditions to yield reduced prices and increased convenience for consumers. Additionally, any such rule would eliminate incumbents’ incentives to work harder to offer consumers reduced prices and increased convenience, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.

Remembering Why Market Power Matters

To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not potentially engaged in anticompetitive practices.  The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than being assumed or “proved” based largely on suggestive anecdotes.  

Perhaps market power will be shown sufficiently in Facebook’s case if the FTC elects to respond to the court’s invitation to resubmit its brief with a plausible definition of the relevant market and indication of market power at this stage of the litigation. If that threshold is satisfied, then thorough consideration of the allegedly anticompetitive effect of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in relying on the merger-review process under the Hart-Scott-Rodino Act, the burden of proof on the government should be appropriately enhanced to reflect the significant time that has elapsed since regulatory decisions not to intervene in those transactions.  

It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now leads much of the legislative and regulatory discussion on antitrust policy, it is imperative to reiterate the rationale behind this elementary principle. 

This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms or multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat such an exercise as an anachronistic nuisance. Antitrust cases are inherently challenging and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.