Archives For antitrust

The Competition and Antitrust Law Enforcement Reform Act (CALERA), recently introduced in the U.S. Senate, exhibits a remarkable willingness to cast aside decades of evidentiary standards that courts have developed to uphold the rule of law by precluding factually and economically ungrounded applications of antitrust law. Without those safeguards, antitrust enforcement is prone to be driven by a combination of prosecutorial and judicial fiat. That would place at risk the free play of competitive forces that the antitrust laws are designed to protect.

Antitrust law inherently lends itself to the risk of erroneous interpretations of ambiguous evidence. Outside clear cases of interfirm collusion, virtually all conduct that might appear anticompetitive might just as easily be proven, after significant factual inquiry, to be procompetitive. This fundamental risk of a false diagnosis has guided antitrust case law and regulatory policy since at least the Supreme Court’s landmark Continental Television v. GTE Sylvania decision in 1977 and arguably earlier. Judicial and regulatory efforts to mitigate this ambiguity, while preserving the deterrent power of the antitrust laws, have resulted in the evidentiary requirements that are targeted by the proposed bill.

Proponents of the legislative “reforms” might argue that modern antitrust case law’s careful avoidance of enforcement error yields excessive caution. To relieve regulators and courts from having to do their homework before disrupting a targeted business and its employees, shareholders, customers and suppliers, the proposed bill empowers plaintiffs to allege and courts to “find” anticompetitive conduct without being bound to the reasonably objective metrics upon which courts and regulators have relied for decades. That runs the risk of substituting rhetoric and intuition for fact and analysis as the guiding principles of antitrust enforcement and adjudication.

This dismissal of even a rudimentary commitment to rule-of-law principles is illustrated by two dramatic departures from existing case law in the proposed bill. Each constitutes a largely unrestrained “blank check” for regulatory and judicial overreach.

Blank Check #1

The bill includes a broad prohibition on “exclusionary” conduct, which is defined to include any conduct that “materially disadvantages 1 or more actual or potential competitors” and “presents an appreciable risk of harming competition.” That amorphous language arguably enables litigants to target a firm that offers consumers lower prices but “disadvantages” less efficient competitors that cannot match that price.

In fact, the proposed legislation specifically facilitates this litigation strategy by relieving predatory-pricing plaintiffs of having to show that pricing is below cost or likely ultimately to result in profits for the defendant. While the bill permits a defendant to escape liability by showing sufficiently countervailing “procompetitive benefits,” the onus rests on the defendant to make that showing. This burden-shifting strategy encourages lagging firms to shift competition from the marketplace to the courthouse.

Blank Check #2

The bill then removes another evidentiary safeguard by relieving plaintiffs from always having to define a relevant market. Rather, it may be sufficient to show that the contested practice gives rise to an “appreciable risk of harming competition … based on the totality of the circumstances.” It is hard to miss the high degree of subjectivity in this standard.

This ambiguous threshold runs counter to antitrust principles that require a credible showing of market power in virtually all cases except horizontal collusion. Those principles make perfect sense. Market power is the gateway concept that enables courts to distinguish between claims that plausibly target alleged harms to competition and those that do not. Without a well-defined market, it is difficult to know whether a particular practice reflects market power or market competition. Removing the market power requirement can remove any meaningful grounds on which a defendant could avoid a nuisance lawsuit or contest or appeal a conclusory allegation or finding of anticompetitive conduct.

Anti-Market Antitrust

The bill’s transparently outcome-driven approach is likely to give rise to a cloud of liability that penalizes businesses that benefit consumers through price and quality combinations that competitors cannot replicate. This obviously runs directly counter to the purpose of the antitrust laws. Certainly, winners can and sometimes do entrench themselves through potentially anticompetitive practices that should be closely scrutinized. However, the proposed legislation seems to reflect a presumption that successful businesses usually win by employing illegitimate tactics, rather than simply being the most efficient firm in the market. Under that presumption, competition law becomes a tool for redoing, rather than enabling, competitive outcomes.

However popular this populist approach may be, it is neither economically sound nor consistent with a market-driven economy in which resources are mostly allocated through pricing mechanisms and government intervention is the exception, not the rule. It would appear that some legislators would like to reverse that presumption. Far from being a victory for consumers, that outcome would constitute a resounding loss.

The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).

Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.

This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.

There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.

What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for a firm to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D becomes a stock of R&D that it can use in producing whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.

This method (set out by Bronwyn Hall (1990, 1993)) applies the “perpetual inventory” approach to a firm’s annual expenditures on R&D (a separate line item in most company accounts) to calculate the firm’s stock of R&D in any particular year. The perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., its stock of R&D—should not be controversial.

All this method requires to obtain a firm’s R&D stock for this year is the firm’s R&D stock and its investment in R&D (i.e., its R&D expenditures) from last year. This year’s R&D stock is then the sum of those R&D expenditures and the undepreciated portion of last year’s R&D stock that is carried forward into this year.
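
In symbols, this year’s stock equals last year’s R&D expenditure plus (1 − δ) times last year’s stock, where δ is the depreciation rate discussed below. As a minimal illustrative sketch (in Python, with a function name and default depreciation rate of my own choosing rather than anything prescribed by Hall), the recursion might look like this:

```python
def rnd_stock_series(expenditures, initial_stock, delta=0.15):
    """Perpetual inventory sketch of a firm's R&D stock.

    Uses the timing convention described above: this year's stock is
    last year's R&D expenditure plus the undepreciated portion of last
    year's stock. `expenditures` lists annual R&D outlays starting in
    the first year of the data; `initial_stock` is the assumed stock
    in that first year (see the discussion of starting values below).
    """
    stocks = [initial_stock]
    for spend_last_year in expenditures[:-1]:
        stocks.append(spend_last_year + (1 - delta) * stocks[-1])
    return stocks
```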

As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would use only a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of R&D through the increased knowledge and expertise of its employees, it seems reasonable to include such spending in a firm’s stock of R&D.

As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate of depreciation of the stock of R&D each period. Hall suggests a depreciation rate of 15% per year (compared to roughly 7% per year for physical capital), and estimates presented by Hall, along with Wendy Li (2018), suggest that the figure can be as high as 50% in some industries, albeit with a wide range across industries.

The other assumption required for this method is an estimate of the firm’s initial stock of R&D. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990 to 2016. This means that you can calculate the firm’s stock of R&D for each year once you have its R&D stock in the previous year, via the formula above.

When calculating the firm’s R&D stock for 2016, you need to know what its R&D stock was in 2015, while to calculate its R&D stock for 2015 you need to know its R&D stock in 2014, and so on backward until you reach the first year for which you have data: in this case, 1990.

However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.

There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you could assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter taken by Hall as 8% per year). Note that, given the high depreciation rates for the stock of R&D, the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
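
A quick numerical check, reusing the hypothetical rnd_stock_series() sketch from above with invented figures, illustrates both the Hall-style starting value and how quickly its influence fades at a 15% depreciation rate:

```python
delta, growth = 0.15, 0.08                # Hall's depreciation and growth rates
spend = [100.0] * 27                      # hypothetical flat R&D spending, 1990-2016

hall_start = spend[0] / (delta + growth)  # Hall's suggested 1990 starting value
tiny_start = 0.01                         # or "just a very small number"

hall = rnd_stock_series(spend, hall_start, delta)
tiny = rnd_stock_series(spend, tiny_start, delta)

print(1 - (1 - delta) ** 5)  # ~0.56: over half the initial value is gone in five years
print(hall[-1] / tiny[-1])   # ~1.01: by 2016, the choice of starting value barely matters
```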

Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above. For example, sometimes a firm’s stock of R&D is measured using a simple count of the number of patents it holds. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing numbers of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to use an “average value of patents sold recently” to value any given patent. At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.

The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.
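
The share computation itself is straightforward once each firm’s stock has been estimated as above; the short sketch below uses invented firm names and figures:

```python
# Hypothetical 2016 R&D stocks for firms in an R&D-intensive industry,
# each estimated with the perpetual inventory sketch above.
stocks_2016 = {"Firm A": 663.0, "Firm B": 310.0, "Firm C": 152.0}

total = sum(stocks_2016.values())
shares = {firm: stock / total for firm, stock in stocks_2016.items()}
print(shares)  # each firm's share of the industry-wide R&D stock
```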

Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.

The U.S. Supreme Court will hear a challenge next month to the 9th U.S. Circuit Court of Appeals’ 2020 decision in NCAA v. Alston. Alston affirmed a district court decision that enjoined the National Collegiate Athletic Association (NCAA) from enforcing rules that restrict the education-related benefits its member institutions may offer students who play Football Bowl Subdivision football and Division I basketball.

This will be the first Supreme Court review of NCAA practices since NCAA v. Board of Regents in 1984, which applied the antitrust rule of reason in striking down the NCAA’s “artificial limit” on the quantity of televised college football games, but also recognized that “this case involves an industry in which horizontal restraints on competition are essential if the product [intercollegiate athletic contests] is to be available at all.” Significantly, in commenting on the nature of appropriate, competition-enhancing NCAA restrictions, the court in Board of Regents stated that:

[I]n order to preserve the character and quality of the [NCAA] ‘product,’ athletes must not be paid, must be required to attend class, and the like. And the integrity of the ‘product’ cannot be preserved except by mutual agreement; if an institution adopted such restrictions unilaterally, its effectiveness as a competitor on the playing field might soon be destroyed. Thus, the NCAA plays a vital role in enabling college football to preserve its character, and as a result enables a product to be marketed which might otherwise be unavailable. In performing this role, its actions widen consumer choice – not only the choices available to sports fans but also those available to athletes – and hence can be viewed as procompetitive. [footnote citation omitted]

One’s view of the Alston case may be shaped by one’s priors regarding the true nature of the NCAA. Is the NCAA a benevolent Dr. Jekyll, which seeks to promote amateurism and fairness in college sports to the benefit of student athletes and the general public?  Or is its benevolent façade a charade?  Although perhaps a force for good in its early years, has the NCAA transformed itself into an evil Mr. Hyde, using restrictive rules to maintain welfare-inimical monopoly power as a seller cartel of athletic events and a monopsony employer cartel that suppresses athletes’ wages? I will return to this question—and its bearing on the appropriate resolution of this legal dispute—after addressing key contentions by both sides in Alston.

Summarizing the Arguments in NCAA v. Alston

The Alston class-action case followed in the wake of the 9th Circuit’s decision in O’Bannon v. NCAA (2015). O’Bannon affirmed in large part a district court’s ruling that the NCAA illegally restrained trade, in violation of Section 1 of the Sherman Act, by preventing football and men’s basketball players from receiving compensation for the use of their names, images, and likenesses. It also affirmed the district court’s injunction insofar as it required the NCAA to implement the less restrictive alternative of permitting athletic scholarships for the full cost of attendance. (I commented approvingly on the 9th Circuit’s decision in a previous TOTM post.) 

Subsequent antitrust actions by student-athletes were consolidated in the district court. After a bench trial, the district court entered judgment for the student-athletes, concluding in part that NCAA limits on education-related benefits were unreasonable restraints of trade. It enjoined those limits but declined to hold that other NCAA limits on compensation unrelated to education likewise violated Section 1.

In May 2020, a 9th Circuit panel held that the district court properly applied the three-step Sherman Act Section 1 rule of reason analysis in determining that the enjoined rules were unlawful restraints of trade.

First, the panel concluded that the student-athletes carried their burden at step one by showing that the restraints produced significant anticompetitive effects within the relevant market for student-athletes’ labor.

At step two, the NCAA was required to come forward with evidence of the restraints’ procompetitive effects. The panel endorsed the district court’s conclusion that only some of the challenged NCAA rules served the procompetitive purpose of preserving amateurism and thus improving consumer choice by maintaining a distinction between college and professional sports. Those rules were limits on above-cost-of-attendance payments unrelated to education, the cost-of-attendance cap on athletic scholarships, and certain restrictions on cash academic or graduation awards and incentives. The panel affirmed the district court’s conclusion that the remaining rules—restricting non-cash education-related benefits—did nothing to foster or preserve consumer demand. The panel held that the record amply supported the findings of the district court, which relied on demand analysis, survey evidence, and NCAA testimony.

The panel also affirmed the district court’s conclusion that, at step three, the student-athletes showed that any legitimate objectives could be achieved in a substantially less restrictive manner. The district court identified a less restrictive alternative of prohibiting the NCAA from capping certain education-related benefits and limiting academic or graduation awards or incentives below the maximum amount that an individual athlete may receive in athletic participation awards, while permitting individual conferences to set limits on education-related benefits. The panel held that the district court did not clearly err in determining that this alternative would be virtually as effective in serving the procompetitive purposes of the NCAA’s current rules and could be implemented without significantly increased cost.

Finally, the panel held that the district court’s injunction was not impermissibly vague and did not usurp the NCAA’s role as the superintendent of college sports. The panel also declined to broaden the injunction to include all NCAA compensation limits, including those on payments untethered to education. The panel concluded that the district court struck the right balance in crafting a remedy that prevented anticompetitive harm to student-athletes while serving the procompetitive purpose of preserving the popularity of college sports.

The NCAA appealed to the Supreme Court, which granted the NCAA’s petition for certiorari Dec. 16, 2020. The NCAA contends that under Board of Regents, the NCAA rules regarding student-athlete compensation are reasonably related to preserving amateurism in college sports, are procompetitive, and should have been upheld after a short deferential review, rather than under the full three-step rule of reason. According to the NCAA’s petition for certiorari, even under the detailed rule of reason, the 9th Circuit’s decision was defective. Specifically:

The Ninth Circuit … relieved plaintiffs of their burden to prove that the challenged rules unreasonably restrain trade, instead placing a “heavy burden” on the NCAA … to prove that each category of its rules is procompetitive and that an alternative compensation regime created by the district court could not preserve the procompetitive distinction between college and professional sports. That alternative regime—under which the NCAA must permit student-athletes to receive unlimited “education-related benefits,” including post-eligibility internships that pay unlimited amounts in cash and can be used for recruiting or retention—will vitiate the distinction between college and professional sports. And via the permanent injunction the Ninth Circuit upheld, the alternative regime will also effectively make a single judge in California the superintendent of a significant component of college sports. The Ninth Circuit’s approval of this judicial micromanagement of the NCAA denies the NCAA the latitude this Court has said it needs, and endorses unduly stringent scrutiny of agreements that define the central features of sports leagues’ and other joint ventures’ products. The decision thus twists the rule of reason into a tool to punish (and thereby deter) procompetitive activity.

Two amicus briefs support the NCAA’s position. One, filed on behalf of “antitrust law and business school professors,” stresses that the 9th Circuit’s decision misapplied the third step of the rule of reason by requiring defendants to show that their conduct was the least restrictive means available (instead of requiring plaintiff to prove the existence of an equally effective but less restrictive rule). More broadly:

[This approach] permits antitrust plaintiffs to commandeer the judiciary and use it to regulate and modify routine business conduct, so long as that conduct is not the least restrictive conduct imaginable by a plaintiff’s attorney or district judge. In turn, the risk that procompetitive ventures may be deemed unlawful and subject to treble damages liability simply because they could have operated in a marginally less restrictive manner is likely to chill beneficial business conduct.

A second brief, filed on behalf of “antitrust economists,” emphasizes that the NCAA has adapted the rules governing design of its product (college amateur sports) over time to meet consumer demand and to prevent colleges from pursuing their own interests (such as “pay to play”) in ways that would conflict with the overall procompetitive aims of the collaboration. While acknowledging that antitrust courts are free to scrutinize collaborations’ rules that go beyond the design of the product itself (such as the NCAA’s broadcast restrictions), the brief cites key Supreme Court decisions (NCAA v. Board of Regents and Texaco Inc. v. Dagher) for the proposition that courts should stay out of restrictions on the core activity of the joint venture itself. It then summarizes the policy justification for such judicial non-interference:

Permitting judges and juries to apply the Sherman Act to such decisions [regarding core joint venture activity] will inevitably create uncertainty that undermines innovation and investment incentives across any number of industries and collaborative ventures. In these circumstances, antitrust courts would be making public policy regarding the desirability of a product with particular features, as opposed to ferreting out agreements or unilateral conduct that restricts output, raises prices, or reduces innovation to the detriment of consumers.

In their brief opposing certiorari, counsel for Alston take the position that, in reality, the NCAA is seeking a special antitrust exemption for its competitively restrictive conduct—an issue that should be determined by Congress, not courts. Their brief notes that the concept of “amateurism” has changed over the years and that some increases in athletes’ compensation have been allowed over time. Thus, in the context of big-time college football and basketball:

[A]mateurism is little more than a pretext. It is certainly not a Sherman Act concept, much less a get-out-of-jail-free card that insulates any particular set of NCAA restraints from scrutiny.

Who Has the Better Case?

The NCAA’s position is a strong one. Association rules touching on compensation for college athletes are part of the core nature of the NCAA’s “amateur sports” product, as the Supreme Court stated (albeit in dictum) in Board of Regents. Furthermore, subsequent Supreme Court jurisprudence (see 2010’s American Needle Inc. v. NFL) has eschewed second-guessing of joint-venture product design decisions—which, in the case of the NCAA, involve formulating the restrictions (such as whether and how to compensate athletes) that are deemed key to defining amateurism.

The Alston amicus curiae briefs ably set forth the strong policy considerations that support this approach, centered on preserving incentives for the development of efficient welfare-generating joint ventures. Requiring joint venturers to provide “least restrictive means” justifications for design decisions discourages innovative activity and generates costly uncertainty for joint-venture planners, to the detriment of producers and consumers (who benefit from joint-venture innovations) alike. Claims by respondent Alston that the NCAA is in effect seeking to obtain a judicial antitrust exemption miss the mark; rather, the NCAA merely appears to be arguing that antitrust should be limited to evaluating restrictions that fall outside the scope of the association’s core mission. Significantly, as discussed in the NCAA’s brief petitioning for certiorari, decisions by other federal courts of appeals (in the 3rd, 5th, and 7th Circuits) have treated NCAA bylaws going to the definition of amateurism in college sports as presumptively procompetitive and not subject to close scrutiny. Thus, based on the arguments set forth by litigants, a Supreme Court victory for the NCAA in Alston would appear sound as a matter of law and economics.

There may, however, be a catch. Some popular commentary has portrayed the NCAA as a malign organization that benefits affluent universities (and their well-compensated coaches) while allowing member colleges to exploit athletes by denying them fair pay—in effect, an institutional Mr. Hyde.

What’s more, consistent with the Mr. Hyde story, a number of major free-market economists (including, among others, Nobel laureate Gary Becker) have portrayed the NCAA as an anticompetitive monopsony employer cartel that has suppressed the labor market demand for student athletes, thereby limiting their wages, fringe benefits, and employment opportunities. (In a similar vein, the NCAA is seen as a monopolist seller cartel in the market for athletic events.) Consistent with this perspective, promoting the public good of amateurism (the Dr. Jekyll story) is merely a pretextual façade (a cover story, if you will) for welfare-inimical naked cartel conduct. If one buys this alternative story, all core product restrictions adopted by the NCAA should be fair game for close antitrust scrutiny—and thus, the 9th Circuit’s decision in Alston merits affirmation as a matter of antitrust policy.

There is, however, a persuasive response to the cartel story, set forth in Richard McKenzie and Dwight Lee’s essay “The NCAA: A Case Study of the Misuse of the Monopsony and Monopoly Models” (Chapter 8 of their 2008 book “In Defense of Monopoly: How Market Power Fosters Creative Production”). McKenzie and Lee examine the evidence bearing on economists’ monopsony cartel assertions (and, in particular, the evidence presented in a 1992 study by Arthur Fleischer, Brian Goff, and Richard Tollison) and find it wanting:

Our analysis leads inexorably to the conclusion that the conventional economic wisdom regarding the intent and consequences of NCAA restrictions is hardly as solid, on conceptual grounds, as the NCAA critics assert, often without citing relevant court cases. We have argued that the conventional wisdom is wrong in suggesting that, as a general proposition,

• college athletes are materially “underpaid” and are “exploited”;

• cheating on NCAA rules is prima facie evidence of a cartel intending to restrict employment and suppress athletes’ wages;

• NCAA rules violate conventional antitrust doctrine;          

• barriers to entry ensure the continuance of the NCAA’s monopsony powers over athletes.

No such entry barriers (other than normal organizational costs, which need to be covered to meet any known efficiency test for new entrants) exist. In addition, the Supreme Court’s decision in NCAA indicates that the NCAA would be unable to prevent through the courts the emergence of competing athletic associations. The actual existence of other athletic associations indicates that entry would be not only possible but also practical if athletes’ wages were materially suppressed.

Conventional economic analysis of NCAA rules that we have challenged also is misleading in suggesting that collegiate sports would necessarily be improved if the NCAA were denied the authority to regulate the payment of athletes. Given the absence of legal barriers to entry into the athletic association market, it appears that if athletes’ wages were materially suppressed (or as grossly suppressed as the critics claim), alternative sports associations would form or expand, and the NCAA would be unable to maintain its presumed monopsony market position. The incentive for colleges and universities to break with the NCAA would be overwhelming.

From our interpretation of NCAA rules, it does not follow necessarily that athletes should not receive any more compensation than they do currently. Clearly, market conditions change, and NCAA rules often must be adjusted to accommodate those changes. In the absence of entry barriers, we can expect the NCAA to adjust, as it has adjusted, in a competitive manner its rules of play, recruitment, and retention of athletes. Our central point is that contrary to the proponents of the monopsony thesis, the collegiate athletic market is subject to the self-correcting mechanism of market pressures. We have reason to believe that the proposed extension of the antitrust enforcement to the NCAA rules or proposed changes in sports law explicitly or implicitly recommended by the proponents of the cartel thesis would be not only unnecessary but also counterproductive.

Although a closer examination of McKenzie and Lee’s critique of the economists’ cartel story is beyond the scope of this comment, I find it compelling.

Conclusion

In sum, the claim that antitrust may properly be applied to combat the alleged “exploitation” of college athletes by NCAA compensation regulations does not stand up to scrutiny. The NCAA’s rules that define the scope of amateurism may be imperfect, but there is no reason to think that empowering federal judges to second-guess and reformulate NCAA athletic compensation rules would yield a more socially beneficial (let alone optimal) outcome. (Believing that the federal judiciary can optimally reengineer core NCAA amateurism rules is a prime example of the Nirvana fallacy at work.)  Furthermore, a Supreme Court decision affirming the 9th Circuit could do broad mischief by undermining case law that has accorded joint venturers substantial latitude to design the core features of their collective enterprise without judicial second-guessing. It is to be hoped that the Supreme Court will do the right thing and strongly reaffirm the NCAA’s authority to design and reformulate its core athletic amateurism product as it sees fit.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

On October 20, 2020, the U.S. Department of Justice (DOJ) and eleven states with Republican attorneys general sued Google for monopolizing and attempting to monopolize the markets for general internet search services, search advertising, and “general search text” advertising (i.e., ads that resemble search results).  Last week, California joined the lawsuit, making it a bipartisan affair.

DOJ and the states (collectively, “the government”) allege that Google has used contractual arrangements to expand and cement its dominance in the relevant markets.  In particular, the government complains that Google has agreed to share search ad revenues in exchange for making Google Search the default search engine on various “search access points.” 

Google has entered such agreements with Apple (for search on iPhones and iPads), manufacturers of Android devices and the mobile service carriers that support them, and producers of web browsers.  Google is also pursuing default status on new internet-enabled consumer products, such as voice assistants and “smart” TVs, appliances, and wearables.  In the government’s telling, this all amounts to Google’s sharing of monopoly profits with firms that can ensure its continued monopoly by imposing search defaults that users are unlikely to alter.

There are several obvious weaknesses with the government’s case.  One is that preset internet defaults are super easy to change and, in other contexts, are regularly altered.  For example, while 88% of desktop and laptop computers use the Windows operating system, which defaults to a Microsoft browser (Internet Explorer or Edge), Google’s Chrome browser commands a 69% market share on desktops and laptops, compared to around 13% for Internet Explorer and Edge combined.  Changing a default search engine is as easy as changing a browser default—three simple steps on an iPhone!—and it seems consumers will change defaults they don’t actually prefer.

A second obvious weakness, related to the first, is that the government has alleged no facts suggesting that Google’s search rivals—primarily Bing, Yahoo, and DuckDuckGo—would have enjoyed more success but for Google’s purportedly exclusionary agreements.  Even absent default status, people likely would have selected Google Search because it’s the better search engine.  It doesn’t seem the challenged arrangements caused Google’s search dominance.

Admittedly, the standard of causation in monopolization cases (at least those seeking only injunctive relief) is low.  The D.C. Circuit’s Microsoft decision described it as “edentulous” or, less pretentiously, toothless.  Nevertheless, the government is unlikely to prevail in its action against Google—and that’s a good thing.  Below, I highlight the central deficiency in the government’s Google case and point out problems with the government’s challenges to each of Google’s purportedly exclusionary arrangements.   

The Lawsuit’s Overarching Deficiency

We’ve all had the experience of typing a query only to have Google, within a few key strokes, accurately predict what we were going to ask and provide us with exactly the answer we were looking for.  It’s both eerie and awesome, and it keeps us returning to Google time and again.

But it’s not magic.  Nor has Google hacked our brains.  Google is so good at predicting our questions and providing responsive search results because its top-notch algorithms process gazillions of searches and can “learn” from users’ engagement.  Scale is thus essential to Google’s quality. 

The government’s complaint concedes as much.  It acknowledges that “[g]reater scale improves the quality of a general search engine’s algorithms” (¶35) and that “[t]he additional data from scale allows improved automated learning for algorithms to deliver more relevant results, particularly on ‘fresh’ queries (queries seeking recent information), location-based queries (queries asking about something in the searcher’s vicinity), and ‘long-tail’ queries (queries used infrequently)” (¶36). The complaint also asserts that “[t]he most effective way to achieve scale is for the general search engine to be the preset default on mobile devices, computers, and other devices…” (¶38).

Oddly, though, the government chides Google for pursuing “[t]he most effective way” of securing the scale that concededly “improves the quality of a general search engine’s algorithms.”  Google’s efforts to ensure and enhance its own product quality are improper, the government says, because “they deny rivals scale to compete effectively” (¶8).  In the government’s view, Google is legally obligated to forego opportunities to make its own product better so as to give its rivals a chance to improve their own offerings.

This is inconsistent with U.S. antitrust law.  Just as firms are not required to hold their prices high to create a price umbrella for their less efficient rivals, they need not refrain from efforts to improve the quality of their own offerings so as to give their rivals a foothold. 

Antitrust does forbid anticompetitive foreclosure of rivals—i.e., business-usurping arrangements that are not the result of efforts to compete on the merits by reducing cost or enhancing quality.  But firms are, and should be, free to make their products better, even if doing so makes things more difficult for their rivals.  Antitrust, after all, protects competition, not competitors.    

The central deficiency in the government’s case is that it concedes that scale is crucial to search engine quality, but it does not assert that there is a “minimum efficient scale”—i.e., a point at which scale economies are exhausted.  If a firm takes actions to enhance its own scale beyond minimum efficient scale, and if its efforts may hold its rivals below such scale, then it may have engaged in anticompetitive foreclosure.  But a firm that pursues scale that makes its products better is simply competing on the merits.

The government likely did not allege that there is a minimum efficient scale in general internet search services because returns to scale go on indefinitely, or at least for a very long time.  But the absence of such an allegation damns the government’s case against Google, for it implies that Google’s efforts to secure the distribution, and thus the greater use, of its services make those services better.

In this regard, the Microsoft case, which the government points to as a model for its action against Google (¶10), is inapposite.  In that case, the government alleged that Microsoft had entered license agreements that foreclosed Netscape, a potential rival, from the best avenues of browser distribution: original equipment manufacturers (OEMs) and internet access providers.  The government here similarly alleges that Google has foreclosed rival search engines from the best avenues of search distribution: default settings on mobile devices and web browsers.  But a key difference (in addition to the fact that search defaults are quite easy to change) is that Microsoft’s license restrictions foreclosed Netscape without enhancing the quality of Microsoft’s offerings.  Indeed, the court emphasized that the challenged Microsoft agreements were anticompetitive because they “reduced rival browsers’ usage share not by improving [Microsoft’s] own product but, rather, by preventing OEMs from taking actions that could increase rivals’ share of usage” (emphasis added).  Here, any foreclosure of Google’s search rivals is incidental to Google’s efforts to improve its product by enhancing its scale.

Now, the government might contend that the anticompetitive harms from raising rivals’ distribution costs exceed the procompetitive benefits of enhancing the quality of Google’s search services.  Courts, though, have generally been skeptical of claims that exclusion-causing product enhancements are anticompetitive because they do more harm than good.  There’s a sound reason for this: courts are ill-equipped to weigh the benefits of product enhancements against the costs of competition reductions resulting from product-enhancement efforts.  For that reason, they should—and likely will—stick with the rule that this sort of product-enhancing conduct is competition on the merits, even if it has the incidental effect of raising rivals’ costs.  And if they do so, the government will lose this case.     

Problems with the Government’s Specific Challenges

Agreements with Android OEMs and Wireless Carriers

The government alleges that Google has foreclosed its search rivals from distribution opportunities on the Android platform.  It has done so, the government says, by entering into exclusion-causing agreements with OEMs that produce Android products (Samsung, Motorola, etc.) and with carriers that provide wireless service for Android devices (AT&T, Verizon, etc.).

Android is an open source operating system that is owned by Google and licensed, for free, to producers of mobile internet devices.  Under the terms of the challenged agreements, Google’s counterparties promise not to produce Android “forks”—operating systems that are Android-based but significantly alter or “fragment” the basic platform—in order to get access to proprietary Google apps that Android users typically desire and to certain application programming interfaces (APIs) that enable various functionalities.  In addition to these “anti-forking agreements,” counterparties enter various “pre-installation agreements” obligating them to install a suite of Google apps that use Google Search as a default.  Installing that suite is a condition for obtaining the right to pre-install Google’s app store (Google Play) and other must-have apps.  Finally, OEMs and carriers enter “revenue sharing agreements” that require the use of Google Search as the sole preset default on a number of search access points in exchange for a percentage of search ad revenue derived from covered devices.  Taken together, the government says, these anti-forking, pre-installation, and revenue-sharing agreements preclude the emergence of Android rivals (from forks) and ensure the continued dominance of Google Search on Android devices.

Eliminating these agreements, though, would likely harm consumers by reducing competition in the market for mobile operating systems.  Within that market, there are two dominant players: Apple’s iOS and Google’s Android.  Apple earns money off iOS by selling hardware—iPhones and iPads that are pre-installed with iOS.  Google licenses Android to OEMs for free but then earns advertising revenue off users’ searches (which provide an avenue for search ads) and other activities (which generate user data for better targeted display ads).  Apple and Google thus compete on revenue models.  As Randy Picker has explained, Microsoft tried a third revenue model—licensing a Windows mobile operating system to OEMs for a fee—but it failed.  The continued competition between Apple and Google, though, allows for satisfaction of heterogeneous consumer preferences: Apple products are more expensive but more secure (due to Apple’s tight control over software and hardware); Android devices are cheaper (as the operating system is ad-supported) and offer more innovations (as OEMs have more flexibility), but tend to be less secure.  Such variety—a result of business model competition—is good for consumers.

If the government were to prevail and force Google to end the agreements described above, thereby reducing the advertising revenue Google derives from Android, Google would have to either copy Apple’s vertically integrated model so as to recoup its Android investments through hardware sales, charge OEMs for Android (a la Microsoft), or cut back on its investments in Android.  In each case, consumers would suffer.  The first option would take away an offering preferred by many consumers—indeed most globally, as Android dominates iOS on a worldwide basis.  The second option would replace Google’s business model with one that failed, suggesting that consumers value it less.  The third option would reduce product quality in the market for mobile operating systems. 

In the end, then, the government’s challenge to Google’s Android agreements is myopic and misguided.  Competition among business models, like competition along any dimension, inures to the benefit of consumers.  Precluding it as the government is demanding would be silly.       

Agreements with Browser Producers

Web browsers like Apple’s Safari and Mozilla’s Firefox are a primary distribution channel for search engines.  The government claims that Google has illicitly foreclosed rival search engines from this avenue of distribution by entering revenue-sharing agreements with the major non-Microsoft browsers (i.e., all but Microsoft’s Edge and Internet Explorer).  Under those agreements, Google shares up to 40% of ad revenues generated from a browser in exchange for being the preset default on both computer and mobile versions of the browser.

Surely there is no problem, though, with search engines paying royalties to web browsers.  That’s how independent browsers like Opera and Firefox make money!  Indeed, 95% of Firefox’s revenue comes from search royalties.  If browsers were precluded from sharing in search engines’ ad revenues, they would have to find an alternative source of financing.  Producers of independent browsers would likely charge license fees, which consumers would probably avoid.  That means the only available browsers would be those affiliated with an operating system (Microsoft’s Edge, Apple’s Safari) or a search engine (Google’s Chrome).  It seems doubtful that reducing the number of viable browsers would benefit consumers.  The law should therefore allow payment of search royalties to browsers.  And if such payments are permitted, a browser will naturally set its default search engine so as to maximize its payout.  

Google’s search rivals can easily compete for default status on a browser by offering a better deal to the browser producer.  In 2014, for example, search engine Yahoo managed to wrest default status on Mozilla’s Firefox away from Google.  The arrangement was to last five years, but in 2017, Mozilla terminated the agreement and returned Google to default status because so many Firefox users were changing the browser’s default search engine from Yahoo to Google.  This historical example undermines the government’s challenges to Google’s browser agreements by showing (1) that other search engines can attain default status by competing, and (2) that defaults aren’t as “sticky” as the government claims—at least, not when the default is set to a search engine other than the one most people prefer.

In short, there’s nothing anticompetitive about Google’s browser agreements, and enjoining such deals would likely injure consumers by reducing competition among browsers.

Agreements with Apple

That brings us to the allegations that have gotten the most attention in the popular press: those concerning Google’s arrangements with Apple.  The complaint alleges that Google pays Apple $8-12 billion a year—a whopping 15-20% of Apple’s net income—for granting Google default search status on iOS devices.  In the government’s telling, Google is agreeing to share a significant portion of its monopoly profits with Apple in exchange for Apple’s assistance in maintaining Google’s search monopoly.

An alternative view, of course, is that Google is just responding to Apple’s power: Apple has assembled a giant installed base of loyal customers and can demand huge payments to favor one search engine over another on its popular mobile devices.  In that telling, Google may be paying Apple to prevent it from making Bing or another search engine the default on Apple’s search access points.

If that’s the case, what Google is doing is both procompetitive and a boon to consumers.  Microsoft could easily outbid Google to have Bing set as the default search engine on Apple’s devices. Microsoft’s market capitalization exceeds that of Google parent Alphabet by about $420 billion ($1.62 trillion versus $1.2 trillion), which is roughly the value of Walmart.  Despite its ability to outbid Google for default status, Microsoft hasn’t done so, perhaps because it realizes that defaults aren’t that sticky when the default service isn’t the one most people prefer.  Microsoft knows that from its experience with Internet Explorer and Edge (which collectively command only around 13% of the desktop browser market even though they’re the defaults on Windows, which has an 88% market share on desktops and laptops), and from its experience with Bing (where “Google” is the number one search term).  Nevertheless, the possibility remains that Microsoft could outbid Google for default status, improve its quality to prevent users from changing the default (or perhaps pay users for sticking with Bing), and thereby take valuable scale from Google, impairing the quality of Google Search.  To prevent that from happening, Google shares with Apple a generous portion of its search ad revenues, which, given the intense competition for mobile device sales, Apple likely passes along to consumers in the form of lower phone and tablet prices.

If the government succeeds in enjoining Google’s payments to Apple for default status, other search engines will presumably be precluded from such arrangements as well.  After all, the “foreclosure” effect of paying for default search status on Apple products is the same regardless of which search engine does the paying, and U.S. antitrust law does not “punish” successful firms by forbidding them from engaging in competitive activities that are open to their rivals. 

Ironically, then, the government’s success in its challenge to Google’s Apple payments would benefit Google at the expense of consumers:  Google would almost certainly remain the default search engine on Apple products, as it is most preferred by consumers and no rival could pay to dislodge it; Google would not have to pay a penny to retain its default status; and Apple would lose revenues that it likely passes along to consumers in the form of lower prices.  The courts are unlikely to countenance this perverse result by ruling that Google’s arrangements with Apple violate the antitrust laws.

Arrangements with Producers of Internet-Enabled “Smart” Devices

The final part of the government’s case against Google starkly highlights a problem that is endemic to the entire lawsuit.  The government claims that Google, having locked up all the traditional avenues of search distribution with the arrangements described above, is now seeking to foreclose search distribution in the new avenues being created by internet-enabled consumer products like wearables (e.g., smart watches), voice assistants, smart TVs, etc.  The alleged monopolistic strategy is similar to those described above: Google will share some of its monopoly profits in exchange for search default status on these smart devices, thereby preventing rival search engines from attaining valuable scale.

It’s easy to see in this context, though, why Google’s arrangements are likely procompetitive.  Unlike web browsers, mobile phones, and tablets, internet-enabled smart devices are novel.  Innovators are just now discovering new ways to embed internet functionality into everyday devices. 

Putting oneself in the position of these innovators helps illuminate a key beneficial aspect of Google’s arrangements:  They create an incentive to develop new and attractive means of distributing search.  Innovators currently at work on internet-enabled devices are no doubt spurred on by the possibility of landing a lucrative distribution agreement with Google or another search engine.  Banning these sorts of arrangements—the consequence of governmental success in this lawsuit—would diminish the incentive to innovate.

But that can be said of every single one of the arrangements the government is challenging. Because of Google’s revenue-sharing with search distributors, each of them has an added incentive to make its distribution channel desirable to consumers.  Android OEMs and Apple will work harder to produce mobile devices that people will want to use for internet searches; browser producers will endeavor to improve their offerings.  By paying producers of search access points a portion of the search ad revenues generated on their platforms, Google motivates them to generate more searches, which they can best do by making their products as attractive as possible. 

At the end of the day, then, the government’s action against Google seeks to condemn conduct that benefits consumers.  Because of the challenged arrangements, Google makes its own search services better, is able to license Android for free, ensures the continued existence of independent web browsers like Firefox and Opera, helps lower the price of iPhones and iPads, and spurs innovators to develop new “Internet of Things” devices that can harness the power of the web. 

The Biden administration would do well to recognize this lawsuit for what it is: a poorly conceived effort to appear to be “doing something” about a Big Tech company that has drawn the ire (for different reasons) of both progressives and conservatives.  DOJ and its state co-plaintiffs should seek dismissal of this action.  

The Federal Trade Commission and 46 state attorneys general (along with the District of Columbia and the Territory of Guam) filed their long-awaited complaints against Facebook Dec. 9. The crux of the arguments in both lawsuits is that Facebook pursued a series of acquisitions over the past decade that aimed to cement its prominent position in the “personal social media networking” market. 

Make no mistake, if successfully prosecuted, these cases would represent one of the most fundamental shifts in antitrust law since passage of the Hart-Scott-Rodino Act in 1976. That law required antitrust authorities to be notified of proposed mergers and acquisitions that exceed certain value thresholds, essentially shifting the paradigm for merger enforcement from ex-post to ex-ante review.

While the prevailing paradigm does not explicitly preclude antitrust enforcers from taking a second bite of the apple via ex-post enforcement, it has created an assumption among firms that regulatory clearance of a merger makes subsequent antitrust proceedings extremely unlikely. 

Indeed, the very point of ex-ante merger regulations is that ex-post enforcement, notably in the form of breakups, has tremendous social costs. It can scupper economies of scale and network effects on which both consumers and firms have come to rely. Moreover, the threat of costly subsequent legal proceedings will hang over firms’ pre- and post-merger investment decisions, and may thus reduce incentives to invest.

With their complaints, the FTC and state AGs threaten to undo this status quo. Even if current antitrust law allows it, pursuing this course of action threatens to quash the implicit assumption that regulatory clearance generally shields a merger from future antitrust scrutiny. Ex-post review of mergers and acquisitions also entails some positive features, but the Facebook complaints fail to consider these complicated trade-offs. This oversight could hamper tech and other U.S. industries.

Mergers and uncertainty

Merger decisions are probabilistic. Of the thousands of corporate acquisitions each year, only a handful end up deemed “successful.” These relatively few success stories have to pay for the duds in order to preserve the incentive to invest.

Switching from ex-ante to ex-post review enables authorities to focus their attention on the most lucrative deals. It stands to reason that they will not want to launch ex-post antitrust proceedings against bankrupt firms whose assets have already been stripped. Instead, as with the Facebook complaint, authorities are far more likely to pursue high-profile cases that boost their political capital.

This would be unproblematic if:

  1. Authorities would commit to ex-post prosecution only of anticompetitive mergers; and
  2. Parties could reasonably anticipate whether their deals would be deemed anticompetitive in the future. 

If those were the conditions, ex-post enforcement would merely reduce the incentive to partake in problematic mergers. It would leave welfare-enhancing deals unscathed. But where firms could not have ex-ante knowledge that a given deal would be deemed anticompetitive, the associated error costs should weigh against prosecuting such mergers ex post, even if such enforcement might appear desirable. The deterrent effect that would arise from such prosecutions would be applied by the market to all mergers, including efficient ones. Put differently, authorities might get the ex-post assessment right in one case, such as the Facebook proceedings, but the bigger picture remains that they could be wrong in many other cases. Firms will perceive this threat, and it may hinder their investments.

There is also reason to doubt that either of the ideal conditions for ex-post enforcement could realistically be met in practice. Ex-ante merger proceedings involve significant uncertainty. Indeed, antitrust-merger clearance decisions routinely have an impact on the merging parties’ stock prices. If management and investors knew whether their transactions would be cleared, those effects would be priced in when a deal is announced, not when it is cleared or blocked. Likewise, if firms knew a given merger would be blocked, they would not waste their resources pursuing it. In short, merging parties face real uncertainty even under ex-ante review.

Unless the answer is markedly different for ex-post merger reviews, authorities should proceed with caution. If parties cannot properly self-assess their deals, the threat of ex-post proceedings will weigh on pre- and post-merger investments (a breakup effectively amounts to expropriating investments that are dependent upon the divested assets). 

Furthermore, because authorities will likely focus ex-post reviews on the most lucrative deals, their incentive effects can be particularly pronounced. Parties may fear that the most successful mergers will be broken up. This could have wide-reaching effects for all merging firms that do not know whether they might become “the next Facebook.” 

Accordingly, for ex-post merger reviews to be justified, it is essential that:

  1. Their outcomes be predictable for the parties; and that 
  2. Analyzing the deals after the fact leads to better decision-making (fewer false acquittals and convictions) than ex-ante reviews would yield.

If these conditions are not in place, ex-post assessments will needlessly weigh down innovation, investment and procompetitive merger activity in the economy.

Hindsight does not disentangle efficiency from market power

So, could ex-post merger reviews be so predictable and effective as to alleviate the uncertainties described above, along with the costs they entail? 

Based on the recently filed Facebook complaints, the answer appears to be no. We simply do not know what the counterfactual to Facebook’s acquisitions of Instagram and WhatsApp would look like. Hindsight does not tell us whether Facebook’s acquisitions led to efficiencies that allowed it to thrive (a pro-competitive scenario), or whether Facebook merely used these deals to kill off competitors and maintain its monopoly (an anticompetitive scenario).

As Sam Bowman and I have argued elsewhere, when discussing the leaked emails that spurred the current proceedings and on which the complaints rely heavily:

These email exchanges may not paint a particularly positive picture of Zuckerberg’s intent in doing the merger, and it is possible that at the time they may have caused antitrust agencies to scrutinise the merger more carefully. But they do not tell us that the acquisition was ultimately harmful to consumers, or about the counterfactual of the merger being blocked. While we know that Instagram became enormously popular in the years following the merger, it is not clear that it would have been just as successful without the deal, or that Facebook and its other products would be less popular today. 

Moreover, it fails to account for the fact that Facebook had the resources to quickly scale Instagram up to a level that provided immediate benefits to an enormous number of users, instead of waiting for the app to potentially grow to such scale organically.

In fact, contrary to what some have argued, hindsight might even complicate matters (again from Sam and me):

Today’s commentators have the benefit of hindsight. This inherently biases contemporary takes on the Facebook/Instagram merger. For instance, it seems almost self-evident with hindsight that Facebook would succeed and that entry in the social media space would only occur at the fringes of existing platforms (the combined Facebook/Instagram platform) – think of the emergence of TikTok. However, at the time of the merger, such an outcome was anything but a foregone conclusion.

In other words, ex-post reviews will, by definition, focus on mergers where today’s outcomes seem preordained — when, in fact, they were probabilistic. This will skew decisions toward finding anticompetitive conduct. If authorities think that Instagram was destined to become great, they are more likely to find that Facebook’s acquisition was anticompetitive because they implicitly dismiss the idea that it was the merger itself that made Instagram great.

Authorities might also mistake correlation for causation. For instance, the state AGs’ complaint ties Facebook’s acquisitions of Instagram and WhatsApp to the degradation of these services, notably in terms of privacy and advertising loads. As the complaint lays out:

127. Following the acquisition, Facebook also degraded Instagram users’ privacy by matching Instagram and Facebook Blue accounts so that Facebook could use information that users had shared with Facebook Blue to serve ads to those users on Instagram. 

180. Facebook’s acquisition of WhatsApp thus substantially lessened competition […]. Moreover, Facebook’s subsequent degradation of the acquired firm’s privacy features reduced consumer choice by eliminating a viable, competitive, privacy-focused option

But these changes may have nothing to do with Facebook’s acquisition of these services. At the time, nearly all tech startups focused on growth over profits in their formative years. It should be no surprise that the platforms imposed higher “prices” on users after their acquisition by Facebook; they were maturing. Further monetizing their platforms would have been the logical next step, even absent the mergers.

It is just as hard to determine whether post-merger developments actually harmed consumers. For example, the FTC complaint argues that Facebook stopped developing its own photo-sharing capabilities after the Instagram acquisition, which the commission cites as evidence that the deal neutralized a competitor:

98. Less than two weeks after the acquisition was announced, Mr. Zuckerberg suggested canceling or scaling back investment in Facebook’s own mobile photo app as a direct result of the Instagram deal.

But it is not obvious that Facebook or consumers would have gained anything from the duplication of R&D efforts had Facebook continued to develop its own photo-sharing app. More importantly, this discontinuation is not evidence that Instagram could have overthrown Facebook. In other words, the fact that Instagram provided better photo-sharing capabilities does not necessarily imply that it could also provide a versatile platform that posed a threat to Facebook.

Finally, if Instagram’s stellar growth and photo-sharing capabilities were certain to overthrow Facebook’s monopoly, why do the plaintiffs ignore the competitive threat posed by the likes of TikTok today? Neither of the complaints makes any mention of TikTok, even though it currently has well over 1 billion monthly active users. The FTC and state AGs would have us believe that Instagram posed an existential threat to Facebook in 2012 but that Facebook faces no such threat from TikTok today. It is exceedingly unlikely that both these statements could be true, yet both are essential to the plaintiffs’ case.

Some appropriate responses

None of this is to say that ex-post review of mergers and acquisitions should be categorically out of the question. Rather, such proceedings should be initiated only with appropriate caution and consideration for their broader consequences.

When undertaking reviews of past mergers, authorities do not necessarily need to impose remedies every time they find a merger was wrongly cleared. The findings of these ex-post reviews could simply be used to adjust existing merger thresholds and presumptions. This would effectively create a feedback loop where false acquittals lead to meaningful policy reforms in the future.

At the very least, it may be appropriate for policymakers to set a higher bar for findings of anticompetitive harm and imposition of remedies in such cases. This would reduce the undesirable deterrent effects that such reviews may otherwise entail, while reserving ex-post remedies for the most problematic cases.

Finally, a tougher system of ex-post review could be used to allow authorities to take more risks during ex-ante proceedings. Indeed, when in doubt, they could effectively experiment by allowing marginal mergers to proceed, with the understanding that bad decisions could be clawed back afterwards. In that regard, it might also be useful to set precise deadlines for such reviews and to outline the types of concerns that might prompt scrutiny or warrant divestitures.

In short, some form of ex-post review may well be desirable. It could help antitrust authorities to learn what works and subsequently to make useful changes to ex-ante merger-review systems. But this would necessitate deep reflection on the many ramifications of ex-post reassessments. Legislative reform or, at the least, publication of guidance documents by authorities, seem like essential first steps. 

Unfortunately, this is the exact opposite of what the Facebook proceedings would achieve. Plaintiffs have chosen to ignore these complex trade-offs in pursuit of a case with extremely dubious underlying merits. Success for the plaintiffs would thus prove a Pyrrhic victory, destroying far more than it would achieve.

The Limits of Rivalry

Kelly Fayne —  2 November 2020
[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Kelly Fayne (Antitrust Associate, Latham & Watkins).]

Nicolas Petit, with Big Tech and the Digital Economy: The Moligopoly Scenario, enters the fray at this moment of peak consternation about big tech platforms to reexamine antitrust’s role as referee.  Amongst calls on the one hand like those in the Majority Staff Report and Recommendation from the Subcommittee on Antitrust (“these firms have too much power, and that power must be reined in and subject to appropriate oversight and enforcement”) and, on the other hand, understandably strong disagreement from the firms targeted, Petit offers a diagnosis.  A focus on the protection of rivalry for rivalry’s sake is insufficiently adaptive to the “distinctive features of digital industries, firms, and markets.”

I am left wondering, however, if he’s misdiagnosed the problem – or at least whether the cure he offers would be seen as sufficient by those most vocally asserting that antitrust is failing.  And, of course, I recognize that his objective in writing this book is not to bring harmony to a deeply divided debate, but to offer an improved antitrust framework for navigating big tech.

Petit, in Chapter 5 (“Antitrust in Moligopoly Markets”), says: “So the real question is this: should we abandon, or at least radically alter traditional antitrust principles modeled on rivalry in digital markets? The answer is yes.”  He argues that “protecting rivalry is not perforce socially beneficial in industries with increasing returns to adoption.”  But it is his tethering to the notion of what is “socially beneficial” that creates a challenge.

Petit argues that the function of the current antitrust legal regimes – most significantly the US and EU – is to protect rivalry.  He observes several issues with rivalry when applied as both a test and a remedy for market power.  One of the most valuable insights Petit offers in his impressive work in this book is that tipped markets may not be all that bad.  In fact, when markets exhibit increasing returns to adoption, allowing the winner to take it all (or most) may be more welfare enhancing than trying to do the antitrust equivalent of forcing two magnets to remain apart.  And, assuming all the Schumpeterian dynamics align, he’s right.  Or rather, he’s right if you agree that welfare is the standard by which what is socially beneficial should be measured.

Spoiler alert: My own view is that antitrust requires an underlying system of measurement, and the best available system is welfare-based. More on this below. 

When it comes to evaluating horizontal mergers, Petit suggests an alternative regime calibrated to handle the unique circumstances that arise in tech deals.  But his new framework remains largely tethered to (or at least based in the intuitions of) a variation of the welfare standard that, for the most part, still underlies modern applications of antitrust laws.  So the question becomes: if you alter the means but leave the ends unchanged, do you get different results?  At least in the merger context, I’m not so sure.  And if the results are for the most part the same, do we really need an alternative path to achieving them?  Probably not.

The Petit horizontal merger test (1) applies a non-rebuttable (OMG!) presumption of prohibition on mergers to monopoly by the dominant platform in “tipped markets,” and (2) permits some acquisitions in untipped markets without undue regard to whether the acquiring firm is dominant in another market.  A non-rebuttable presumption, admittedly, elicited heavy-pressure red pen in the margins upon my first read.  Upon further reflection … I still don’t like it. I am, however, somewhat comforted because I suspect that its practical application would land us largely in the same place as current applications of antitrust for at least the vast majority of tech transactions.  And that is because Petit’s presumptive prohibition on mergers in tipped markets doesn’t cancel the fight, it changes the venue.  

The exercise of determining whether or not the market is tipped in effect replicates the exercise of assessing whether the dominant firm has a significant degree of market power, and concludes in the affirmative.  Enforcers around the world already look skeptically at firms with perceived market power when they make horizontal acquisitions (an already rare category of deal).  I recognize that there is theoretical daylight between Petit’s proposed test and one in which the merging parties are permitted an efficiencies defense, but in practice, the number of deals cleared solely on the basis of countervailing procompetitive efficiencies has historically been small.  Thus, the universe of deals swept up in the per se prohibition could easily end up a null set.  (Or at least, I think it should be a null set given how quickly the tech industry evolves and transforms.)

As for the untipped markets, Petit argues that it is “unwarranted to treat firms with monopoly positions in tipped markets more strictly than others when they make indirect entry in untipped markets.”  He further argues that there is “no economic basis to prefer indirect entry by an incumbent firm from a tipped market over entry from (i) a new firm or (ii) an established firm from an untipped market.  Firm type is not determinative of the weight of social welfare brought by a unit of innovation.”  His position is closely aligned with the existing guidance on vertical and conglomerate mergers, including in the recently issued FTC and DOJ Vertical Merger Guidelines, although his discussion offers a far more nuanced perspective on how network effects and the leveraging of market power from one market to another feed into the vertical merger math.  In the end, however, whether one applies the existing vertical merger approach or the Petit proposal, I hypothesize little divergence in outcomes.

All of the above notwithstanding, Petit’s endeavor to devise a framework more closely calibrated to the unique features of tech platforms is admirable, as is the care and thoughtfulness he’s taken to the task.  If the audience for this book takes the view that the core principles of economic welfare should underlie antitrust laws and their application, Petit is likely to find it receptive.  While many (me included) may not think a new regime is necessary, the way that he articulates the challenges presented by platforms and evolving technologies is enlightening even for those who think an old approach can learn new tricks.  And the existing approach has the added benefit of being adaptable to applications outside of tech platforms.

Still, the purpose of antitrust law is where the far more difficult debate is taking place.  And this is where, as I mentioned above, I think Petit may have misdiagnosed the shortcomings of neo-structuralism (or the neo-Brandeisian school, or Antitrust 2.0, or Hipster Antitrust, and so on).  In short, these are frameworks that focus first on the number and size of players in an industry and guard against concentration, even in the absence of a causal link between these structural elements and adverse effects on consumer and/or total welfare.  Petit describes neo-structuralism as focusing on rivalry without having an “evaluative premise” (i.e., an explanation for why big = bad).  I’m less sure that it lacks an evaluative premise; rather, I think it might have several (potentially competing) evaluative premises.

Rivalry indeed has no inherent value; it is good – or perceived as good – as a means to an end.  If that end is consumer welfare, then the limiting principle on when rivalry is achieving its end is whether welfare is enhanced or not.  But many have argued that rivalry could have other potential benefits.  For instance, the Antitrust Subcommittee House Report identifies several potential objectives for competition law: driving innovation and entrepreneurship, privacy, the protection of political and economic liberties, and controlling the influence of private firms over the policymaking process.  Even if we grant that competition could be a means to achieving these ends, the measure of success for competition laws would have to be the degree to which the ends are achieved.  For example, if one argues that competition law should be used to promote privacy, we would measure the success of those laws by whether they do in fact promote privacy, not whether they maintain a certain number of players in an industry.  We should also consider, though, whether competition law really is the most efficient and effective means to those ends.

Returning again to merger control: in the existing US regime, and under the Petit proposal, a dominant tech platform might be permitted to acquire a large player in an unrelated market, assuming there is no augmentation of market power as a result of the interplay between the two and the deal is, on net, efficiency enhancing.  In simpler terms, if consumers are made better off through lower prices, better services, increased innovation, etc., the deal is permitted to proceed.  Yet, if antitrust were calibrated, e.g., for a primary purpose of disaggregating corporate control over capital to minimize political influence by large firms, you could see the same transaction failing to achieve approval.  If privacy were the primary goal, perhaps certain deals would be blocked if the merging parties are both in possession of detailed consumer data, without regard to their size or the existence of other players in the same space.

The failure of neo-structuralism (etc.) is, in my view, also likely the basis for its growing popularity.  Petit argues that the flaw is that it promotes rivalry as an end in itself.  I posit instead that neo-structuralism is flawed because it promotes rivalry as a means and is agnostic to the ends.  As a result, people with strongly differing views on the optimal ends of competition law can appear to agree with one another by agreeing on the means and in doing so, promote a competition law framework that risks being untethered and undisciplined.  In the absence of a clearly articulated policy goal – whether it is privacy, or economic equality, or diluting political influence, or even consumer welfare – there is no basis on which to evaluate whether any given competition law is structured or applied optimally.  If rivalry is to be the means by which we implement our policy goals, how do we know when we have enough rivalry, or too little?  We can’t.  

It is on this point that I think there is more work to undertake in a complete critique of the failings of neo-structuralism (and any other neo-isms to come).  In addition to its other merits, welfare maximization gives us a framework to hold the construct and application of competition law accountable.  It is irresponsible to replace a system that has, as Petit puts it, an “evaluative premise” with one that possesses no ends-based framework for evaluation, leaving the law rudderless and susceptible to arbitrary or even selective enforcement.

Congressman Buck’s “Third Way” report offers a compromise between the House Judiciary Committee’s majority report, which proposes sweeping new regulation of tech companies, and the status quo, which Buck argues is unfair and insufficient. But though Buck rejects many of the majority report’s proposals, what he proposes instead would lead to virtually the same outcome via a slightly longer process.

The most significant majority proposals that Buck rejects are the structural separation to prevent a company that runs a platform from operating on that platform “in competition with the firms dependent on its infrastructure”, and line-of-business restrictions that would confine tech companies to a small number of markets, to prevent them from preferencing their other products to the detriment of competitors.

Buck rules these out, saying that they are “regulatory in nature [and] invite unforeseen consequences and divert attention away from public interest antitrust enforcement by our antitrust agencies.” He goes on to say that “this proposal is a thinly veiled call to break up Big Tech firms.”

Instead, Buck endorses, either fully or provisionally, measures including revitalizing the essential facilities doctrine, imposing data interoperability mandates on platforms, and changing antitrust law to prevent “monopoly leveraging and predatory pricing”.

Put together, though, these would amount to the same thing that the Democratic majority report proposes: a world where platforms are basically just conduits, regulated to be neutral and open, and where the companies that run them require a regulator’s go-ahead for important decisions — a process that would be just as influenced by lobbying and political considerations, and as insulated from market price signals, as any other regulator’s decisions are.

Revitalizing the essential facilities doctrine

Buck describes proposals to “revitalize the essential facilities doctrine” as “common ground” that warrant further consideration. This would mean that platforms deemed to be “essential facilities” would be required to offer access to their platform to third parties at a “reasonable” price, except in exceptional circumstances. The presumption would be that these platforms were anticompetitively foreclosing third party developers and merchants by either denying them access to their platforms or by charging them “too high” prices. 

This would require the kind of regulatory oversight that Buck says he wants to avoid. He says that “conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules.” But there’s no way to avoid this when the “facility” — and hence its pricing and access rules — changes as frequently as any digital platform does. In practice, digital platforms would have to justify their pricing rules and decisions about exclusion of third parties to courts or a regulator as often as they make those decisions.

If Apple’s App Store were deemed an essential facility such that it is presumed to be foreclosing third party developers any time it rejected their submissions, it would have to submit to regulatory scrutiny of the “reasonableness” of its commercial decisions on, literally, a daily basis.

That would likely require price controls to prevent platforms from using pricing to de facto exclude third parties they did not want to deal with. Adjudication of “fair” pricing by courts is unlikely to be a sustainable solution. Justice Breyer, in Town of Concord v. Boston Edison Co., considered this to be outside the courts’ purview:

[H]ow is a judge or jury to determine a ‘fair price?’ Is it the price charged by other suppliers of the primary product? None exist. Is it the price that competition ‘would have set’ were the primary level not monopolized? How can the court determine this price without examining costs and demands, indeed without acting like a rate-setting regulatory agency, the rate-setting proceedings of which often last for several years? Further, how is the court to decide the proper size of the price ‘gap?’ Must it be large enough for all independent competing firms to make a ‘living profit,’ no matter how inefficient they may be? . . . And how should the court respond when costs or demands change over time, as they inevitably will?

In practice, infrastructure treated as an essential facility is usually subject to pricing control by a regulator. This has its own difficulties. The UK’s energy and water infrastructure is an example. In determining optimal access pricing, regulators must determine the price that weighs competing needs to maximise short-term output, incentivise investment by the infrastructure owner, incentivise innovation and entry by competitors (e.g., local energy grids) and, of course, avoid “excessive” pricing. 

This is a near-impossible task, and the process is often drawn out and subject to challenges even in markets where the infrastructure is relatively simple. It is even less likely that these considerations would be objectively tractable in digital markets.

Treating a service as an essential facility is based on the premise that, absent mandated access, it is impossible to compete with it. But mandating access does not, on its own, prevent it from extracting monopoly rents from consumers; it just means that other companies selling inputs can have their share of the rents. 

So you may end up with two different sets of price controls: on the consumer side, to determine how much monopoly rent can be extracted from consumers, and on the access side, to determine how the monopoly rents are divided.

The UK’s energy market has both, for example. In the case of something like an electricity network, where it may simply not be physically or economically feasible to construct a second, competing network, this might be the least-bad course of action. In such circumstances, consumer-side price regulation might make sense. 

But if a service could, in fact, be competed with by others, treating it as an essential facility may be affirmatively harmful to competition and consumers if it diverts investment and time away from that potential competitor by allowing other companies to acquire some of the incumbent’s rents themselves.

The HJC report assumes that Apple is a monopolist, because, among people who own iPhones, the App Store is the only way to install third-party software. Treating the App Store as an essential facility may mean a ban on Apple charging “excessive prices” to companies like Spotify or Epic that would like to use it, or on Apple blocking them for offering users alternative in-app ways of buying their services.

If it were impossible for users to switch from iPhones, or for app developers to earn revenue through other mechanisms, this logic might be sound. But it would still not change the fact that the App Store platform was able to charge users monopoly prices; it would just mean that Epic and Spotify could capture some of those monopoly rents for themselves. Nice for them, but not for consumers. And since both companies have already grown to be pretty big and profitable under the constraints they object to, it is difficult to argue that they cannot compete; it sounds more like they would just like a bigger share of the pie.

And, in fact, it is possible to switch away from the iPhone to Android. I have personally switched back and forth several times over the past few years, for example. And so have many others — despite what some claim, it’s really not that hard, especially now that most important data is stored on cloud-based services, and both companies offer an app to switch from the other. Apple also does not act like a monopolist — its Bionic chips are vastly better than any competitor’s and it continues to invest in and develop them.

So in practice, users switching from iPhone to Android if Epic’s games and Spotify’s music are not available constrains Apple, to some extent. If Apple did drive those services permanently off their platform, it would make Android relatively more attractive, and some users would move away — Apple would bear some of the costs of its ecosystem becoming worse. 

Assuming away this kind of competition, as Buck and the majority report do, is implausible. Not only that, but Buck and the majority believe that competition in this market is impossible — no policy or antitrust action could change things, and all that’s left is to regulate the market like it’s an electricity grid. 

And it means that platforms could often face situations where they could not expect to make themselves profitable after building their markets, since they could not control the supply side in order to earn revenues. That would make it harder to build platforms, and weaken competition, especially competition faced by incumbents.

Mandating interoperability

Interoperability mandates, which Buck supports, require platforms to make their products open and interoperable with third party software. If Twitter were required to be interoperable, for example, it would have to provide a mechanism (probably a set of open APIs) by which third party software could tweet and read its feeds, upload photos, send and receive DMs, and so on. 

Obviously, what interoperability actually involves differs from service to service, and involves decisions about design that are specific to each service. These variations are relevant because they mean interoperability requires discretionary regulation, including about product design, and can’t just be covered by a simple piece of legislation or a court order. 

To give an example: interoperability means a heightened security risk, perhaps from people unwittingly authorising a bad actor to access their private messages. How much is it appropriate to warn users about this, and how tight should your security controls be? It is probably excessive to require that users provide a sworn affidavit with witnesses, and even some written warnings about the risks may be so over the top as to scare off virtually any interested user. But some level of warning and user authentication is appropriate. So how much? 

Similarly, a company that has been required to offer its customers’ data through an API, but doesn’t really want to, can make life miserable for third party services that want to use it. Changing the API without warning, or letting its service drop or slow down, can break other services, and few users will be likely to want to use a third-party service that is unreliable. But some outages are inevitable, and some changes to the API and service are desirable. How do you decide how much?
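
To make this concrete, here is a deliberately hypothetical sketch of the parameters an interoperability mandate would force someone to pin down. None of these names correspond to any real platform’s interface; the point is how many contestable design choices even a “simple” mandate leaves open:

```python
# Purely hypothetical sketch: these field names do not describe any real
# platform's API. Each field is a design decision a statute cannot settle.
from dataclasses import dataclass

@dataclass
class InteropPolicy:
    api_version: str          # who decides when breaking changes are allowed?
    warning_screens: int      # one screen or twelve? (cf. Open Banking below)
    token_lifetime_days: int  # how long may a third party retain access?
    rate_limit_per_hour: int  # what throughput must the platform guarantee?
    min_uptime_pct: float     # how many outages amount to de facto exclusion?

# Every field is a judgment call that a standing regulator, not a one-shot
# statute or court order, would have to make and keep revising.
policy = InteropPolicy("v1", warning_screens=2, token_lifetime_days=90,
                       rate_limit_per_hour=1_000, min_uptime_pct=99.5)
print(policy)
```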

These are not abstract examples. Open Banking in the UK, which requires interoperability of personal and small business current accounts, is the most developed example of interoperability in the world. It has been cited by former Chair of the Council of Economic Advisors, Jason Furman, among others, as a model for interoperability in tech. It has faced all of these questions: one bank, for instance, required that customers pass through twelve warning screens to approve a third party app to access their banking details.

To address problems like this, Open Banking has needed an “implementation entity” to design many of its most important elements. This is a de facto regulator, and it has taken years of difficult design decisions to arrive at Open Banking’s current form. 

Having helped write the UK’s industry review into Open Banking, I am cautiously optimistic about what it might be able to do for banking in Britain, not least because that market is already heavily regulated and lacking in competition. But it has been a huge undertaking, and has related to a relatively narrow set of data (its core is just two different things — the ability to read an account’s balance and transaction history, and the ability to initiate payments) in a sector that is not known for rapidly changing technology. Here, the costs of regulation may be outweighed by the benefits.

I am deeply sceptical that the same would be the case in most digital markets, where products do change rapidly, where new entrants frequently attempt to enter the market (and often succeed), where the security trade-offs are even more difficult to adjudicate, and where the economics are less straightforward, given that many services are provided at least in part because of the access to customer data they provide. 

Even if I am wrong, it is unavoidable that interoperability in digital markets would require an equivalent body to make and implement decisions when trade-offs are involved. This, again, would require a regulator like the UK’s implementation entity, and one that was enormous, given the number and diversity of services that it would have to oversee. And it would likely have to make important and difficult design decisions to which there is no clear answer. 

Banning self-preferencing

Buck’s Third Way would also ban digital platforms from self-preferencing. This typically involves an incumbent that can provide a good more cheaply than its third-party competitors — whether it’s through use of data that those third parties do not have access to, reputational advantages that mean customers will be more likely to use their products, or through scale efficiencies that allow it to provide goods to a larger customer base for a cheaper price. 

Although many people criticise self-preferencing as being unfair on competitors, “self-preferencing” is an inherent part of almost every business. When a company employs its own in-house accountants, cleaners or lawyers, instead of contracting out for them, it is engaged in internal self-preferencing. Any firm that is vertically integrated to any extent, instead of contracting externally for every single ancillary service other than the one it sells in the market, is self-preferencing. Coase’s theory of the firm is all about why this kind of behaviour happens, instead of every worker contracting on the open market for everything they do. His answer is that transaction costs make it cheaper to bring certain business relationships in-house than to contract externally for them. Virtually everyone agrees that this is desirable to some extent.

Nor does it somehow become a problem when the self-preferencing takes place on the consumer product side. Any firm that offers any bundle of products — like a smartphone that can run only the manufacturer’s operating system — is engaged in self-preferencing, because users cannot construct their own bundle with that company’s hardware and another’s operating system. But the efficiency benefits often outweigh the lack of choice.

Self-preferencing in digital platforms occurs, for example, when Google includes relevant Shopping or Maps results at the top of its general Search results, or when Amazon gives its own store-brand products (like the AmazonBasics range) a prominent place in the results listing.

There are good reasons to think that both of these are good for competition and consumer welfare. Google making Shopping results easily visible makes it a stronger competitor to Amazon, and including Maps results when you search for a restaurant just makes it more convenient to get the information you’re looking for.

Amazon sells its own private label products partially because doing so is profitable (even when undercutting rivals), partially to fill holes in product lines (like clothing, where 11% of listings were Amazon private label as of November 2018), and partially because it increases the likelihood that users will shop on Amazon if they expect to find a reliable product from a brand they trust. According to Amazon, private label products account for less than 1% of its annual retail sales, in contrast to the 19% of revenues ($54 billion) Amazon makes from third party seller services, which includes Marketplace commissions. Any analysis that ignores that Amazon has to balance those sources of revenue, and so has to tread carefully, is deficient.
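
A back-of-envelope calculation, using only the figures just quoted and treating the sub-1% retail-sales figure as a rough ceiling on private-label revenue, illustrates the balance:

```python
# Back-of-envelope check on the incentive claim above, using only the
# figures quoted in the text (rounded; not an audited breakdown).
third_party_services = 54e9  # revenue from third-party seller services...
third_party_share = 0.19     # ...said to be 19% of Amazon's revenues

implied_total = third_party_services / third_party_share
print(f"Implied total revenue: ${implied_total / 1e9:.0f}B")  # ~$284B

# Private label is "less than 1% of annual retail sales"; even on generous
# assumptions that is a small fraction of what third-party sellers generate,
# so squeezing those sellers out would be self-defeating.
private_label_ceiling = 0.01 * implied_total
print(f"Private label at most ~${private_label_ceiling / 1e9:.0f}B vs "
      f"${third_party_services / 1e9:.0f}B from seller services")
```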

With “commodity” products (like, say, batteries and USB cables), where multiple sellers are offering very similar or identical versions of the same thing, private label competition works well for both Amazon and consumers. By Amazon’s own rules it can enter this market using aggregated data, but this doesn’t give it a significant advantage, because that data is easily obtainable from multiple sources, including Amazon itself, which makes detailed aggregated sales data freely available to third-party retailers.

Amazon does profit from sales of these products, of course. And other merchants suffer by having to cut their prices to compete. That’s precisely what competition involves — competition is incompatible with a quiet life for businesses. But consumers benefit, and the biggest benefit to Amazon is that it assures its potential customers that when they visit they will be able to find a product that is cheap and reliable, so they keep coming back.

It is even hard to argue that in aggregate this practice is damaging to third-party sellers: many, like Anker, have built successful businesses on Amazon despite private-label competition precisely because the value of the platform increases for all parties as user trust and confidence in it does.

In these cases and in others, platforms act to solve market failures on the markets they host, as Andrei Hagiu has argued. To maximize profits, digital platforms need to strike a balance between being an attractive place for third-party merchants to sell their goods and being attractive to consumers by offering low prices. The latter will frequently clash with the former — and that’s the difficulty of managing a platform. 

To mistake this pro-competitive behaviour for an absence of competition is misguided. But that is a key conclusion of Buck’s Third Way: that the damage to competitors makes this behaviour harmful overall, and that it should be curtailed with “non-discrimination” rules.

Treating below-cost selling as “predatory pricing”

Buck’s report equates below-cost selling with predatory pricing (“predatory pricing, also known as below-cost selling”). This is mistaken. Predatory pricing refers to a particular scenario where your price cut is temporary and designed to drive a competitor out of business, so that you can raise prices later and recoup your losses. 

It is easy to see that this does not describe the vast majority of below-cost selling. Buck’s formulation would describe all of the following as “predatory pricing”:

  • A restaurant that gives away ketchup for free;
  • An online retailer that offers free shipping and returns;
  • A grocery store that sells tins of beans for 3p a can. (This really happened when I was a child.)

The rationale for offering below-cost prices differs in each of these cases. Sometimes it’s a marketing ploy — Tesco sells those beans to get some free media, and to entice people into their stores, hoping they’ll decide to do the rest of their weekly shop there at the same time. Sometimes it’s about reducing frictions — the marginal cost of ketchup is so low that it’s simpler to just give it away. Sometimes it’s about reducing the fixed costs of transactions so more take place — allowing customers who buy your products to return them easily may mean more are willing to buy them overall, because there’s less risk for them if they don’t like what they buy. 

Obviously, none of these is “predatory”: none is done in the expectation that the below-cost selling will drive those businesses’ competitors out of business, allowing them to make monopoly profits later.

True predatory pricing is theoretically possible, but very difficult. As David Henderson describes, to successfully engage in predatory pricing means taking enormous and rising losses that grow for the “predatory” firm as customers switch to it from its competitor. And once the rival firm has exited the market, if the predatory firm raises prices above average cost (i.e., to recoup its losses), there is no guarantee that a new competitor will not enter the market selling at the previously competitive price. And the competing firm can either shut down temporarily or, in some cases, just buy up the “predatory” firm’s discounted goods to resell later. It is debatable whether the canonical predatory pricing case, Standard Oil, is itself even an example of that behaviour.
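
A stylized sketch, with invented numbers, shows why the predator’s losses grow with its success:

```python
# Stylized illustration of why predation is costly; all numbers invented.
# Selling below cost loses (cost - price) on every unit, and winning
# customers from the rival only increases the units sold at a loss.
cost, price = 10.0, 7.0   # pricing $3 below unit cost
market_demand = 1_000     # units demanded per period

for predator_share in (0.4, 0.7, 1.0):
    units = market_demand * predator_share
    loss = (cost - price) * units
    print(f"share {predator_share:.0%}: loss ${loss:,.0f} per period")
# share 40%: $1,200; share 70%: $2,100; share 100%: $3,000 --
# the "successful" predator's losses grow with its success. Recoupment
# then requires raising price above cost without attracting re-entry.
```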

Offering a product below cost in a multi-sided market (like a digital platform) can be a way of building a customer base in order to incentivise entry on the other side of the market. When network effects exist, so additional users make the service more valuable to existing users, it can be worthwhile to subsidise the initial users until the service reaches a certain size. 

Uber subsidising drivers and riders in a new city is an example of this — riders want enough drivers on the road that they know they’ll be picked up fairly quickly if they order one, and drivers want enough riders that they know they’ll be able to earn a decent night’s fares if they use the app. This requires a certain volume of users on both sides — to get there, it can be in everyone’s interest for the platform to subsidise one or both sides of the market to reach that critical mass.
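
A minimal sketch of that critical-mass logic, with invented parameters (the function and coefficient below are illustrative only, not a model of Uber’s pricing):

```python
# Minimal two-sided-market sketch: each rider's value from the app
# grows with the number of drivers on the other side of the market.
def rider_value(drivers: int) -> float:
    return 0.01 * drivers  # hypothetical network-effect coefficient

fare_needed = 5.0  # what the platform must eventually charge to break even

print(rider_value(100))  # 1.0 < 5.0: too few drivers; unsubsidised riders leave
print(rider_value(800))  # 8.0 > 5.0: past critical mass, subsidies can stop

# Below critical mass the platform must subsidise one or both sides;
# above it, the network sustains itself and earlier losses can be recouped.
```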

The slightly longer road to regulation

That is another reason for below-cost pricing: someone other than the user may be part-paying for a product, to build a market they hope to profit from later. Platforms must adjust pricing and their offerings to each side of their market to manage supply and demand. Epic, for example, is trying to build a desktop computer game store to rival the largest incumbent, Steam. To win over customers, it has been giving away games for free to users, who can own them on that store forever. 

That is clearly pro-competitive — Epic is hoping to get users over the habit of using Steam for all their games, in the hope that it will recoup the costs of doing so later in increased sales. And it is good for consumers to get free stuff. This kind of behaviour is very common. As well as Uber and Epic, smaller platforms do it too.

Buck’s proposals would make this kind of behaviour much more difficult, and permitted only if a regulator or court allows it, instead of if the market can bear it. On both sides of the coin, Buck’s proposals would prevent platforms from the behaviour that allows them to grow in the first place — enticing suppliers and consumers and subsidising either side until critical mass has been reached that allows the platform to exist by itself, and the platform owner to recoup its investments. Fundamentally, both Buck and the majority take the existence of platforms as a given, ignoring the incentives to create new ones and compete with incumbents. 

In doing so, they give up on competition altogether. As described, Buck’s provisions would necessitate ongoing rule-making, including price controls, to work. It is unlikely that a court could do this, since the relevant costs would change too often for one-shot rule-making of the kind a court could do. To be effective at all, Buck’s proposals would require an extensive, active regulator, just as the majority report’s would. 

Buck nominally argues against this sort of outcome — “Conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules” — but it is probably unavoidable, given the changes he proposes. And because the rule changes he proposes would apply to the whole economy, not just tech, his proposals may, perversely, end up being even more extensive and interventionist than the majority’s.

Other than this, the differences in practice between Buck’s proposals and the Democrats’ proposals would be trivial. At best, Buck’s Third Way is just a longer route to the same destination.

In the hands of a wise philosopher-king, the Sherman Act’s hard-to-define prohibitions of “restraints of trade” and “monopolization” are tools that will operate inevitably to advance the public interest in competitive markets. In the hands of real-world litigators, regulators and judges, those same words can operate to advance competitors’ private interests in securing commercial advantages through litigation that could not be secured through competition in the marketplace. If successful, this strategy may yield outcomes that run counter to antitrust law’s very purpose.

The antitrust lawsuit filed by Epic Games against Apple in August 2020, and Apple’s antitrust lawsuit against Qualcomm (settled in April 2019), suggest that antitrust law is heading in this unfortunate direction.

From rent-minimization to rent-maximization

The first step in converting antitrust law from an instrument to minimize rents to an instrument to maximize rents lies in expanding the statute’s field of application on the apparently uncontroversial grounds of advancing the public interest in “vigorous” enforcement. In surprisingly short order, this largely unbounded vision of antitrust’s proper scope has become the dominant fashion in policy discussions, at least as expressed by some legislators, regulators, and commentators.

Following the new conventional wisdom, antitrust law has pursued over the past decades an overly narrow path, consequently overlooking and exacerbating a panoply of social ills that extend well beyond the mission to “merely” protect the operation of the market pricing mechanism. This line of argument is typically coupled with the assertion that courts, regulators and scholars have been led down this path by incumbents that welcome the relaxed scrutiny of a purportedly deferential antitrust policy.

This argument, and the related theory of regulatory capture, has things roughly backwards.

Placing antitrust law at the service of a largely undefined range of social purposes set by judicial and regulatory fiat threatens to render antitrust a tool that can be easily deployed to favor the private interests of competitors rather than the public interest in competition. Without the intellectual discipline imposed by the consumer welfare standard (and, outside of per se illegal restraints, operationalized through the evidentiary requirement of competitive harm), the rhetoric of antitrust provides excellent cover for efforts to re-engineer the rules of the game in lieu of seeking to win the game as it has been played.

Epic Games v. Apple

A nascent symptom of this expansive form of antitrust is provided by the much-publicized lawsuit brought by Epic Games, the maker of the wildly popular video game, Fortnite, against Apple, the operator of the even more wildly popular App Store. On August 13, 2020, Epic added a “direct” payment processing services option to its Fortnite game, in violation of the developer terms of use that govern the App Store. In response, Apple exercised its contractual right to remove Fortnite from the App Store, triggering Epic’s antitrust suit. The same sequence has ensued between Epic Games and Google in connection with the Google Play Store. Both litigations are best understood as breach-of-contract disputes cloaked in the guise of antitrust causes of action.

In suggesting that a jury trial would be appropriate in Epic Games’ suit against Apple, the district court judge reportedly stated that the case is “on the frontier of antitrust law” and “[i]t is important enough to understand what real people think.” That statement seems to suggest that this is a close case under antitrust law. I respectfully disagree. Based on currently available information and applicable law, Epic’s argument suffers from two serious vulnerabilities that would seem to be difficult for the plaintiff to overcome.

A contestably narrow market definition

Epic states three related claims: (1) Apple has a monopoly in the relevant market, defined as the App Store, (2) Apple maintains its monopoly by contractually precluding developers from distributing iOS-compatible versions of their apps outside the App Store, and (3) Apple maintains a related monopoly in the payment processing services market for the App Store by contractually requiring developers to use Apple’s processing service.

This market definition, and the associated chain of reasoning, is subject to significant doubt, both as a legal and factual matter.

Epic’s narrow definition of the relevant market as the App Store (rather than app distribution platforms generally) conveniently results in a 100% market share for Apple. Inconveniently, federal case law is generally reluctant to adopt single-brand market definitions. While the Supreme Court recognized in 1992 a single-brand market in Eastman Kodak Co. v. Image Technical Services, the case is widely considered to be an outlier in light of subsequent case law. As a federal district court observed in Spahr v. Leegin Creative Leather Products (E.D. Tenn. 2008): “Courts have consistently refused to consider one brand to be a relevant market of its own when the brand competes with other potential substitutes.”

The App Store would seem to fall into this typical category. The customer base of existing and new Fortnite users can still access the game through multiple platforms and on multiple devices other than the iPhone, including a PC, laptop, game console, and non-Apple mobile devices. (While Google has also removed Fortnite from the Google Play store due to the added direct payment feature, users can, at some inconvenience, access the game manually on Android phones.)

Given these alternative distribution channels, it is at a minimum unclear whether Epic is foreclosed from reaching a substantial portion of its consumer base, which may already access the game on alternative platforms or could potentially do so at moderate incremental transaction costs. In the language of platform economics, it appears to be technologically and economically feasible for the target consumer base to “multi-home.” If multi-homing and related switching costs are low, even a 100% share of the App Store submarket would not translate into market power in the broader and potentially more economically relevant market for app distribution generally.

An implausible theory of platform lock-in

Even if it were conceded that the App Store is the relevant market, Epic’s claim is not especially persuasive, both as an economic and a legal matter. That is because there is no evidence that Apple is exploiting any such hypothetically attributed market power to increase the rents extracted from developers and indirectly impose deadweight losses on consumers.

In the classic scenario of platform lock-in, a three-step sequence is observed: (1) a new firm acquires a high market share in a race for platform dominance, (2) the platform winner is protected by network effects and switching costs, and (3) the entrenched platform “exploits” consumers by inflating prices (or imposing other adverse terms) to capture monopoly rents. This economic model is reflected in the case law on lock-in claims, which typically requires that the plaintiff identify an adverse change by the defendant in pricing or other terms after users were allegedly locked-in.

The history of the App Store does not conform to this model. Apple has always assessed a 30% fee and the same is true of every other leading distributor of games for the mobile and PC market, including Google Play Store, App Store’s rival in the mobile market, and Steam, the dominant distributor of video games in the PC market. This long-standing market practice suggests that the 30% fee is most likely motivated by an efficiency-driven business motivation, rather than seeking to entrench a monopoly position that Apple did not enjoy when the practice was first adopted. That is: even if Apple is deemed to be a “monopolist” for Section 2 purposes, it is not taking any “illegitimate” actions that could constitute monopolization or attempted monopolization.

The logic of the 70/30 split

Uncovering the business logic behind the 70/30 split in the app distribution market is not too difficult.

The 30% fee appears to be a low transaction-cost practice that enables the distributor to fund a variety of services, including app development tools, marketing support, and security and privacy protections, all of which are supplied at no separately priced fee and therefore do not require service-by-service negotiation and renegotiation. The same rationale credibly applies to the integrated payment processing services that Apple supplies for purposes of in-app purchases.

These services deliver significant value and would otherwise be difficult to replicate cost-effectively, protect the App Store’s valuable stock of brand capital (which yields positive spillovers for app developers on the site), and lower the costs of joining and participating in the App Store. Additionally, the 30% fee cross-subsidizes the delivery of these services to the approximately 80% of apps on the App Store that are ad-based and for which no fee is assessed, which in turn lowers entry costs and expands the number and variety of product options for platform users. These would all seem to be attractive outcomes from a competition policy perspective.
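
Some stylized arithmetic shows how this cross-subsidy works. Only the 30% commission and the roughly 80% share of fee-free, ad-based apps come from the discussion above; every other figure is hypothetical:

```python
# Stylized cross-subsidy arithmetic; only the 30% commission and the ~80%
# fee-free share come from the text. All other figures are invented.
total_apps = 1_000_000
fee_free_share = 0.80                     # ad-based apps paying no commission
paying_apps = total_apps * (1 - fee_free_share)

avg_fee_base = 50_000  # hypothetical annual paid/in-app revenue per paying app
commission = 0.30

fee_income = paying_apps * avg_fee_base * commission
print(f"Per-app services funding: ${fee_income / total_apps:,.0f}")  # $3,000

# The paying minority underwrites developer tools, review, and distribution
# for the fee-free majority, lowering entry costs across the whole store.
```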

Epic’s objection

Epic would object to this line of argument by observing that it only charges a 12% fee to distribute other developers’ games on its own Epic Games Store.

Yet Epic’s lower fee is reportedly conditioned, at least in some cases, on the developer offering the game exclusively on the Epic Games Store for a certain period of time. Moreover, the services provided on the Epic Games Store may not be comparable to the extensive suite of services provided on the App Store and other leading distributors that follow the 30% standard. Additionally, the user base a developer can expect to access through the Epic Games Store is in all likelihood substantially smaller than the audience that can be reached through the App Store and other leading app and game distributors, which is then reflected in the higher fees charged by those platforms.

Hence, even the large fee differential may simply reflect the higher services and larger audiences available on the App Store, Google Play Store and other leading platforms, as compared to the Epic Games Store, rather than the unilateral extraction of market rents at developers’ and consumers’ expense.
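
Simple developer’s-eye arithmetic makes the point. Only the 30% and 12% rates come from the text; the audience and spend figures below are hypothetical:

```python
# Developer's-eye arithmetic on the fee gap; audiences and per-user spend
# are invented for illustration. Only the 30% and 12% fees are from the text.
def developer_net(audience: int, spend_per_user: float, fee: float) -> float:
    return audience * spend_per_user * (1 - fee)

spend = 10.0
print(developer_net(1_000_000, spend, fee=0.30))  # 7,000,000.0 on the big store
print(developer_net(700_000, spend, fee=0.12))    # 6,160,000.0 on the small one

# Indifference point: the smaller store must deliver at least
# (1 - 0.30) / (1 - 0.12) ~= 79.5% of the bigger store's audience/spend
# before its lower fee leaves the developer better off.
print((1 - 0.30) / (1 - 0.12))  # ~0.795
```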

Antitrust is about efficiency, not distribution

Epic says the standard 70/30 split between game publishers and app distributors is “excessive,” while others argue that it is historically outdated.

Neither of these is a credible antitrust argument. Renegotiating the division of economic surplus between game suppliers and distributors is not the concern of antitrust law, which (as properly defined) should only take an interest if either (i) Apple is colluding on the 30% fee with other app distributors, or (ii) Apple is taking steps that preclude entry into the apps distribution market and lack any legitimate business justification. No one claims evidence for the former possibility and, without further evidence, the latter possibility is not especially compelling given the uniform use of the 70/30 split across the industry (which, as noted, can be derived from a related set of credible efficiency justifications). It is even less compelling in the face of evidence that output is rapidly accelerating, not declining, in the gaming app market: in the first half of 2020, approximately 24,500 new games were added to the App Store.

If this conclusion is right, then Epic’s lawsuit against Apple does not seem to have much to do with the public interest in preserving market competition.

But it clearly has much to do with the business interest of an input supplier in minimizing its distribution costs and maximizing its profit margin. That category includes not only Epic Games but Tencent, the world’s largest video game publisher and the holder of a 40% equity stake in Epic. Tencent also owns Riot Games (the publisher of “League of Legends”), an 84% stake in Supercell (the publisher of “Clash of Clans”), and a 5% stake in Activision Blizzard (the publisher of “Call of Duty”). It is unclear how an antitrust claim that, if successful, would simply redistribute economic value from leading game distributors to leading game developers has any necessary relevance to antitrust’s objective to promote consumer welfare.

The prequel: Apple v. Qualcomm

Ironically (and, as Dirk Auer has similarly observed), there is a symmetry between Epic’s claims against Apple and the claims previously pursued by Apple (and, concurrently, the Federal Trade Commission) against Qualcomm.

In that litigation, Apple contested the terms of the licensing arrangements under which Qualcomm made available its wireless communications patents to Apple (more precisely, Foxconn, Apple’s contract manufacturer), arguing that the terms were incompatible with Qualcomm’s commitment to “fair, reasonable and nondiscriminatory” (“FRAND”) licensing of its “standard-essential” patents (“SEPs”). Like Epic v. Apple, Apple v. Qualcomm was fundamentally a contract dispute, with the difference that Apple was in the position of a third-party beneficiary of the commitment that Qualcomm had made to the governing standard-setting organization. Like Epic, Apple sought to recharacterize this contractual dispute as an antitrust question, arguing that Qualcomm’s licensing practices constituted anticompetitive actions to “monopolize” the market for smartphone modem chipsets.

Theory meets evidence

The rhetoric used by Epic in its complaint echoes the rhetoric used by Apple in its briefs and other filings in the Qualcomm litigation. Apple (like the FTC) had argued that Qualcomm imposed a “tax” on competitors by requiring that any purchaser of Qualcomm’s chipsets concurrently enter into a license for Qualcomm’s SEP portfolio relating to 3G and 4G/LTE-enabled mobile communications devices.

Yet the history and performance of the mobile communications market simply did not track Apple’s (and the FTC’s continuing) characterization of Qualcomm’s licensing fee as a socially costly drag on market growth and, by implication, consumer welfare.

If this assertion had merit, then the decades-old wireless market should have exhibited a dismal history of increasing prices, slow user adoption and lagging innovation. In actuality, the wireless market since its inception has grown continuously, characterized by declining quality-adjusted prices, expanding output, relentless innovation, and rapid adoption across a broad range of income segments.

Given this compelling real-world evidence, the only remaining line of argument (still being pursued by the FTC) that could justify antitrust intervention is a theoretical conjecture that the wireless market might have grown even faster under some alternative IP licensing arrangement. This assertion rests precariously on the speculative assumption that any such arrangement would have induced the same or higher level of aggregate investment in innovation and commercialization activities. That fragile chain of “what if” arguments hardly seems a sound basis on which to rewrite the legal infrastructure behind the billions of dollars of licensing transactions that support the economically thriving smartphone market and the even larger ecosystem that has grown around it.

Antitrust litigation as business strategy

Given the absence of compelling evidence of competitive harm from Qualcomm’s allegedly anticompetitive licensing practices, Apple’s litigation would seem to be best interpreted as an economically rational attempt by a downstream producer to renegotiate a downward adjustment in the fees paid to an upstream supplier of critical technology inputs. (In fact, those are precisely the terms on which Qualcomm in 2015 settled the antitrust action brought against it by China’s competition regulator, to the obvious benefit of local device producers.) The Epic Games litigation is a mirror image fact pattern in which an upstream supplier of content inputs seeks to deploy antitrust law strategically for the purposes of minimizing the fees it pays to a leading downstream distributor.

Both litigations suffer from the same flaw. Private interests concerning the division of an existing economic value stream—a business question that is a matter of indifference from an efficiency perspective—are erroneously (or, at least, reflexively) conflated with the public interest in preserving the free play of competitive forces that maximizes the size of the economic value stream.

Conclusion: Remaking the case for “narrow” antitrust

The Epic v. Apple and Apple v. Qualcomm disputes illustrate the unproductive rent-seeking outcomes to which antitrust law will inevitably be led if, as is being widely advocated, it is decoupled from its well-established foundation in promoting consumer welfare—and not competitor welfare.

Some proponents of a more expansive approach to antitrust enforcement are convinced that expanding the law’s scope of application will improve market efficiency by providing greater latitude for expert regulators and courts to reengineer market structures to the public benefit. Yet any substitution of top-down expert wisdom for the bottom-up trial-and-error process of market competition can easily yield “false positives” in which courts and regulators take actions that counterproductively intervene in markets that are already operating under reasonably competitive conditions. Additionally, an overly expansive approach toward the scope of antitrust law will induce private firms to shift resources toward securing advantages over competitors through lobbying and litigation, rather than seeking to win the race to deliver lower-cost and higher-quality products and services. Neither outcome promotes the public’s interest in a competitive marketplace.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Peter Klein (Professor of Entrepreneurship, Baylor University).
]

Nicolas Petit’s insightful and provocative book ends with a chapter on “Big Tech’s Novel Harms,” asking whether antitrust is the appropriate remedy for popular (and academic) concerns about privacy, fake news, and hate speech. In each case, he asks whether the alleged harms are caused by a lack of competition among platforms – which could support a case for breaking them up – or by the nature of the underlying technologies and business models. He concludes that these problems are not alleviated (and may even be exacerbated) by applying competition policy and suggests that regulation, not antitrust, is the more appropriate tool for protecting privacy and truth.

What kind of regulation? Treating digital platforms like public utilities won’t work, Petit argues, because the product is multidimensional and competition takes place on multiple margins (the larger theme of the book): “there is a plausible chance that increased competition in digital markets will lead to a race to the bottom, in which price competition (e.g., on ad markets) will be the winner, and non-price competition (e.g., on privacy) will be the loser.” Utilities regulation also provides incentives for rent-seeking by less efficient rivals. Retail regulation, aimed at protecting small firms, may end up helping incumbents instead by raising rivals’ costs.

Petit concludes that consumer protection regulation (such as Europe’s GDPR) is a better tool for guarding privacy and truth, though it poses challenges as well. More generally, he highlights the vast gulf between the economic analysis of privacy and speech and the increasingly loud calls for breaking up the big tech platforms, which would do little to alleviate these problems.

As in the rest of the book, Petit’s treatment of these complex issues is thoughtful, careful, and systematic. I have more fundamental problems with conventional antitrust remedies and think that consumer protection is problematic when applied to data services (even more so than in other cases). Inspired by this chapter, let me offer some additional thoughts on privacy and the nature of data which speak to regulation of digital platforms and services.

First, privacy, like information, is not an economic good. Just as we don’t buy and sell information per se but information goods (books, movies, communications infrastructure, consultants, training programs, etc.), we likewise don’t produce and consume privacy but what we might call privacy goods: sunglasses, disguises, locks, window shades, land, fences and, in the digital realm, encryption software, cookie blockers, data scramblers, and so on.

Privacy goods and services can be analyzed just like other economic goods. Entrepreneurs offer bundled services that come with varying degrees of privacy protection: encrypted or regular emails, chats, voice and video calls; browsers that block cookies or don’t; social media sites, search engines, etc. that store information or not; and so on. Most consumers seem unwilling to sacrifice other functionality for increased privacy, as suggested by the small market shares held by DuckDuckGo, Telegram, Tor, and the like. Moreover, while privacy per se is appealing, there are huge efficiency gains from matching on buyer and seller characteristics on sharing platforms, digital marketplaces, and dating sites. There are also substantial cost savings from electronic storage and sharing of private information such as medical records and credit histories. And there is little evidence of sellers exploiting such information to engage in price discrimination. (Acquisti, Taylor, and Wagman, 2016 provide a detailed discussion of many of these issues.)

Regulating markets for privacy goods via bans on third-party access to customer data, mandatory data portability, and stiff penalties for data breaches is tricky. Such policies could make digital services more valuable, but it is not obvious why the market cannot figure this out. If consumers are willing to pay for additional privacy, entrepreneurs will be eager to supply it. Of course, bans on third-party access and other forms of sharing would require a fundamental change in the ad-based revenue model that makes free or low-cost access possible, so platforms would have to devise other means of monetizing their services. (Again, many platforms already offer ad-free subscriptions, so it’s unclear why those who prefer ad-based, free usage should be prevented from doing so.)

What about the idea that I own “my” data and that, therefore, I should have full control over how it is used? Some of the utilities-based regulatory models treat platforms as neutral storage places or conduits for information belonging to users. Proposals for data portability suggest that users of technology platforms should be able to move their data from platform to platform, downloading all their personal information from one platform then uploading it to another, then enjoying the same functionality on the new platform as longtime users.

Of course, there are substantial technical obstacles to such proposals. Data would have to be stored in a universal format – not just the text or media users upload to platforms, but also records of all interactions (likes, shares, comments), the search and usage patterns of users, and any other data generated as a result of the user’s actions and interactions with other users, advertisers, and the platform itself. It is unlikely that any universal format could capture this information in a form that could be transferred from one platform to another without a substantial loss of functionality, particularly for platforms that use algorithms to determine how information is presented to users based on past use. (The extreme case is a platform like TikTok, which uses usage patterns, as a substitute for follows, likes, and shares, to construct a user’s “feed”.)
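To make the obstacle concrete, here is a deliberately simplified sketch of what a single user’s export might contain. Every field name below is invented for illustration; the point is only that the different buckets of “my data” differ sharply in how well they survive a transfer.

```python
# Hypothetical export record; all field names are invented for illustration.
user_export = {
    # Content the user authored: the easy part to port.
    "posts": [
        {"text": "Hello world", "timestamp": "2020-06-01T12:00:00Z"},
    ],
    # Co-created interaction records: they reference other users and
    # platform objects that exist only on the original platform.
    "interactions": [
        {"type": "like", "target_post_id": "abc123", "by_user_id": "u-987"},
        {"type": "share", "target_post_id": "def456", "audience": "friends"},
    ],
    # Platform-derived data: produced by the platform's own algorithms,
    # and largely meaningless to a rival that ranks content differently.
    "derived": {
        "inferred_interests": ["cycling", "jazz"],
        "feed_ranking_features": {"dwell_time_score": 0.73},
    },
}

# Only the first bucket transfers cleanly; the other two lose most of
# their meaning once detached from the platform that generated them.
for bucket, value in user_export.items():
    print(bucket, "->", value)
```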

Moreover, as each platform sets its own rules for what information is allowed, the import functionality would have to screen the data for information allowed on the original platform but not the new (and the reverse would be impossible – a user switching from Twitter to Gab, for instance, would have no way to add the content that would have been permitted on Gab but was never created in the first place because it would have violated Twitter rules).

There is a deeper, philosophical issue at stake, however. Portability and neutrality proposals take for granted that users own “their” data. Users create data, either by themselves or with their friends and contacts, and the platform stores and displays the data, just as a safe deposit box holds documents or jewelry and a display case shows off an art collection. I should be able to remove my items from the safe deposit box and take them home or to another bank, and a “neutral” display case operator should not prevent me from showing off my preferred art (perhaps subject to some general rules about obscenity or harmful personal information).

These analogies do not hold for user-generated information on internet platforms, however. “My data” is a record of all my interactions with platforms, with other users on those platforms, with contractual partners of those platforms, and so on. It is co-created by these interactions. I don’t own these records any more than I “own” the fact that someone saw me in the grocery store yesterday buying apples. Of course, if I have a contract with the grocer that says he will keep my purchase records private, and he shares them with someone else, then I can sue him for breach of contract. But this isn’t theft. He hasn’t “stolen” anything; there is nothing for him to steal. If a grocer — or an owner of a tech platform — wants to attract my business by monetizing the records of our interactions and giving me a cut, he should go for it. I still might prefer another store. In any case, I don’t have the legal right to demand this revenue stream.

Likewise, “privacy” refers to what other people know about me – it is knowledge in their heads, not mine. Information isn’t property. If I know something about you, that knowledge is in my head; it’s not something I took from you. Of course, if I obtained or used that info in violation of a prior agreement, then I’m guilty of breach, and if I use that information to threaten or harass you, I may be guilty of other crimes. But the popular idea that tech companies are stealing and profiting from something that’s “ours” isn’t right.

The concept of co-creation is important, because these digital records, like other co-created assets, can be more or less relationship specific. The late Oliver Williamson devoted his career to exploring the rich variety of contractual relationships devised by market participants to solve complex contracting problems, particularly in the face of asset specificity. Relationship-specific investments can be difficult for trading parties to manage, but they typically create more value. A legal regime in which only general-purpose, easily redeployable technologies were permitted would alleviate the holdup problem, but at the cost of a huge loss in efficiency. Likewise, a world in which all digital records must be fully portable reduces switching costs, but results in technologies for creating, storing, and sharing information that are less valuable. Why would platform operators invest in efficiency improvements if they cannot capture some of that value by means of proprietary formats, interfaces, sharing rules, and other arrangements?  

In short, we should not be quick to assume “market failure” in the market for privacy goods (or “true” news, whatever that is). Entrepreneurs operating in a competitive environment – not the static, partial-equilibrium notion of competition from intermediate micro texts but the rich, dynamic, complex, and multimarket kind of competition described in Petit’s book – can provide the levels of privacy and truthiness that consumers prefer.

What is a search engine?

Dirk Auer —  21 October 2020

What is a search engine? This might seem like an innocuous question, but it lies at the heart of the antitrust complaint brought against Google by the US Department of Justice and state attorneys general, as well as the European Commission’s Google Search and Android decisions. It is also central to a report published by the UK’s Competition & Markets Authority (“CMA”). To varying degrees, all of these proceedings are premised on the assumption that Google enjoys a monopoly/dominant position over online search. But things are not quite this simple.

Despite years of competition decisions and policy discussions, there are still many unanswered questions concerning the operation of search markets. For example, it is still unclear exactly which services compete against Google Search, and how this might evolve in the near future. Likewise, there has only been limited scholarly discussion as to how a search engine monopoly would exert its market power. In other words, what does a restriction of output look like on a search platform — particularly on the user side?

Answering these questions will be essential if authorities wish to successfully bring an antitrust suit against Google for conduct involving search. Indeed, as things stand, these uncertainties greatly complicate efforts (i) to rigorously define the relevant market(s) in which Google Search operates, (ii) to identify potential anticompetitive effects, and (iii) to apply the quantitative tools that usually underpin antitrust proceedings.

In short, as explained below, antitrust authorities and other plaintiffs have their work cut out if they are to prevail in court.

Consumers demand information 

For a start, identifying the competitive constraints faced by Google presents authorities and plaintiffs with an important challenge.

Even proponents of antitrust intervention recognize that the market for search is complex. For instance, the DOJ and state AGs argue that Google dominates a narrow market for “general search services” — as opposed to specialized search services, content sites, social networks, online marketplaces, etc. The EU Commission reached the same conclusion in its Google Search decision. Finally, commenting on the CMA’s online advertising report, Fiona Scott Morton and David Dinielli argue that: 

General search is a relevant market […]

In this way, an individual specialized search engine competes with a small fraction of what the Google search engine does, because a user could employ either for one specific type of search. The CMA concludes that, from the consumer standpoint, a specialized search engine exerts only a limited competitive constraint on Google.

(Note that the CMA stressed that it did not perform a market definition exercise: “We have not carried out a formal market definition assessment, but have instead looked at competitive constraints across the sector…”).

In other words, the above critics recognize that search engines are merely tools that can serve multiple functions, and that competitive constraints may be different for some of these. But this has wider ramifications that policymakers have so far overlooked. 

When quizzed about his involvement with Neuralink (a company working on implantable brain–machine interfaces), Elon Musk famously argued that human beings already share a near-symbiotic relationship with machines (a point already made by others):

The purpose of Neuralink [is] to create a high-bandwidth interface to the brain such that we can be symbiotic with AI. […] Because we have a bandwidth problem. You just can’t communicate through your fingers. It’s just too slow.

Commentators were quick to spot the implications of this technology for the search industry:

Imagine a world when humans would no longer require a device to search for answers on the internet, you just have to think of something and you get the answer straight in your head from the internet.

As things stand, this example still belongs to the realm of sci-fi. But it neatly illustrates a critical feature of the search industry. 

Search engines are just the latest iteration (but certainly not the last) of technology that enables human beings to access specific pieces of information more rapidly. Before the advent of online search, consumers used phone directories, paper maps, encyclopedias, and other tools to find the information they were looking for. They would read newspapers and watch television to know the weather forecast. They went to public libraries to undertake research projects (some still do), etc.

And, in some respects, the search engine is already obsolete for many of these uses. For instance, virtual assistants like Alexa, Siri, Cortana and Google’s own Google Assistant can perform many functions that were previously the preserve of search engines: checking the weather, finding addresses and asking for directions, looking up recipes, answering general knowledge questions, finding goods online, etc. Granted, these virtual assistants partly rely on existing search engines to complete tasks. However, Google is much less dominant in this space, and search engines are not the sole source on which virtual assistants rely to generate results. Amazon’s Alexa provides a fitting example (here and here).

Along similar lines, it has been widely reported that 60% of online shoppers start their search on Amazon, while only 26% opt for Google Search. In other words, Amazon’s ability to rapidly show users the product they are looking for somewhat alleviates the need for a general search engine. In turn, this certainly constrains Google’s behavior to some extent. And much of the same applies to other websites that provide a specific type of content (think of Twitter, LinkedIn, Tripadvisor, Booking.com, etc.)

Finally, it is also revealing that the most common searches on Google are, in all likelihood, made to reach other websites — a function for which competition is literally endless.

The upshot is that Google Search and other search engines perform a bundle of functions. Most of these can be done via alternative means, and this will increasingly be the case as technology continues to advance. 

This is all the more important given that the vast majority of search engine revenue derives from roughly 30 percent of search terms (notably those that are linked to product searches). The remaining search terms are effectively a loss leader. And these profitable searches also happen to be those where competition from alternative means is, in all likelihood, the strongest (this includes competition from online retail platforms, and online travel agents like Booking.com or Kayak, but also from referral sites, direct marketing, and offline sources). In turn, this undermines US plaintiffs’ claims that Google faces little competition from rivals like Amazon, because they don’t compete for the entirety of Google’s search results (in other words, Google might face strong competition for the most valuable ads):

108. […] This market share understates Google’s market power in search advertising because many search-advertising competitors offer only specialized search ads and thus compete with Google only in a limited portion of the market. 

Critics might mistakenly take the above for an argument that Google has no market power because competition is “just a click away”. But the point is more subtle, and has important implications as far as market definition is concerned.

Authorities should not define the search market by arguing that no other rival is quite like Google (or one of its rivals) — as the DOJ and state AGs did in their complaint:

90. Other search tools, platforms, and sources of information are not reasonable substitutes for general search services. Offline and online resources, such as books, publisher websites, social media platforms, and specialized search providers such as Amazon, Expedia, or Yelp, do not offer consumers the same breadth of information or convenience. These resources are not “one-stop shops” and cannot respond to all types of consumer queries, particularly navigational queries. Few consumers would find alternative sources a suitable substitute for general search services. Thus, there are no reasonable substitutes for general search services, and a general search service monopolist would be able to maintain quality below the level that would prevail in a competitive market. 

And as the EU Commission did in the Google Search decision:

(162) For the reasons set out below, there is, however, limited demand side substitutability between general search services and other online services. […]

(163) There is limited substitutability between general search services and content sites. […]

(166) There is also limited substitutability between general search services and specialised search services. […]

(178) There is also limited substitutability between general search services and social networking sites.

Ad absurdum, if consumers suddenly decided to access information via other means, Google could be the only firm to provide general search results and yet have absolutely no market power. 

Take the example of Yahoo: Despite arguably remaining the most successful “web directory”, it likely lost any market power that it had when Google launched a superior — and significantly more successful — type of search engine. Google Search may not have provided a complete, literal directory of the web (as did Yahoo), but it offered users faster access to the information they wanted. In short, the Yahoo example shows that being unique is not equivalent to having market power. Accordingly, any market definition exercise that merely focuses on the idiosyncrasies of firms is likely to overstate their actual market power. 

Given what precedes, the question that authorities should ask is thus whether Google Search (or another search engine) performs so many unique functions that it may be in a position to restrict output. So far, no one appears to have convincingly answered this question.

Similar uncertainties surround the question of how a search engine might restrict output, especially on the user side of the search market. Accordingly, authorities will struggle to produce evidence (i) that Google has market power on that side of the market, and (ii) that its behavior has anticompetitive effects.

Consider the following:

The SSNIP test (which is the standard method of defining markets in antitrust proceedings) is inapplicable to the consumer side of search platforms. Indeed, it is simply impossible to apply a hypothetical 10% price increase to goods that are given away for free.
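To see the problem, consider the critical-loss arithmetic that typically accompanies a SSNIP analysis. The sketch below applies the standard critical-loss formula with purely illustrative numbers; at a price of zero, the exercise cannot even begin.

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Fraction of sales a hypothetical monopolist can afford to lose
    before a price increase of `ssnip` (e.g., 0.10 for 10%) becomes
    unprofitable, given a contribution margin `margin`.
    Standard critical-loss formula: ssnip / (ssnip + margin)."""
    return ssnip / (ssnip + margin)

# Priced good: a 10% SSNIP with a 40% margin stays profitable so long
# as fewer than 20% of unit sales are lost.
print(critical_loss(0.10, 0.40))  # -> 0.2

# Zero-price good: "raising the price by 10%" leaves the price at zero,
# so there is nothing for the test to measure on the user side.
free_price = 0.0
print(free_price * 1.10)  # -> 0.0
```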

This raises a deeper question: how would a search engine exercise its market power? 

For a start, it seems unlikely that a dominant search engine would begin charging fees to its users. For instance, empirical research pertaining to the magazine industry (also an ad-based two-sided market) suggests that increased concentration does not lead to higher magazine prices. Minjae Song notably finds that:

Taking the advantage of having structural models for both sides, I calculate equilibrium outcomes for hypothetical ownership structures. Results show that when the market becomes more concentrated, copy prices do not necessarily increase as magazines try to attract more readers.

It is also far from certain that a dominant search engine would necessarily increase the number of adverts it displays. To the contrary, market power on the advertising side of the platform might lead search engines to decrease the number of advertising slots that are available (i.e. reducing advertising output), thus showing fewer adverts to users. 
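This is standard monopoly arithmetic. A minimal sketch, with invented linear demand parameters, shows how market power on the advertising side translates into fewer ad slots sold at a higher price per slot, not more:

```python
# Toy model, not a model of any real platform: advertisers' inverse demand
# for slots is p = a - b*q, and marginal cost is zero for simplicity.
a, b = 10.0, 1.0  # invented demand parameters

q_competitive = a / b        # price competed down to marginal cost (zero)
q_monopoly = a / (2 * b)     # revenue max: d/dq[q*(a - b*q)] = a - 2*b*q = 0
p_monopoly = a - b * q_monopoly

print(q_competitive, q_monopoly, p_monopoly)  # 10.0 5.0 5.0
# The monopolist sells half as many slots at a positive price: restricting
# output on the ad side means users see fewer adverts, not more.
```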

Finally, it is not obvious that market power would lead search engines to significantly degrade their product (as this could ultimately hurt ad revenue). For example, empirical research by Avi Goldfarb and Catherine Tucker suggests that there is some limit to the type of adverts that search engines could profitably impose upon consumers. They notably find that ads that are both obtrusive and targeted decrease subsequent purchases:

Ads that match both website content and are obtrusive do worse at increasing purchase intent than ads that do only one or the other. This failure appears to be related to privacy concerns: the negative effect of combining targeting with obtrusiveness is strongest for people who refuse to give their income and for categories where privacy matters most.

The preceding paragraphs find some support in the theoretical literature on two-sided markets, which suggests that competition on the user side of search engines is likely to be particularly intense and beneficial to consumers (because users are more likely to single-home than advertisers, and because each additional user creates a positive externality on the advertising side of the market). For instance, Jean-Charles Rochet and Jean Tirole find that:

The single-homing side receives a large share of the joint surplus, while the multi-homing one receives a small share.

This is just a restatement of Mark Armstrong’s “competitive bottlenecks” theory:

Here, if it wishes to interact with an agent on the single-homing side, the multi-homing side has no choice but to deal with that agent’s chosen platform. Thus, platforms have monopoly power over providing access to their single-homing customers for the multi-homing side. This monopoly power naturally leads to high prices being charged to the multi-homing side, and there will be too few agents on this side being served from a social point of view (Proposition 4). By contrast, platforms do have to compete for the single-homing agents, and high profits generated from the multi-homing side are to a large extent passed on to the single-homing side in the form of low prices (or even zero prices).
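A minimal numerical sketch of that logic (all parameters invented for illustration) shows how competition for single-homing users pushes their price to zero or below, while multi-homing advertisers pay the full value of access:

```python
# Stylized competitive-bottleneck arithmetic; parameters are invented.
v = 2.0  # ad revenue a platform can extract per user it controls access to
c = 0.5  # platform's cost of serving one user

# Each platform has a monopoly over advertisers' access to its own users,
# so it can charge advertisers the full per-user value.
advertiser_fee_per_user = v

# Platforms compete head-to-head for single-homing users, bidding the user
# price down until the marginal user just breaks even: price + v = c.
user_price = c - v  # -1.5: a subsidy; in practice, a zero price plus perks

print(f"advertisers pay {advertiser_fee_per_user:.2f} per user")
print(f"users pay {user_price:.2f} (i.e., they are subsidized)")
```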

All of this is not to suggest that Google Search has no market power, or that monopoly is necessarily less problematic in the search engine industry than in other markets. 

Instead, the argument is that analyzing competition on the user side of search platforms is unlikely to yield dispositive evidence of market power or anticompetitive effects. This is because market power is hard to measure on this side of the market, and because even a monopoly platform might not significantly restrict user output. 

That might explain why the DOJ and state AGs’ analysis of anticompetitive effects is so limited. Take the following paragraph (provided without further supporting evidence):

167. By restricting competition in general search services, Google’s conduct has harmed consumers by reducing the quality of general search services (including dimensions such as privacy, data protection, and use of consumer data), lessening choice in general search services, and impeding innovation. 

Given these inherent difficulties, antitrust investigators would do better to focus on the side of those platforms where mainstream IO tools are much easier to apply and where a dominant search engine would likely restrict output: the advertising market. Not only is it the market where search engines are most likely to exert their market power (thus creating a deadweight loss), but — because it involves monetary transactions — this side of the market lends itself to the application of traditional antitrust tools.  

Looking at the right side of the market

Finally, and unfortunately for Google’s critics, available evidence suggests that its position on the (online) advertising market might not meet the requirements necessary to bring a monopolization case (at least in the US).

For a start, online advertising appears to exhibit the prima facie signs of a competitive market. As Geoffrey Manne, Sam Bowman and Eric Fruits have argued:

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues are consistent with a growing and increasingly competitive market.
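The growth rates in that passage hang together arithmetically, as a quick back-of-the-envelope check confirms (the inputs below are simply the quoted figures):

```python
# Back-of-the-envelope check of the quoted figures (2010-2019, 9 years).
spend_2010, spend_2019 = 26e9, 130e9
years = 9

spend_growth = (spend_2019 / spend_2010) ** (1 / years) - 1  # ~19.6%/yr
price_factor = 0.60                                          # PPI down ~40%
price_growth = price_factor ** (1 / years) - 1               # ~-5.5%/yr

# Real output growth: deflate spending growth by price growth.
quantity_growth = (1 + spend_growth) / (1 + price_growth) - 1  # ~26.6%/yr

print(f"spending: {spend_growth:.1%}/yr, prices: {price_growth:.1%}/yr, "
      f"ad quantity: {quantity_growth:.1%}/yr")
```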

Second, empirical research suggests that the market might need to be widened to include offline advertising. For instance, Avi Goldfarb and Catherine Tucker show that there can be important substitution effects between online and offline advertising channels:

Using data on the advertising prices paid by lawyers for 139 Google search terms in 195 locations, we exploit a natural experiment in “ambulance-chaser” regulations across states. When lawyers cannot contact clients by mail, advertising prices per click for search engine advertisements are 5%–7% higher. Therefore, online advertising substitutes for offline advertising.

Of course, a careful examination of the advertising industry could also lead authorities to define a narrower relevant market. For example, the DOJ and state AG complaint argued that Google dominated the “search advertising” market:

97. Search advertising in the United States is a relevant antitrust market. The search advertising market consists of all types of ads generated in response to online search queries, including general search text ads (offered by general search engines such as Google and Bing) […] and other, specialized search ads (offered by general search engines and specialized search providers such as Amazon, Expedia, or Yelp). 

Likewise, the European Commission concluded that Google dominated the market for “online search advertising” in the AdSense case (though the full decision has not yet been made public). Finally, the CMA’s online platforms report found that display and search advertising belonged to separate markets. 

But these are empirical questions that could dispositively be answered by applying traditional antitrust tools, such as the SSNIP test. And yet, there is no indication that the authorities behind the US complaint undertook this type of empirical analysis (and until its AdSense decision is made public, it is not clear that the EU Commission did so either). Accordingly, there is no guarantee that US courts will go along with the DOJ and state AGs’ findings.

In short, it is far from certain that Google currently enjoys an advertising monopoly, especially if the market is defined more broadly than that for “search advertising” (or the even narrower market for “General Search Text Advertising”). 

Concluding remarks

The preceding paragraphs have argued that a successful antitrust case against Google is anything but a foregone conclusion. In order to successfully bring a suit, authorities would notably need to figure out just what market it is that Google is monopolizing. In turn, that would require a finer understanding of what competition, and monopoly, look like in the search and advertising industries.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Shane Greenstein (Professor of Business Administration, Harvard Business School).
]

In his book, Nicolas Petit approaches antitrust issues by analyzing their economic foundations, and he aspires to bridge gaps between those foundations and the common points of view. In light of the divisiveness of today’s debates, I appreciate Petit’s calm and deliberate view of antitrust, and I respect his clear and engaging prose.

I spent a lot of time with this topic when writing a book (How the Internet Became Commercial, 2015, Princeton University Press). If I have something unique to add to a review of Petit’s book, it comes from the role Microsoft played in the events in my book.

Many commentators have speculated on what precise charges could be brought against Facebook, Google/Alphabet, Apple, and Amazon. For the sake of simplicity, let’s call these the “big four.” While I have no special insight to bring to such speculation, for this post I can do something different, and look forward by looking back. For the time being, Microsoft has been spared scrutiny by contemporary political actors. (It seems safe to presume Microsoft’s managers prefer to be left out.) While it is tempting to focus on why this has happened, let’s focus on a related issue: What shadow did Microsoft’s trials cast on the antitrust issues facing the big four?

Two types of lessons emerged from Microsoft’s trials, and both tend to be less appreciated by economists. One set of lessons emerged from the media flood of the flotsam and jetsam of sensationalistic factoids and sound bites, drawn from Congressional and courtroom testimony. That yielded lessons about managing sound and fury – i.e., mostly about reducing the cringe-worthy quotes from CEOs and trial witnesses.

Another set of lessons pertained to the role and limits of economic reasoning. Many decision makers reasoned by analogy and metaphor. That is especially so for lawyers and executives. These metaphors do not make economic reasoning wrong, but they do tend to shape how an antitrust question takes center stage with a judge, as well as in the court of public opinion. These metaphors also influence the stories a CEO tells to employees.

If you asked me to predict how things will go for the big four, based on what I learned from studying Microsoft’s trials, I would say that the outcome depends on which metaphor and analogy gets the upper hand.

In that sense, I want to argue that Microsoft’s experience depended on “the fox and shepherd problem.” When is a platform leader better thought of as a shepherd, helping partners achieve a healthy outcome, and when as a fox in charge of a henhouse, ready to sacrifice a partner for self-serving purposes? I forecast that the same metaphors will shape the experience of the big four.

Gaps and analysis

The fox-shepherd problem never shows up when a platform leader is young and its platform is small. As the platform reaches bigger scale, however, the problem becomes more salient. Conflicts of interests emerge and focus attention on platform leadership.

Petit frames these issues within a Schumpeterian vision. In this view, firms compete for dominant positions over time, potentially with one dominant firm replacing another. Potential competition has a salutary effect if established firms perceive a threat from the future shadow of such competitors, motivating innovation. In this view, antitrust’s role might be characterized as “keeping markets open so there is pressure on the dominant firm from potential competition.”

In the Microsoft trial, economists framed the Schumpeterian tradeoff in the vocabulary of economics. Firms that supply complements at one point could become suppliers of substitutes at a later point, if they are allowed to. In other words, platform leaders today support complements that enhance the value of the platform, while also having the motive and ability to discourage those same business partners from developing services that substitute for the platform’s services, which could reduce the platform’s value. Seen through this lens, platform leaders inherently face a conflict of interest, and antitrust law should intervene if platform leaders place excessive limitations on existing business partners.

This economic framing is not wrong. Rather, it is necessary, but not sufficient. If I take a sober view of events in the Microsoft trial, I am not convinced the economics alone persuaded the judge in Microsoft’s case, or, for that matter, the public.

As judges sort through the endless detail of contracting provisions, they need a broad perspective, one that sharpens their focus on a key question. One central question in particular inhabits a lot of a judge’s mindshare: how did the platform leader use its discretion, and for what purposes? In case it is not obvious, shepherds deserve a lot of discretion, while only a fool gives a fox much license.

Before the trial, when it initially faced this question from reporters and Congress, Microsoft tried to dismiss the discussion altogether. Its representatives argued that high technology differs from every other market in its speed and productivity, and, therefore, ought to be thought of as incomparable to other antitrust examples. This reflected the high-tech elite’s view of their own exceptionalism.

Reporters dutifully restated this argument, and, long story short, it did not get far with the public once the sensationalism started making headlines, and it especially did not get far with the trial judge. To be fair, if you watched recent congressional testimony, it appears as if the lawyers for the big four instructed their CEOs not to try this approach this time around.

Origins

Well before lawyers and advocates exaggerate claims, the perspectives of both sides usually have some merit, and usually the twain do not meet. Most executives tend to remember every detail behind growth, know the risks confronted and overcome, and are usually reluctant to give up something that works for their interests, and sometimes these interests can be narrowly defined. In contrast, many partners will know examples of a rule that hindered them, point to complaints that executives ignored, and aspire to have rules changed, and, again, their interests tend to be narrow.

Consider the quality-control process today for iPhone apps as an example. The merits and absurdity of some of Apple’s conduct get a lot of attention in online forums, especially the 30% take for Apple. Apple can reasonably claim that the present set of rules works well overall, and only emerged after considerable experimentation, and that today it seeks to protect all who benefit from the entire system, like a shepherd. It is no surprise, however, that some partners accuse Apple of tweaking rules to their own benefit, and using the process to further Apple’s ambitions at the expense of the partners’, like a fox in a henhouse. So it goes.

More generally, based on publicly available information, all of the big four already face this debate. Self-serving behavior shows up in different guises in different parts of the big four’s businesses, but it is always there. As noted, Apple’s apps compete with the apps of others, so it has incentives to shape the distribution of other apps. Amazon’s products compete with some products coming from its third-party sellers, and it too faces mixed incentives. Google’s services compete with online services that also advertise on its search engine, and it too faces complaints over the fees it charges for listings on the Play Store. Facebook faces an additional issue, because it has bought firms that were trying to grow their own platforms to compete with Facebook.

Look, those four each contain rather different businesses in their details, which merits some caution in making a sweeping characterization. My only point: the question about self-serving behavior arises in each instance. That frames a fox-shepherd problem for prosecutors in each case.

Lessons from prior experience

Circling back to lessons of the past for antitrust today, the fox-shepherd problem was one of the deeper sources of miscommunication leading up to the Microsoft trial. In the late 1990s Microsoft could reasonably claim to be a shepherd for all its platform’s partners, and it could reasonably claim to have improved the platform in ways that benefited partners. Moreover, for years some of the industry gossip about their behavior stressed misinformed nonsense. Accordingly, Microsoft’s executives had learned to trust their own judgment and to mistrust the complaints of outsiders. Right in line with that mistrust, many employees and executives took umbrage at being characterized as a fox in a henhouse, dismissing the accusations out of hand.

Those habits of mind poorly positioned the firm for a court case. As any observer of the trial knows, when prosecutors came looking, they found lots of examples of fox-like behavior. Onerous contract restrictions and cumbersome processes for business partners produced plenty of bad optics in court, and fueled the prosecution’s case that the platform had become too self-serving at the expense of competitive processes. Prosecutors had plenty to work with when it came time to prove motive, intent, and ability to misuse discretion. 

What is the lesson for the big four? Ask an executive in technology today, and sometimes you will hear the following: as long as a platform’s actions can be construed as friendly to customers, the platform leader will be off the hook. That lesson is not wrong, but it is incomplete. Looking with hindsight and foresight, that perspective seems too sanguine about the prospects for the big four. Microsoft had done plenty for its customers, but so what? There was plenty of evidence of acting like a fox in a henhouse. The bigger lesson is this: all it took were a few bad examples to paint a picture of a pattern, and every firm has such examples.

Do not get me wrong. I am not saying a fox-and-henhouse analogy is fair or unfair to platform leaders. Rather, I am saying that economists like to think the economic trade-off between the interests of platform leaders, platform partners, and platform customers emerges from some grand policy compromise. That is not how prosecutors think, nor how judges decide. In the Microsoft case there was no such grand consideration. The economic framing of the case only went so far. As it was, the decision was vulnerable to metaphor, shrewdly applied and convincingly argued. Done persuasively, with enough examples of selfish behavior, excuses about “helping customers” came across as empty.

Policy

Some advocates argue, somewhat philosophically, that platforms deserve discretion, and governments are bound to err once they intervene. I have sympathy with that point of view, but only up to a point. Below are two examples from outside antitrust where governments routinely do not give the big four a blank check.

First, when it started selling ads, Google banned ads for cigarettes, porn and alcohol, and it downgraded its quality score for websites that used deceptive means to attract users. That helped the service foster trust with new users, enabling it to grow. After it became bigger, should Google have continued to have unqualified discretion to shepherd the entire ad system? Nobody thinks so. A while ago the Federal Trade Commission decided to investigate deceptive online advertising, just as it investigates deceptive advertising in other media. It is not a big philosophical step to next ask whether Google should have unfettered discretion to structure the ad business, search process, and related e-commerce to its own benefit.

Here is another example, this one about Facebook. Over the years Facebook cycled through a number of rules for sharing information with business partners, generally taking a “relaxed” attitude toward enforcing those policies. Few observers cared when Facebook was small, but many governments started to care after Facebook grew to billions of users. Facebook’s lax monitoring did not line up with the preferences of many governments. It should not come as a surprise now that many governments want to regulate Facebook’s handling of data. Like it or not, this question lies squarely within the domain of government privacy policy. Again, the next step is small. Why should other parts of its business remain solely in Facebook’s discretion, like its ability to buy other businesses?

This gets us to the other legacy of the Microsoft case: as we think about future policy dilemmas, is there a general set of criteria for the antitrust issues facing all four firms? Veterans of court cases will point out that every court case is its own circus. Just because Microsoft failed to be persuasive in its day does not imply any of the big four will be unpersuasive.

Looking back, the Microsoft trial did not articulate a general set of principles about acceptable or excusable self-serving behavior from a platform leader. It did not settle what criteria best determine when a court should consider a platform leader’s behavior closer to that of a shepherd or a fox. The appropriate general criteria remain unclear.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Richard N. Langlois
(Professor of Economics, University of Connecticut).]

Market share has long been the talisman of antitrust economics.  Once we properly define what “the product” is, all we have to do is look at shares in the relevant market.  In such an exercise, today’s high-tech firms come off badly.  Each of them has a large share of the market for some “product.” What I appreciate about Nicolas Petit’s notion of “moligopoly” is that it recognizes that genuine competition is a far more complex and interesting phenomenon, one that goes beyond the category of “the product.”

In his chapter 4, Petit lays out how this works with six of today’s large high-tech companies, adding Netflix to the usual Big Five of Amazon, Apple, Facebook, Google, and Microsoft.  If I understand properly, what he means by “moligopoly” is that these large firms have their hands in many different relevant markets.  Because they seem to be selling different “products,” they don’t seem to be competing with one another.  Yet, in a fundamental sense, they are very much competing with one another, and perhaps with firms that do not yet exist.  

In this view, diversification is at the heart of competition.  Indeed, Petit wonders at one point whether we are in a new era of “conglomeralism.”  I would argue that the diversified high-tech firms we see today are actually very unlike the conglomerates of the late twentieth century.  In my view, the earlier conglomerates were not equilibrium phenomena but rather short-lived vehicles for the radical restructuring of the American economy in the post-Bretton Woods era of globalization.  A defining characteristic of those firms was that their diversification was unrelated, not just in terms of the SIC codes of their products but also in terms of their underlying capabilities.  If we look only at the products on the demand side, today’s high-tech firms might also seem to reflect unrelated diversification.  In fact, however, unlike in the twentieth-century conglomerates, the activities of present-day high-tech firms are connected on the supply side by a common set of capabilities involving the deployment of digital technology. 

Thus the boundaries of markets can shift and morph unexpectedly.  Enterprises that may seem entirely different actually harbor the potential to invade one another’s territory (or invade new territory – “competing against non-consumption”).  What Amazon can do, Google can do; and so can Microsoft.  The arena is competitive not because firms have a small share of relevant markets but because all of them sit beneath four or five Damoclean swords, suspended by the thinnest of horsehairs.  No wonder the executives of high-tech firms sound paranoid.

Petit speculates that today’s high-tech companies have diversified (among other reasons) because of complementarities.  That may be part of the story.  But as Carliss Baldwin argues (and as Petit mentions in passing), we can think about the investments high-tech firms seem to be making as options – experiments that may or may not pay off.  The more uncertain the environment, the more valuable it is to have many diverse options.  A decade or so after the breakup of AT&T, the “baby Bells” were buying into landline, cellular, cable, satellite, and many other things, not because, as many thought at the time, these were complementary, but because no one had any idea what would be important in the future (including whether there would be any complementarities).  As uncertainty resolved, these lines of business became more specialized, and the babies unbundled.  (As I write, AT&T, the baby Bell that snagged the original company name, is probably about to sell off DirecTV at a loss.)  From this perspective, the high degree of diversification we observe today implies not control of markets but the opposite – existential uncertainty about the future.

I wonder whether this kind of competition is unique to the age of the Internet.  There is an entire genre of business-school cases built around an epiphany of the form: “we thought we were in the X business, but we were really in the Y business all along!”  I have recently read (listened to, technically) Marc Levinson’s wonderful history of containerized shipping.  Here the real competition occurred across modes of transport, not within existing well-defined markets.  The innovators came to realize that they were in the logistics business, not in the trucking business or the railroad business or the ocean-shipping business.  (Some of the most interesting parts of the story were about how entrepreneurship happens in a heavily regulated environment.  At one point early in the story, Malcolm McLean, the most important of these entrepreneurs, had to buy up other trucking firms just to obtain the ICC permits necessary to redesign routes efficiently.)  Of course, containerized shipping is also a modular system that some economists have accused of being a general-purpose technology like the Internet.