
If you have heard of the Jones Act, chances are you think it needs to be repealed. That is, at least, the consensus in the economics profession. However, this consensus seems to be driven by an application of the sort of rules of thumb that one picks up from economics courses, rather than an application of economic theory.

For those who are unaware, the Jones Act requires that any shipping between two U.S. ports be carried on a U.S.-built, U.S.-owned ship that flies the U.S. flag and is crewed by U.S. citizens. When those who have memorized some of the rules of thumb in the field of economics hear that description, they immediately think “this is protectionism and protectionism is bad.” It therefore seems obvious that the Jones Act must be bad. After all, based on this description, it seems like it is designed to protect U.S. shipbuilders, U.S. crews, and U.S.-flagged ships from foreign competition.

Critics seize on this narrative. They point to the higher cost of Jones Act ships compared with foreign-flagged ships and argue that the law’s costs are astronomical. Based on that type of criticism, the Jones Act seems so obviously costly that one might wonder how it is possible to defend the law in any way.

I reject this criticism. I do not reject this over some minor quibble with the numbers. In true Hendricksonian fashion, I reject this criticism because it gets the underlying economic theory wrong.

Let’s start by thinking about some critical issues in Coasean terms. During peacetime, the U.S. Navy does not need to maintain the sort of capacity that it would have during a time of war. It would not be cost-effective to do so. However, the Navy would like to be able to expand its capacity rapidly in the event of a war or other national emergency. To do so, the country needs shipbuilding capacity. Building ships and training crews to operate those ships, however, takes time. This might be time that the Navy does not have. At the very least, this could leave the United States at a significant disadvantage.

Of course, there are ships and crews available in the form of the U.S. Merchant Marine. Thus, there are gains from trade to be had. The government could pay the Merchant Marine to provide sealift during times of war and other national emergencies. However, this compensation scheme is complicated. For example, if the government waits until a war or a national emergency to negotiate, it creates a holdup problem: knowing that the government needs its services immediately, the Merchant Marine could demand well-above-market prices for them. On the other hand, the government could simply requisition the ships and draft the crews into service whenever there is a war or national emergency. Knowing that this is a possibility, the Merchant Marine would tend to underinvest in both physical and human capital.

Given these problems, the solution is to agree to terms ahead of time. The Merchant Marine agrees to provide its services to the government during times of war and other national emergencies in exchange for compensation. The way to structure that compensation so as to avoid both holdup and underinvestment is to provide it in the form of peacetime subsidies.

Thus, the government provides peacetime subsidies in exchange for the services of the Merchant Marine during wartime. This is a straightforward Coasean bargain.

Now, let’s think about the Jones Act. Jones Act ships are implicitly subsidized because ships that do not meet the law’s criteria are not allowed to engage in port-to-port shipping in the United States. The requirement that these ships be U.S.-owned and fly the U.S. flag gives the government the legal authority to call them into service. The requirement that the ships be built in the United States is designed to ensure that the ships meet the needs of the U.S. military and to subsidize shipbuilding in the United States. The requirement to use U.S. crews is designed to provide an incentive for the accumulation of the necessary human capital. Since the law reserves port-to-port shipping within the United States for ships with these characteristics, it provides these firms with rents that compensate them for their service during wartime and national emergencies.

Critics, of course, are likely to argue that I have a “just so” theory of the Jones Act. In other words, they might argue that I have simply structured an economic narrative around a set of existing facts. Those critics would be wrong for the following reasons.

First, the Jones Act is not some standalone law when it comes to maritime policy. There is a long history in the United States of trying to determine the optimal way to subsidize the maritime industry. Second, if this type of policy is just a protectionist giveaway, then it should be confined to the maritime industry. However, this isn’t true. The United States has a long history of subsidizing transportation that is crucial for use in the military. This includes subsidies for horse-breeding and the airline industry. Finally, critics would have to explain why wasteful maritime policies have been quickly overturned, while the Jones Act continues to survive.

The critics also dramatically overstate the costs of the Jones Act. This is partly because they do not understand the particularities of the law. For example, to estimate the costs, critics often compare the cost of the Jones Act ships to ships that fly a foreign flag and use foreign crews. The argument here is that the repeal of the Jones Act would result in these foreign-flagged ships with foreign crews taking over U.S. port-to-port shipping.

There are two problems with this argument. First, cabotage restrictions do not originate with the Jones Act. Rather, the law clarifies and closes loopholes in previous laws. Second, the use of foreign crews would be a violation of U.S. immigration law. Furthermore, this type of shipping would still be subject to other U.S. laws to which these foreign-flagged ships are not subject today. Given that the overwhelming majority of the cost differential is explained by differences in labor costs, it is hard to see where, exactly, the cost savings from repeal would actually come from.

None of this is to say that the Jones Act is the first-best policy or that the law is sufficient to accomplish the military’s goals. In fact, the one thing that critics and advocates of the law seem to agree on is that the law is not sufficient to accomplish the intended goals. My own work implies a need for direct subsidies (or lower tax rates) on the capital used by the maritime industry. However, the critics need to be honest and admit that, even if the law were repealed, the cost savings are nowhere near what they claim. In addition, this wouldn’t be the end of maritime subsidies (in fact, other subsidies already exist). Instead, the Jones Act would likely be replaced by some other form of subsidy to the maritime industry.

Many defense-based arguments for subsidies are dubious. However, in the case of maritime policy, the Coasean bargain is clear.

The U.S. Supreme Court’s just-published unanimous decision in AMG Capital Management LLC v. FTC—holding that Section 13(b) of the Federal Trade Commission Act does not authorize the commission to obtain court-ordered equitable monetary relief (such as restitution or disgorgement)—is not surprising. Moreover, by dissipating the cloud of litigation uncertainty that has surrounded the FTC’s recent efforts to seek such relief, the court cleared the way for consideration of targeted congressional legislation to address the issue.

But what should such legislation provide? After briefly summarizing the court’s holding, I will turn to the appropriate standards for optimal FTC consumer redress actions, which inform a welfare-enhancing legislative fix.

The Court’s Opinion

Justice Stephen Breyer’s opinion for the court is straightforward, centering on the structure and history of the FTC Act. Section 13(b) makes no direct reference to monetary relief. Its plain language merely authorizes the FTC to seek a “permanent injunction” in federal court against “any person, partnership, or corporation” that it believes “is violating, or is about to violate, any provision of law” that the commission enforces. In addition, by its terms, Section 13(b) is forward-looking, focusing on relief that is prospective, not retrospective (this cuts against the argument that payments for prior harm may be recouped from wrongdoers).

Furthermore, the FTC Act provisions that specifically authorize conditioned and limited forms of monetary relief (Section 5(l) and Section 19) are in the context of commission cease and desist orders, involving FTC administrative proceedings, unlike Section 13(b) actions that avoid the administrative route. In sum, the court concludes that:

[T]o read §13(b) to mean what it says, as authorizing injunctive but not monetary relief, produces a coherent enforcement scheme: The Commission may obtain monetary relief by first invoking its administrative procedures and then §19’s redress provisions (which include limitations). And the Commission may use §13(b) to obtain injunctive relief while administrative proceedings are foreseen or in progress, or when it seeks only injunctive relief. By contrast, the Commission’s broad reading would allow it to use §13(b) as a substitute for §5 and §19. For the reasons we have just stated, that could not have been Congress’ intent.

The court’s opinion concludes by succinctly rejecting the FTC’s arguments to the contrary.

What Comes Next

The Supreme Court’s decision has been anticipated by informed observers. All four sitting FTC Commissioners have already called for a Section 13(b) “legislative fix,” and in an April 20 hearing of the Senate Commerce Committee, Chairwoman Maria Cantwell (D-Wash.) emphasized that, “[w]e have to do everything we can to protect this authority and, if necessary, pass new legislation to do so.”

What, however, should be the contours of such legislation? In considering alternative statutory rules, legislators should keep in mind not only the possible consumer benefits of monetary relief, but the costs of error, as well. Error costs are a ubiquitous element of public law enforcement, and this is particularly true in the case of FTC actions. Ideally, enforcers should seek to minimize the sum of the costs attributable to false positives (type I error), false negatives (type II error), administrative costs, and disincentive costs imposed on third parties, which may also be viewed as a subset of false positives. (See my 2014 piece, “A Cost-Benefit Framework for Antitrust Enforcement Policy.”)

Monetary relief is most appropriate in cases where error costs are minimal, and the quantum of harm is relatively easy to measure. This suggests a spectrum of FTC enforcement actions that may be candidates for monetary relief. Ideally, selection of targets for FTC consumer redress actions should be calibrated to yield the highest return to scarce enforcement resources, with an eye to optimal enforcement criteria.

Consider consumer protection enforcement. The strongest cases involve hardcore consumer fraud (where fraudulent purpose is clear and error is almost nil); they best satisfy accuracy in measurement and error-cost criteria. Next along the spectrum are cases of non-fraudulent but unfair or deceptive acts or practices that potentially involve some degree of error. In this category, situations involving easily measurable consumer losses (e.g., systematic failure to deliver particular goods requested or poor quality control yielding shipments of ruined goods) would appear to be the best candidates for monetary relief.

Moving along the spectrum, matters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involving allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

For example, consider assigning a consumer welfare loss number to a patent antitrust settlement that may or may not have delayed entry of a generic drug by some length of time (depending upon the strength of the patent) or to a decision by a drug company to modify a drug slightly just before patent expiration in order to obtain a new patent period (raising questions of valuing potential product improvements). These and other examples suggest that only rarely should the FTC pursue requests for disgorgement or restitution in antitrust cases, if error-cost-centric enforcement criteria are to be honored.

Unfortunately, the FTC currently has nothing to say about when it will seek monetary relief in antitrust matters. Commendably, in 2003, the commission issued a Policy Statement on Monetary Equitable Remedies in Competition Cases specifying that it would only seek monetary relief in “exceptional cases” involving a “[c]lear [v]iolation” of the antitrust laws. Regrettably, in 2012, a majority of the FTC (with Commissioner Maureen Ohlhausen dissenting) withdrew that policy statement and the limitations it imposed. As I concluded in a 2012 article:

This action, which was taken without the benefit of advance notice and public comment, raises troubling questions. By increasing business uncertainty, the withdrawal may substantially chill efficient business practices that are not well understood by enforcers. In addition, it raises the specter of substantial error costs in the FTC’s pursuit of monetary sanctions. In short, it appears to represent a move away from, rather than towards, an economically enlightened antitrust enforcement policy.

In a 2013 speech, then-FTC Commissioner Josh Wright also lamented the withdrawal of the 2003 Statement, and stated that he would limit:

… the FTC’s ability to pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single firm conduct, only if the monopolist’s conduct has no plausible efficiency justification. This latter category would include fraudulent or deceptive conduct, or tortious activity such as burning down a competitor’s plant.

As a practical matter, the FTC does not bring cases of this sort. The DOJ brings naked price-fixing cases and the unilateral conduct cases noted are as scarce as unicorns. Given that fact, Wright’s recommendation may rightly be seen as a rejection of monetary relief in FTC antitrust cases. Based on the previously discussed serious error-cost and measurement problems associated with monetary remedies in FTC antitrust cases, one may also conclude that the Wright approach is right on the money.

Finally, a recent article by former FTC Chairman Tim Muris, Howard Beales, and Benjamin Mundel opined that Section 13(b) should be construed to “limit[] the FTC’s ability to obtain monetary relief to conduct that a reasonable person would know was dishonest or fraudulent.” Although such a statutory reading is now precluded by the Supreme Court’s decision, its incorporation in a new statutory “fix” would appear ideal. It would allow for consumer redress in appropriate cases, while avoiding the likely net welfare losses arising from a more expansive approach to monetary remedies.

Conclusion

The AMG Capital decision is sure to generate legislative proposals to restore the FTC’s ability to secure monetary relief in federal court. If Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices. Giving the FTC carte blanche to obtain financial recoveries in the full spectrum of antitrust and consumer protection cases would spawn uncertainty and could chill a great deal of innovative business behavior, to the ultimate detriment of consumer welfare.



Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call: 

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).

Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.

This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.

There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.

What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for a firm to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D becomes a stock of R&D that it can use in the production of whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.

This method (set out by Bronwyn Hall (1990, 1993)) uses a firm’s annual expenditures on R&D (a separate line item in most company accounts) in the “perpetual inventory” method to calculate a firm’s stock of R&D in any particular year. This perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., its stock of R&D—should not be controversial.

All this method requires to obtain a firm’s stock of R&D for this year is knowledge of a firm’s R&D stock and its investment in R&D (i.e., its R&D expenditures) last year. This year’s R&D stock is then the sum of those R&D expenditures and its undepreciated R&D stock that is carried forward into this year.
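In symbols (the notation is mine, simply restating the verbal description above): writing K(t) for the stock of R&D in year t, R(t) for R&D expenditure in year t, and δ for the depreciation rate, this year’s stock is K(t) = R(t−1) + (1 − δ) × K(t−1).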

As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would only use a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of R&D through the increased knowledge and expertise of its employees, it seems reasonable to include this in a firm’s stock of R&D.

As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate of depreciation of the stock of R&D each period. Hall suggests a depreciation rate of 15% per year (compared to roughly 7% per year for physical capital), and estimates presented by Hall, along with Wendy Li (2018), suggest that, in some industries, the figure can be as high as 50%, albeit with a wide range across industries.

The other assumption required for this method is an estimate of the firm’s initial level of stock. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990-2016. This means that you can calculate a firm’s stock of R&D for each year once you have its R&D stock in the previous year via the formula above.

When calculating the firm’s R&D stock for 2016, you need to know what its R&D stock was in 2015, while to calculate its R&D stock for 2015 you need to know its R&D stock in 2014, and so on backward until you reach the first year for which you have data: in this case, 1990.

However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.

There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you can assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter being taken as 8% per year by Hall). Note that, given the high depreciation rates for the stock of R&D, it turns out that the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
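To make the mechanics concrete, here is a minimal Python sketch of the perpetual-inventory calculation as described above. The function name and the sample expenditure figures are invented for illustration, and the 15% depreciation rate and 8% growth rate are simply the Hall-style values mentioned in the text, not recommendations.

```python
def rd_stock_series(rd_spend, dep=0.15, growth=0.08):
    """Perpetual-inventory estimate of a firm's R&D stock.

    rd_spend: list of annual R&D expenditures, earliest year first.
    The starting value follows the Hall-style assumption described above:
    first-year expenditure divided by (depreciation + growth).
    """
    stocks = [rd_spend[0] / (dep + growth)]  # assumed starting stock (e.g., 1990)
    for t in range(1, len(rd_spend)):
        # This year's stock = last year's R&D spend + undepreciated prior stock.
        stocks.append(rd_spend[t - 1] + (1 - dep) * stocks[-1])
    return stocks

# Illustrative only: a firm spending 100 per year from 1990 through 2016.
stocks = rd_stock_series([100] * 27)
print(round(stocks[0], 1), round(stocks[-1], 1))
```

Because the stock depreciates at 15% per year, only about 44% of the assumed 1990 starting value (0.85 raised to the fifth power) survives after five years, which is why the exact starting-value assumption matters little for estimates toward the end of a reasonably long series.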

Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above does. For example, sometimes a firm’s stock of R&D is measured using a simple count of the number of patents it holds. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing numbers of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to use an “average value of patents sold recently” to value it. At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.

The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.

Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

Judges sometimes claim that they do not pick winners when they decide antitrust cases. Nothing could be further from the truth.

Competitive conduct by its nature harms competitors, and so if antitrust were merely to prohibit harm to competitors, antitrust would then destroy what it is meant to promote.

What antitrust prohibits, therefore, is not harm to competitors but rather harm to competitors that fails to improve products. Only in this way is antitrust able to distinguish between the good firm that harms competitors by making superior products that consumers love and that competitors cannot match and the bad firm that harms competitors by degrading their products without offering consumers anything better than what came before.

That means, however, that antitrust must pick winners: antitrust must decide what is an improvement and what not. And a more popular search engine is a clear winner.

But one should not take its winningness for granted. For once upon a time there was another winner that the courts always picked, blocking antitrust case after antitrust case. Until one day the courts stopped picking it.

That was the economy of scale.

The Structure of the Google Case

Like all antitrust cases that challenge the exercise of power, the government’s case against Google alleges denial of an input to competitors in some market. Here the input is default search status in smartphones, the competitors are rival search providers, and the market is search advertising. The basic structure of the case is depicted in the figure below.

Although brought as a monopolization case under Section 2 of the Sherman Act, this is at heart an exclusive dealing case of the sort normally brought under Section 1 of the Sherman Act: the government’s core argument is that Google uses contracts with smartphone makers, pursuant to which the smartphone makers promise to make Google, and not competitors, the search default, to harm competing search advertising providers and by extension competition in the search advertising market.

The government must show anticompetitive conduct, monopoly power, and consumer harm in order to prevail.

Let us assume that there is monopoly power. The company has more than 70% of the search advertising market, which is in the zone normally required to prove that element of a monopolization claim.

The problem of anticompetitive conduct is only slightly more difficult.

Anticompetitive conduct is only ever one thing in antitrust: denial of an essential input to a competitor. There is no other way to harm rivals.

(To be sure, antitrust prohibits harm to competition, not competitors, but that means only that harm to competitors is necessary but insufficient for liability. The consumer harm requirement decides whether the requisite harm to competitors is also harm to competition.)

It is not entirely clear just how important default search status really is to running a successful search engine, but let us assume that it is essential, as the government suggests.

Then the question whether Google’s contracts are anticompetitive turns on how much of the default search input Google’s contracts foreclose to rival search engines. If a lot, then the rivals are badly harmed. If a little, then there may be no harm at all.

The answer here is that there is a lot of foreclosure, at least if the government’s complaint is to be believed. Through its contracts with Apple and makers of Android phones, Google has foreclosed default search status to rivals on virtually every single smartphone.

That leaves consumer harm. And here is where things get iffy.

Usage as a Product Improvement: A Very Convenient Argument

The inquiry into consumer harm evokes measurements of the difference between demand curves and price lines, or extrapolations of compensating and equivalent variation using indifference curves painstakingly pieced together based on the assumptions of revealed preference.

But while the parties may pay experts plenty to spin such yarns, and judges may pretend to listen to them, in the end, for the judges, it always comes down to one question only: did exclusive dealing improve the product?

If it did, then the judge assumes that the contracts made consumers better off and the defendant wins. And if it did not, then off with their heads.

So, does foreclosing all this default search space to competitors make Google search advertising more valuable to advertisers?

Those who leap to Google’s defense say yes, for default search status increases the number of people who use Google’s search engine. And the more people use Google’s search engine, the more Google learns about how best to answer search queries and which advertisements will most interest which searchers. And that ensures that even more people will use Google’s search engine, and that Google will do an even better job of targeting ads on its search engine.

And that in turn makes Google’s search advertising even better: able to reach more people and to target ads more effectively to them.

None of that would happen if defaults were set to other engines and users spurned Google, and so foreclosing default search space to rivals undoubtedly improves Google’s product.

This is a nice argument. Indeed, it is almost too nice, for it seems to suggest that almost anything Google might do to steer users away from competitors and to itself deserves antitrust immunity. Suppose Google were to brandish arms to induce you to run your next search on Google. That would be a crime, but, on this account, not an antitrust crime. For getting you to use Google does make Google better.

The argument that locking up users improves the product is of potential use not just to Google but to any of the many tech companies that run on advertising—Facebook being a notable example—so it potentially immunizes an entire business model from antitrust scrutiny.

It turns out that has happened before.

Economies of Scale as a Product Improvement: Once a Convenient Argument

Once upon a time, antitrust exempted another kind of business for which products improve the more people used them. The business was industrial production, and it differs from online advertising only in the irrelevant characteristic that the improvement that comes with expanding use is not in the quality of the product but in the cost per unit of producing it.

The hallmark of the industrial enterprise is high fixed costs and low marginal costs. The textile mill differs from pre-industrial piecework weaving in that once a $10 million investment in machinery has been made, the mill can churn out yard after yard of cloth for pennies. The pieceworker, by contrast, makes a relatively small up-front investment—the cost of raising up the hovel in which she labors and making her few tools—but spends the same large amount of time to produce each new yard of cloth.

Large fixed costs and low marginal costs lie at the heart of the bounty of the modern age: the more you produce, the lower the unit cost, and so the lower the price at which you can sell your product. This is a recipe for plenty.

But it also means that, so long as consumer demand in a given market is lower than the capacity of any particular plant, driving buyers to a particular seller and away from competitors always improves the product, in the sense that it enables the firm to increase volume and reduce unit cost, and therefore to sell the product at a lower price.

If the promise of the modern age is goods at low prices, then the implication is that antitrust should never punish firms for driving rivals from the market and taking over their customers. Indeed, efficiency requires that only one firm should ever produce in any given market, at least in any market for which a single plant is capable of serving all customers.

For antitrust in the late 19th and early 20th centuries, beguiled by this advantage to size, exclusive dealing, refusals to deal, even the knife in a competitor’s back: whether these ran afoul of other areas of law or not, it was all for the better because it allowed industrial enterprises to achieve economies of scale.

It is no accident that, a few notable triumphs aside, antitrust did not come into its own until the mid-1930s, 40 years after its inception, on the heels of an intellectual revolution that explained, for the first time, why it might actually be better for consumers to have more than one seller in a market.

The Monopolistic Competition Revolution

The revolution came in the form of the theory of monopolistic competition and its cousin, the theory of creative destruction, developed between the 1920s and 1940s by Edward Chamberlin, Joan Robinson and Joseph Schumpeter.

These theories suggested that consumers might care as much about product quality as they do about product cost, and indeed would be willing to abandon a low-cost product for a higher-quality, albeit more expensive, one.

From this perspective, the world of economies of scale and monopoly production was the drab world of Soviet state-owned enterprises churning out one type of shoe, one brand of cleaning detergent, and so on.

The world of capitalism and technological advance, by contrast, was one in which numerous firms produced batches of differentiated products in amounts sometimes too small fully to realize all scale economies, but for which consumers were nevertheless willing to pay because the products better fit their preferences.

What is more, the striving of monopolistically competitive firms to lure away each other’s customers with products that better fit their tastes led to disruptive innovation— “creative destruction” was Schumpeter’s famous term for it—that brought about not just different flavors of the same basic concept but entirely new concepts. The competition to create a better flip phone, for example, would lead inevitably to a whole new paradigm, the smartphone.

This reasoning combined with work in the 1940s and 1950s on economic growth that quantified for the first time the key role played by technological change in the vigor of capitalist economies—the famous Solow residual—to suggest that product improvements, and not the cost reductions that come from capital accumulation and their associated economies of scale, create the lion’s share of consumer welfare. Innovation, not scale, was king.

Antitrust responded by, for the first time in its history, deciding between kinds of product improvements, rather than just in favor of improvements, casting economies of scale out of the category of improvements subject to antitrust immunity, while keeping quality improvements immune.

Casting economies of scale out of the protected product improvement category gave antitrust something to do for the first time. It meant that big firms had to plead more than just the cost advantages of being big in order to obtain license to push their rivals around. And government could now start reliably to win cases, rather than just the odd cause célèbre.

It is this intellectual watershed, and not Thurman Arnold’s tenacity, that was responsible for antitrust’s emergence as a force after World War Two.

Usage-Based Improvements Are Not Like Economies of Scale

The improvements in advertising that come from user growth fall squarely on the quality side of the ledger—the value they create is not due to the ability to average production costs over more ad buyers—and so they count as the kind of product improvements that antitrust continues to immunize today.

But given the pervasiveness of this mode of product improvement in the tech economy—the fact that virtually any tech firm that sells advertising can claim to be improving a product by driving users to itself and away from competitors—it is worth asking whether we have not reached a new stage in economic development in which this form of product improvement ought, like economies of scale, to be denied protection.

Shouldn’t the courts demand more and better innovation of big tech firms than just the same old big-data-driven improvements they serve up year after year?

Galling as it may be to those who, like myself, would like to see more vigorous antitrust enforcement in general, the answer would seem to be “no.” For what induced the courts to abandon antitrust immunity for economies of scale in the mid-20th century was not the mere fact that immunizing economies of scale paralyzed antitrust. Smashing big firms is not, after all, an end in itself.

Instead, monopolistic competition, creative destruction and the Solow residual induced the change, because they suggested both that other kinds of product improvement are more important than economies of scale and, crucially, that protecting economies of scale impedes development of those other kinds of improvements.

A big firm that excludes competitors in order to reach scale economies not only excludes competitors who might have produced an identical or near-identical product, but also excludes competitors who might have produced a better-quality product, one that consumers would have preferred to purchase even at a higher price.

To cast usage-based improvements out of the product improvement fold, a case must be made that excluding competitors in order to pursue such improvements will block a different kind of product improvement that contributes even more to consumer welfare.

If we could say, for example, that suppressing search competitors suppresses more-innovative search engines that ad buyers would prefer, even if those innovative search engines were to lack the advantages that come from having a large user base, then a case might be made that user growth should no longer count as a product improvement immune from antitrust scrutiny.

And even then, the case against usage-based improvements would need to be general enough to justify an epochal change in policy, rather than be limited to a particular technology in a particular lawsuit. For the courts hate to balance in individual cases, statements to the contrary in their published opinions notwithstanding.

But there is nothing in the Google complaint, much less the literature, to suggest that usage-based improvements are problematic in this way. Indeed, much of the value created by the information revolution seems to inhere precisely in its ability to centralize usage.

Americans Keep Voting to Centralize the Internet

In the early days of the internet, theorists mistook its decentralized architecture for a feature, rather than a bug. But internet users have since shown, time and again, that they believe the opposite.

For example, the basic protocols governing email were engineered to allow every American to run his own personal email server.

But Americans hated the freedom that created—not least the spam—and opted instead to get their email from a single server: the one run by Google as Gmail.

The basic protocols governing web traffic were also designed to allow every American to run whatever other communications services he wished—chat, video chat, RSS, webpages—on his own private server in distributed fashion.

But Americans hated the freedom that created—not least having to build and rebuild friend networks across platforms—and they voted instead overwhelmingly to get their social media from a single server: Facebook.

Indeed, the basic protocols governing internet traffic were designed to allow every business to store and share its own data from its own computers, in whatever form.

But American businesses hated that freedom—not least the cost of having to buy and service their own data storage machines—and instead 40% of the internet is now stored and served from Amazon Web Services.

Similarly, advertisers have the option of placing advertisements on the myriad independently-run websites that make up the internet—known in the business as the “open web”—by placing orders through competitive ad exchanges. But advertisers have instead voted mostly to place ads on the handful of highly centralized platforms known as “walled gardens,” including Facebook, Google’s YouTube and, of course, Google Search.

The communications revolution, they say, is all about “bringing people together.” It turns out that’s true.

And that Google should win on consumer harm.

Remember the Telephone

Indeed, the same mid-20th century antitrust that thought so little of economies of scale as a defense immunized usage-based improvements when it encountered them in that most important of internet precursors: the telephone.

The telephone, like most internet services, gets better as usage increases. The more people are on a particular telephone network, the more valuable the network becomes to subscribers.

Just as with today’s internet services, the advantage of a large user base drove centralization of telephone services a century ago into the hands of a single firm: AT&T. Aside from a few business executives who liked the look of a desk full of handsets, consumers wanted one phone line that they could use to call everyone.

Although the government came close to breaking AT&T up in the early 20th century, the government eventually backed off, because a phone system in which you must subscribe to the right carrier to reach a friend just doesn’t make sense.

Instead, Congress and state legislatures stepped in to take the edge off monopoly by regulating phone pricing. And when antitrust finally did break AT&T up in 1982, it did so in a distinctly regulatory fashion, requiring that AT&T’s parts connect each other’s phone calls, something that Congress reinforced in the Telecommunications Act of 1996.

The message was clear: the sort of usage-based improvements one finds in communications are real product improvements. And antitrust can only intervene if it has a way to preserve them.

The equivalent of interconnection in search, that the benefits of usage, in the form of data and attention, be shared among competing search providers, might be feasible. But it is hard to imagine the court in the Google case ordering interconnection without the benefit of decades of regulatory experience with the defendant’s operations that the district court in 1982 could draw upon in the AT&T case.

The solution for the tech giants today is the same as the solution for AT&T a century ago: to regulate rather than to antitrust.

Microsoft Not to the Contrary, Because Users Were in Common

Parallels to the government’s 1990s-era antitrust case against Microsoft are not to the contrary.

As Sam Weinstein has pointed out to me, Microsoft, like Google, was at heart an exclusive dealing case: Microsoft contracted with computer manufacturers to prevent Netscape Navigator, an early web browser, from serving as the default web browser on Windows PCs.

That prevented Netscape, the argument went, from growing to compete with Windows in the operating system market, much the way Google’s Chrome browser has become a substitute for Windows on low-end notebook computers today.

The D.C. Circuit agreed that default status was an essential input for Netscape as it sought eventually to compete with Windows in the operating system market.

The court also accepted the argument that the exclusive dealing did not improve Microsoft’s operating system product.

This at first seems to contradict the notion that usage improves products, for, like search advertising, operating systems get better as their user bases increase. The more people use an operating system, the more application developers are willing to write for the system, and the better the system therefore becomes.

It seems to follow that keeping competitors off competing operating systems and on Windows made Windows better. If the court nevertheless held Microsoft liable, it must be because the court refused to extend antitrust immunity to usage-based improvements.

The trouble with this line of argument is that it ignores the peculiar thing about the Microsoft case: that while the government alleged that Netscape was a potential competitor of Windows, Netscape was also an application that ran on Windows.

That means that, unlike Google and rival search engines, Windows and Netscape shared users.

So, Microsoft’s exclusive dealing did not increase its user base and therefore could not have improved Windows, at least not by making Windows more appealing for applications developers. Driving Netscape from Windows did not enable developers to reach even one more user. Conversely, allowing Netscape to be the default browser on Windows would not have reduced the number of Windows users, because Netscape ran on Windows.

By contrast, a user who runs a search in Bing does not run the same search simultaneously in Google, and so Bing users are not Google users. Google’s exclusive dealing therefore increases its user base and improves Google’s product, whereas Microsoft’s exclusive dealing served only to reduce Netscape’s user base and degrade Netscape’s product.

Indeed, if letting Netscape be the default browser on Windows was a threat to Windows, it was not because it prevented Microsoft from improving its product, but because Netscape might eventually have become an operating system, and indeed a better operating system, than Windows, and consumers and developers, who could be on both at the same time if they wished, might have nevertheless chosen eventually to go with Netscape alone.

Though it does not help the government in the Google case, Microsoft still does offer a beacon of hope for those concerned about size, for Microsoft’s subsequent history reminds us that yesterday’s behemoth is often today’s also-ran.

And the favorable settlement terms Microsoft ultimately used to escape real consequences for its conduct 20 years ago imply that, at least in high-tech markets, we don’t always need antitrust for that to be true.

Rolled by Rewheel, Redux

Eric Fruits —  15 December 2020

The Finnish consultancy Rewheel periodically issues reports using mobile wireless pricing information to make claims about which countries’ markets are competitive and which are not. For example, Rewheel claims Canada and Greece have the “least competitive monthly prices” while the United Kingdom and Finland have the most competitive.

Rewheel often claims that the number of carriers operating in a country is the key determinant of wireless pricing. 

Their pricing studies attract a great deal of attention. For example, in February 2019 testimony before the U.S. House Energy and Commerce Committee, Phillip Berenbroick of Public Knowledge asserted: “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.” So, what’s wrong with Rewheel? An earlier post highlights some of the flaws in Rewheel’s methodology. But there’s more.

Rewheel creates fictional market baskets of mobile plans for each provider in a country. Country-by-country comparisons are made by evaluating the lowest-priced basket for each country and the basket with the median price.

Rewheel’s market baskets are hypothetical packages that say nothing about which plans are actually chosen by consumers or what the actual prices paid by those consumers were. This is not a new criticism. In 2014, Pauline Affeldt and Rainer Nitsche called these measures “meaningless”:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr … Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

For example, reporting that the average price of a certain T-Mobile USA smartphone, tablet and home Internet plan is $125 is about as useless as knowing that the average price of a Kroger shopping cart containing a six-pack of Budweiser, a dozen eggs, and a pound of oranges is $10. Is Safeway less “competitive” if the price of the same cart of goods is $12? What could you say about pricing at a store that doesn’t sell Budweiser (e.g., Trader Joe’s)?

Rewheel solves that last problem by doing something bonkers. If a carrier doesn’t offer a plan that fits one of Rewheel’s baskets, Rewheel “assigns” that carrier the HIGHEST monthly price in the world.

For example, Rewheel notes that Vodafone India does not offer a fixed wireless broadband plan with at least 1,000GB of data and download speeds of 100 Mbps or faster. So, Rewheel “assigns” Vodafone India the highest price in its dataset. That price belongs to a plan that’s sold in the United Kingdom. It simply makes no sense. 

To return to the supermarket analogy, it would be akin to saying that, if a Trader Joe’s in the United States doesn’t sell six-packs of Budweiser, we should assume the price of Budweiser at Trader Joe’s is equal to the world’s most expensive six-pack of the beer. In reality, Trader Joe’s is known for having relatively low prices. But using the Rewheel approach, the store would be assessed to have some of the highest prices.

Because of Rewheel’s “assignment” of the highest monthly prices to many plans, it’s irrelevant whether their analysis is based on a country’s median price or lowest price. The median is skewed upward, and the lowest actual price may be missing from the dataset entirely.
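
A small, purely hypothetical example illustrates how much this “assignment” can distort a country’s median; all of the prices below are invented.

```python
import statistics

# Invented numbers: a country with four carriers, two of which offer a plan matching
# Rewheel's basket (EUR 20 and EUR 25 a month) and two of which offer no matching plan.
actual_prices = [20, 25]
world_max = 150  # most expensive matching plan anywhere in Rewheel's dataset (hypothetical)

# Rewheel's approach: the two carriers without a matching plan are "assigned" the world maximum.
assigned = actual_prices + [world_max, world_max]

print(statistics.median(actual_prices))  # 22.5 -> what subscribers in that country actually face
print(statistics.median(assigned))       # 87.5 -> the figure that enters the country comparison
```

The reported median nearly quadruples even though no subscriber in that country pays anything close to it.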

Rewheel publishes these reports to support its argument that mobile prices are lower in markets with four carriers than in those with three carriers. But even if we were to accept Rewheel’s price data as reliable (which we should not), their own data show no relationship between the number of carriers and average price.

A scatter plot of those data shows a huge overlap of observations among markets with three and four carriers.

Rewheel’s latest report provides a redacted dataset, reporting only data usage and weighted average price for each provider. So, we have to work with what we have. 

A simple regression analysis shows there is no statistically significant difference in the intercept or the slopes for markets with three, four or five carriers (the default is three carriers in the regression). Based on the data Rewheel provides to the public, the number of carriers in a country has no relationship to wireless prices.
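
For readers who want to replicate that exercise, a minimal sketch of such a regression is below. The file name and column names are placeholders for whatever layout one builds from Rewheel’s published tables; the specification simply interacts data usage with carrier-count dummies, using three-carrier markets as the baseline.

```python
# A minimal sketch of the kind of regression described above, run on the redacted
# Rewheel data (one row per operator: average data usage, weighted average price,
# and the number of carriers in the operator's market). The file and column names
# are placeholders for whatever layout one extracts from the published report.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rewheel_operators.csv")  # hypothetical extract of Rewheel's tables

# Price as a function of data usage, with separate intercepts and slopes for markets
# with four or five carriers relative to the three-carrier baseline.
model = smf.ols(
    "avg_price ~ avg_data_usage * C(carriers, Treatment(reference=3))", data=df
).fit()
print(model.summary())  # inspect the p-values on the carrier dummies and interactions
```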

Rewheel seems to have a rich dataset of pricing information that could be useful to inform policy. It’s a shame that their topline summaries seem designed to support a predetermined conclusion.

Congressman Buck’s “Third Way” report offers a compromise between the House Judiciary Committee’s majority report, which proposes sweeping new regulation of tech companies, and the status quo, which Buck argues is unfair and insufficient. But though Buck rejects many of the majority report’s proposals, what he proposes instead would lead to virtually the same outcome via a slightly longer process.

The most significant majority proposals that Buck rejects are the structural separation to prevent a company that runs a platform from operating on that platform “in competition with the firms dependent on its infrastructure”, and line-of-business restrictions that would confine tech companies to a small number of markets, to prevent them from preferencing their other products to the detriment of competitors.

Buck rules these out, saying that they are “regulatory in nature [and] invite unforeseen consequences and divert attention away from public interest antitrust enforcement by our antitrust agencies.” He goes on to say that “this proposal is a thinly veiled call to break up Big Tech firms.”

Instead, Buck endorses, either fully or provisionally, measures including revitalising the essential facilities doctrine, imposing data interoperability mandates on platforms, and changing antitrust law to prevent “monopoly leveraging and predatory pricing”. 

Put together, though, these would amount to the same thing that the Democratic majority report proposes: a world where platforms are basically just conduits, regulated to be neutral and open, and where the companies that run them require a regulator’s go-ahead for important decisions — a process that would be just as influenced by lobbying and political considerations, and as insulated from market price signals, as any other regulator’s decisions are.

Revitalizing the essential facilities doctrine

Buck describes proposals to “revitalize the essential facilities doctrine” as “common ground” that warrant further consideration. This would mean that platforms deemed to be “essential facilities” would be required to offer access to their platform to third parties at a “reasonable” price, except in exceptional circumstances. The presumption would be that these platforms were anticompetitively foreclosing third party developers and merchants by either denying them access to their platforms or by charging them “too high” prices. 

This would require the kind of regulatory oversight that Buck says he wants to avoid. He says that “conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules.” But there’s no way to avoid this when the “facility” — and hence its pricing and access rules — changes as frequently as any digital platform does. In practice, digital platforms would have to justify their pricing rules and decisions about exclusion of third parties to courts or a regulator as often as they make those decisions.

If Apple’s App Store were deemed an essential facility, such that it were presumed to be foreclosing third-party developers any time it rejected their submissions, it would have to submit to regulatory scrutiny of the “reasonableness” of its commercial decisions on, literally, a daily basis.

That would likely require price controls to prevent platforms from using pricing to de facto exclude third parties they did not want to deal with. Adjudication of “fair” pricing by courts is unlikely to be a sustainable solution. Justice Breyer, in Town of Concord v. Boston Edison Co., considered this to be outside the courts’ purview:

[H]ow is a judge or jury to determine a ‘fair price?’ Is it the price charged by other suppliers of the primary product? None exist. Is it the price that competition ‘would have set’ were the primary level not monopolized? How can the court determine this price without examining costs and demands, indeed without acting like a rate-setting regulatory agency, the rate-setting proceedings of which often last for several years? Further, how is the court to decide the proper size of the price ‘gap?’ Must it be large enough for all independent competing firms to make a ‘living profit,’ no matter how inefficient they may be? . . . And how should the court respond when costs or demands change over time, as they inevitably will?

In practice, infrastructure treated as an essential facility is usually subject to pricing control by a regulator. This has its own difficulties. The UK’s energy and water infrastructure is an example. In determining optimal access pricing, regulators must determine the price that weighs competing needs to maximise short-term output, incentivise investment by the infrastructure owner, incentivise innovation and entry by competitors (e.g., local energy grids) and, of course, avoid “excessive” pricing. 

This is a near-impossible task, and the process is often drawn out and subject to challenges even in markets where the infrastructure is relatively simple. It is even less likely that these considerations would be objectively tractable in digital markets.

Treating a service as an essential facility is based on the premise that, absent mandated access, it is impossible to compete with it. But mandating access does not, on its own, prevent it from extracting monopoly rents from consumers; it just means that other companies selling inputs can have their share of the rents. 

So you may end up with two different sets of price controls: on the consumer side, to determine how much monopoly rent can be extracted from consumers, and on the access side, to determine how the monopoly rents are divided.

The UK’s energy market has both, for example. In the case of something like an electricity network, where it may simply not be physically or economically feasible to construct a second, competing network, this might be the least-bad course of action. In such circumstances, consumer-side price regulation might make sense. 

But if a service could, in fact, be competed with by others, treating it as an essential facility may be affirmatively harmful to competition and consumers if it diverts investment and time away from that potential competitor by allowing other companies to acquire some of the incumbent’s rents themselves.

The HJC report assumes that Apple is a monopolist, because, among people who own iPhones, the App Store is the only way to install third-party software. Treating the App Store as an essential facility may mean a ban on Apple charging “excessive prices” to companies like Spotify or Epic that would like to use it, or on Apple blocking them for offering users alternative in-app ways of buying their services.

If it were impossible for users to switch from iPhones, or for app developers to earn revenue through other mechanisms, this logic might be sound. But it would still not change the fact that the App Store platform was able to charge users monopoly prices; it would just mean that Epic and Spotify could capture some of those monopoly rents for themselves. Nice for them, but not for consumers. And since both companies have already grown to be pretty big and profitable with the constraints they object to in place, it seems difficult to argue that they cannot compete with these in place and sounds more like they’d just like a bigger share of the pie.

And, in fact, it is possible to switch away from the iPhone to Android. I have personally switched back and forth several times over the past few years, for example. And so have many others — despite what some claim, it’s really not that hard, especially now that most important data is stored on cloud-based services, and both companies offer an app to switch from the other. Apple also does not act like a monopolist — its Bionic chips are vastly better than any competitor’s and it continues to invest in and develop them.

So in practice, users switching from iPhone to Android if Epic’s games and Spotify’s music are not available constrains Apple, to some extent. If Apple did drive those services permanently off their platform, it would make Android relatively more attractive, and some users would move away — Apple would bear some of the costs of its ecosystem becoming worse. 

Assuming away this kind of competition, as Buck and the majority report do, is implausible. Not only that, but Buck and the majority believe that competition in this market is impossible — no policy or antitrust action could change things, and all that’s left is to regulate the market like it’s an electricity grid. 

And it means that platforms could often face situations where they could not expect to make themselves profitable after building their markets, since they could not control the supply side in order to earn revenues. That would make it harder to build platforms, and weaken competition, especially competition faced by incumbents.

Mandating interoperability

Interoperability mandates, which Buck supports, require platforms to make their products open and interoperable with third party software. If Twitter were required to be interoperable, for example, it would have to provide a mechanism (probably a set of open APIs) by which third party software could tweet and read its feeds, upload photos, send and receive DMs, and so on. 
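
To make that slightly more concrete, below is a purely illustrative sketch of the sort of interface such a mandate might require a platform to expose. Every name in it is invented; it corresponds to no real Twitter (or other) API, and a real mandate would also have to specify authentication, rate limits, and versioning, which is where the design questions discussed next come in.

```python
# A purely illustrative sketch of the sort of API surface an interoperability mandate
# might require a platform to expose to third-party clients. Nothing here corresponds
# to any real Twitter (or other) API; every name and field below is invented.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    text: str

class OpenPlatformAPI(ABC):
    """Hypothetical interface a regulator might require a platform to implement."""

    @abstractmethod
    def publish(self, token: str, text: str) -> None:
        """Post on behalf of the user identified by the access token."""

    @abstractmethod
    def read_feed(self, token: str, limit: int = 50) -> List[Post]:
        """Return the user's home feed for display in a third-party client."""

    @abstractmethod
    def send_dm(self, token: str, recipient: str, text: str) -> None:
        """Send a direct message on the user's behalf."""
```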

Obviously, what interoperability actually involves differs from service to service, and involves decisions about design that are specific to each service. These variations are relevant because they mean interoperability requires discretionary regulation, including about product design, and can’t just be covered by a simple piece of legislation or a court order. 

To give an example: interoperability means a heightened security risk, perhaps from people unwittingly authorising a bad actor to access their private messages. How much is it appropriate to warn users about this, and how tight should your security controls be? It is probably excessive to require that users provide a sworn affidavit with witnesses, and even some written warnings about the risks may be so over the top as to scare off virtually any interested user. But some level of warning and user authentication is appropriate. So how much? 

Similarly, a company that has been required to offer its customers’ data through an API, but doesn’t really want to, can make life miserable for third party services that want to use it. Changing the API without warning, or letting its service drop or slow down, can break other services, and few users will be likely to want to use a third-party service that is unreliable. But some outages are inevitable, and some changes to the API and service are desirable. How do you decide how much?

These are not abstract examples. Open Banking in the UK, which requires interoperability of personal and small business current accounts, is the most developed example of interoperability in the world. It has been cited by former Chair of the Council of Economic Advisors, Jason Furman, among others, as a model for interoperability in tech. It has faced all of these questions: one bank, for instance, required that customers pass through twelve warning screens to approve a third party app to access their banking details.

To address problems like this, Open Banking has needed an “implementation entity” to design many of its most important elements. This is a de facto regulator, and it has taken years of difficult design decisions to arrive at Open Banking’s current form. 

Having helped write the UK’s industry review into Open Banking, I am cautiously optimistic about what it might be able to do for banking in Britain, not least because that market is already heavily regulated and lacking in competition. But it has been a huge undertaking, and has related to a relatively narrow set of data (its core is just two different things — the ability to read an account’s balance and transaction history, and the ability to initiate payments) in a sector that is not known for rapidly changing technology. Here, the costs of regulation may be outweighed by the benefits.

I am deeply sceptical that the same would be the case in most digital markets, where products do change rapidly, where new entrants frequently attempt to enter the market (and often succeed), where the security trade-offs are even more difficult to adjudicate, and where the economics are less straightforward, given that many services are provided at least in part because of the access to customer data they provide. 

Even if I am wrong, it is unavoidable that interoperability in digital markets would require an equivalent body to make and implement decisions when trade-offs are involved. This, again, would require a regulator like the UK’s implementation entity, and one that was enormous, given the number and diversity of services that it would have to oversee. And it would likely have to make important and difficult design decisions to which there is no clear answer. 

Banning self-preferencing

Buck’s Third Way would also ban digital platforms from self-preferencing. This typically involves an incumbent that can provide a good more cheaply than its third-party competitors — whether it’s through use of data that those third parties do not have access to, reputational advantages that mean customers will be more likely to use their products, or through scale efficiencies that allow it to provide goods to a larger customer base for a cheaper price. 

Although many people criticise self-preferencing as being unfair on competitors, “self-preferencing” is an inherent part of almost every business. When a company employs its own in-house accountants, cleaners or lawyers, instead of contracting out for them, it is engaged in internal self-preferencing. Any firm that is vertically integrated to any extent, instead of contracting externally for every single ancillary service other than the one it sells in the market, is self-preferencing. Coase’s theory of the firm is all about why this kind of behaviour happens, instead of every worker contracting on the open market for everything they do. His answer is that transaction costs make it cheaper to bring certain business relationships in-house than to contract externally for them. Virtually everyone agrees that this is desirable to some extent.

Nor does it somehow become a problem when the self-preferencing takes place on the consumer product side. Any firm that offers any bundle of products — like a smartphone that can run only the manufacturer’s operating system — is engaged in self-preferencing, because users cannot construct their own bundle with that company’s hardware and another’s operating system. But the efficiency benefits often outweigh the lack of choice.

Self-preferencing in digital platforms occurs, for example, when Google includes relevant Shopping or Maps results at the top of its general Search results, or when Amazon gives its own store-brand products (like the AmazonBasics range) a prominent place in the results listing.

There are good reasons to think that both of these are good for competition and consumer welfare. Google making Shopping results easily visible makes it a stronger competitor to Amazon, and including Maps results when you search for a restaurant just makes it more convenient to get the information you’re looking for.

Amazon sells its own private label products partially because doing so is profitable (even when undercutting rivals), partially to fill holes in product lines (like clothing, where 11% of listings were Amazon private label as of November 2018), and partially because it increases users’ likelihood to use Amazon if they expect to find a reliable product from a brand they trust. According to Amazon, they account for less than 1% of its annual retail sales, in contrast to the 19% of revenues ($54 billion) Amazon makes from third party seller services, which includes Marketplace commissions. Any analysis that ignores that Amazon has to balance those sources of revenue, and so has to tread carefully, is deficient. 

With “commodity” products (like, say, batteries and USB cables), where multiple sellers are offering very similar or identical versions of the same thing, private-label competition works well for both Amazon and consumers. By Amazon’s own rules it can enter this market using aggregated data, but this doesn’t give it a significant advantage, because that data is easily obtainable from multiple sources, including Amazon itself, which makes detailed aggregated sales data freely available to third-party retailers.

Amazon does profit from sales of these products, of course. And other merchants suffer by having to cut their prices to compete. That’s precisely what competition involves — competition is incompatible with a quiet life for businesses. But consumers benefit, and the biggest benefit to Amazon is that it assures its potential customers that when they visit they will be able to find a product that is cheap and reliable, so they keep coming back.

It is even hard to argue that in aggregate this practice is damaging to third-party sellers: many, like Anker, have built successful businesses on Amazon despite private-label competition precisely because the value of the platform increases for all parties as user trust and confidence in it does.

In these cases and in others, platforms act to solve market failures on the markets they host, as Andrei Hagiu has argued. To maximize profits, digital platforms need to strike a balance between being an attractive place for third-party merchants to sell their goods and being attractive to consumers by offering low prices. The latter will frequently clash with the former — and that’s the difficulty of managing a platform. 

To mistake this pro-competitive behaviour for an absence of competition is misguided. But that is a key conclusion of Buck’s Third Way: that the damage to competitors makes this behaviour harmful overall, and that it should be curtailed with “non-discrimination” rules. 

Treating below-cost selling as “predatory pricing”

Buck’s report equates below-cost selling with predatory pricing (“predatory pricing, also known as below-cost selling”). This is mistaken. Predatory pricing refers to a particular scenario where your price cut is temporary and designed to drive a competitor out of business, so that you can raise prices later and recoup your losses. 

It is easy to see that this does not describe the vast majority of below-cost selling. Buck’s formulation would describe all of the following as “predatory pricing”:

  • A restaurant that gives away ketchup for free;
  • An online retailer that offers free shipping and returns;
  • A grocery store that sells tins of beans for 3p a can. (This really happened when I was a child.)

The rationale for offering below-cost prices differs in each of these cases. Sometimes it’s a marketing ploy — Tesco sells those beans to get some free media, and to entice people into their stores, hoping they’ll decide to do the rest of their weekly shop there at the same time. Sometimes it’s about reducing frictions — the marginal cost of ketchup is so low that it’s simpler to just give it away. Sometimes it’s about reducing the fixed costs of transactions so more take place — allowing customers who buy your products to return them easily may mean more are willing to buy them overall, because there’s less risk for them if they don’t like what they buy. 

Obviously, none of these is “predatory”: none is done in the expectation that the below-cost selling will drive those businesses’ competitors out of business, allowing them to make monopoly profits later.

True predatory pricing is theoretically possible, but very difficult. As David Henderson describes, to successfully engage in predatory pricing means taking enormous and rising losses that grow for the “predatory” firm as customers switch to it from its competitor. And once the rival firm has exited the market, if the predatory firm raises prices above average cost (i.e., to recoup its losses), there is no guarantee that a new competitor will not enter the market selling at the previously competitive price. And the competing firm can either shut down temporarily or, in some cases, just buy up the “predatory” firm’s discounted goods to resell later. It is debatable whether the canonical predatory pricing case, Standard Oil, is itself even an example of that behaviour.

Offering a product below cost in a multi-sided market (like a digital platform) can be a way of building a customer base in order to incentivise entry on the other side of the market. When network effects exist, so additional users make the service more valuable to existing users, it can be worthwhile to subsidise the initial users until the service reaches a certain size. 

Uber subsidising drivers and riders in a new city is an example of this — riders want enough drivers on the road that they know they’ll be picked up fairly quickly if they order one, and drivers want enough riders that they know they’ll be able to earn a decent night’s fares if they use the app. This requires a certain volume of users on both sides — to get there, it can be in everyone’s interest for the platform to subsidise one or both sides of the market to reach that critical mass.

The slightly longer road to regulation

That is another reason for below-cost pricing: someone other than the user may be part-paying for a product, to build a market they hope to profit from later. Platforms must adjust pricing and their offerings to each side of their market to manage supply and demand. Epic, for example, is trying to build a desktop computer game store to rival the largest incumbent, Steam. To win over customers, it has been giving away games for free to users, who can own them on that store forever. 

That is clearly pro-competitive — Epic is hoping to get users over the habit of using Steam for all their games, in the hope that they will recoup the costs of doing so later in increased sales. And it is good for consumers to get free stuff. This kind of behaviour is very common. As well as Uber and Epic, smaller platforms do it too. 

Buck’s proposals would make this kind of behaviour much more difficult, and permitted only if a regulator or court allows it, instead of if the market can bear it. On both sides of the coin, Buck’s proposals would prevent platforms from the behaviour that allows them to grow in the first place — enticing suppliers and consumers and subsidising either side until critical mass has been reached that allows the platform to exist by itself, and the platform owner to recoup its investments. Fundamentally, both Buck and the majority take the existence of platforms as a given, ignoring the incentives to create new ones and compete with incumbents. 

In doing so, they give up on competition altogether. As described, Buck’s provisions would necessitate ongoing rule-making, including price controls, to work. It is unlikely that a court could do this, since the relevant costs would change too often for one-shot rule-making of the kind a court could do. To be effective at all, Buck’s proposals would require an extensive, active regulator, just as the majority report’s would. 

Buck nominally argues against this sort of outcome — “Conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules” — but it is probably unavoidable, given the changes he proposes. And because the rule changes he proposes would apply to the whole economy, not just tech, his proposals may, perversely, end up being even more extensive and interventionist than the majority’s.

Other than this, the differences in practice between Buck’s proposals and the Democrats’ proposals would be trivial. At best, Buck’s Third Way is just a longer route to the same destination.

In the latest congressional hearing, purportedly analyzing Google’s “stacking the deck” in the online advertising marketplace, much of the opening statement and questioning by Senator Mike Lee and later questioning by Senator Josh Hawley focused on an episode of alleged anti-conservative bias by Google in threatening to demonetize The Federalist, a conservative publisher, unless they exercised a greater degree of control over its comments section. The senators connected this to Google’s “dominance,” arguing that it is only because Google’s ad services are essential that Google can dictate terms to a conservative website. A similar impulse motivates Section 230 reform efforts as well: allegedly anti-conservative online platforms wield their dominance to censor conservative speech, either through deplatforming or demonetization.

Before even getting into the analysis of how to incorporate political bias into antitrust analysis, though, it should be noted that there likely is no viable antitrust remedy. Even aside from the Section 230 debate, online platforms like Google are First Amendment speakers who have editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment.

But even aside from the First Amendment aspect of this debate, there is no easy way to incorporate concerns about political bias into antitrust. Perhaps the best way to understand this argument in the antitrust sense is as a non-price effects analysis. 

Political bias could be seen by end consumers as an important aspect of product quality. Conservatives have made the case that not only Google, but also Facebook and Twitter, have discriminated against conservative voices. The argument would then follow that consumer welfare is harmed when these dominant platforms leverage their control of the social media marketplace into the marketplace of ideas by censoring voices with whom they disagree. 

While this has theoretical plausibility, there are real practical difficulties. As Geoffrey Manne and I have written previously, in the context of incorporating privacy into antitrust analysis:

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application. 

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist. 

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, it is more complex than privacy. All but the most exhibitionistic would prefer more to less privacy, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences need to come at the expense of another’s in moderation decisions.

Consider the case of The Federalist again. The allegation is that Google is imposing its anti-conservative bias by “forcing” the website to clean up its comments section. The argument is that since The Federalist needs Google’s advertising money, it must play by Google’s rules. And since it did so, there is now one less avenue for conservative speech.

What this argument misses is the balance Google and other online services must strike as multi-sided platforms. The goal is to connect advertisers on one side of the platform, to the users on the other. If a site wants to take advantage of the ad network, it seems inevitable that intermediaries like Google will need to create rules about what can and can’t be shown or they run the risk of losing advertisers who don’t want to be associated with certain speech or conduct. For instance, most companies don’t want to be associated with racist commentary. Thus, they will take great pains to make sure they don’t sponsor or place ads in venues associated with racism. Online platforms connecting advertisers to potential consumers must take that into consideration.

Users, like those who frequent The Federalist, have unpriced access to content across those sites and apps which are part of ad networks like Google’s. Other models, like paid subscriptions (which The Federalist also has available), are also possible. But it isn’t clear that conservative voices or conservative consumers have been harmed overall by the option of unpriced access on one side of the platform, with advertisers paying on the other side. If anything, it seems the opposite is the case since conservatives long complained about legacy media having a bias and lauded the Internet as an opportunity to gain a foothold in the marketplace of ideas.

Online platforms like Google must balance the interests of users from across the political spectrum. If their moderation practices are too politically biased in one direction or another, users could switch to another online platform with one click or swipe. Assuming online platforms wish to maximize revenue, they will have a strong incentive to limit political bias in their moderation practices. The ease of switching to another platform that markets itself as more free-speech-friendly, like Parler, shows entrepreneurs can take advantage of market opportunities if Google and other online platforms go too far with political bias. 

While one could perhaps argue that the major online platforms are colluding to keep out conservative voices, this is difficult to square with the different moderation practices each employs, as well as the data that suggest conservative voices are consistently among the most shared on Facebook.

Antitrust is not a cure-all law. Conservatives who normally understand this need to reconsider whether antitrust is really well-suited for litigating concerns about anti-conservative bias online. 

This week the Senate will hold a hearing into potential anticompetitive conduct by Google in its display advertising business—the “stack” of products that it offers to advertisers seeking to place display ads on third-party websites. It is also widely reported that the Department of Justice is preparing a lawsuit against Google that will likely include allegations of anticompetitive behavior in this market, and is likely to be joined by a number of state attorneys general in that lawsuit. Meanwhile, several papers have been published detailing these allegations.

This aspect of digital advertising can be incredibly complex and difficult to understand. Here we explain how display advertising fits in the broader digital advertising market, describe how display advertising works, consider the main allegations against Google, and explain why Google’s critics are misguided to focus on antitrust as a solution to alleged problems in the market (even if those allegations turn out to be correct).

Display advertising in context

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the Producer Price Index for Internet advertising sales declined by nearly 40%. Rising spending in the face of falling prices indicates that the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues is consistent with a growing and increasingly competitive market.
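
Those growth rates can be checked with simple arithmetic; the sketch below just compounds the figures cited above over the nine years from 2010 to 2019.

```python
# Back-of-the-envelope check of the figures cited above (2010 to 2019 is nine years of growth).
spend_2010, spend_2019 = 26e9, 130e9
spend_growth = (spend_2019 / spend_2010) ** (1 / 9) - 1        # roughly 20% a year
price_growth = (1 - 0.40) ** (1 / 9) - 1                        # PPI down ~40%, i.e. about -5.5% a year
quantity_growth = (1 + spend_growth) / (1 + price_growth) - 1   # roughly 27% a year
print(f"spending {spend_growth:.1%}, price {price_growth:.1%}, quantity {quantity_growth:.1%}")
```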

Display advertising on third-party websites is only a small subsection of the digital advertising market, comprising approximately 15-20% of digital advertising spending in the US. The rest of the digital advertising market is made up of ads on search results pages on sites like Google, Amazon and Kayak, on people’s Instagram and Facebook feeds, listings on sites like Zillow (for houses) or Craigslist, referral fees paid to price comparison websites for things like health insurance, audio and visual ads on services like Spotify and Hulu, and sponsored content from influencers and bloggers who will promote products to their fans. 

And digital advertising itself is only one of many channels through which companies can market their products. About 53% of total advertising spending in the United States goes on digital channels, with 30% going on TV advertising and the rest on things like radio ads, billboards and other more traditional forms of advertising. A few people still even read physical newspapers and the ads they contain, although physical newspapers’ bigger money makers have traditionally been classified ads, which have been replaced by less costly and more effective internet classifieds, such as those offered by Craigslist, or targeted ads on Google Maps or Facebook.

Indeed, it should be noted that advertising itself is only part of the larger marketing market of which non-advertising marketing communication—e.g., events, sales promotion, direct marketing, telemarketing, product placement—is as big a part as is advertising (each is roughly $500bn globally); it just hasn’t been as thoroughly disrupted by the Internet yet. But it is a mistake to assume that digital advertising is not a part of this broader market. And of that $1tr global market, Internet advertising in total occupies only about 18%—and thus display advertising only about 3%.

Ad placement is only one part of the cost of digital advertising. An advertiser trying to persuade people to buy its product must also do market research and analytics to find out who its target market is and what they want. Moreover, there are the costs of designing and managing a marketing campaign and additional costs to analyze and evaluate the effectiveness of the campaign. 

Nevertheless, one of the most straightforward ways to earn money from a website is to show ads to readers alongside the publisher’s content. To satisfy publishers’ demand for advertising revenues, many services have arisen to automate and simplify the placement of and payment for ad space on publishers’ websites. Google plays a large role in providing these services—what is referred to as “open display” advertising. And it is Google’s substantial role in this space that has sparked speculation and concern among antitrust watchdogs and enforcement authorities.

Before delving into the open display advertising market, a quick note about terms. In these discussions, “advertisers” are businesses that are trying to sell people stuff. Advertisers include large firms such as Best Buy and Disney and small businesses like the local plumber or financial adviser. “Publishers” are websites that carry those ads, and publish content that users want to read. Note that the term “publisher” refers to all websites regardless of the things they’re carrying: a blog about the best way to clean stains out of household appliances is a “publisher” just as much as the New York Times is. 

Under this broad definition, Facebook, Instagram, and YouTube are also considered publishers. In their role as publishers, they have a common goal: to provide content that attracts users to their pages who will act on the advertising displayed. “Users” are you and me—the people who want to read publishers’ content, and to whom advertisers want to show ads. Finally, “intermediaries” are the digital businesses, like Google, that sit in between the advertisers and the publishers, allowing them to do business with each other without ever meeting or speaking.

The display advertising market

If you’re an advertiser, display advertising works like this: your company—one that sells shoes, let’s say—wants to reach a certain kind of person and tell her about the company’s shoes. These shoes are comfortable, stylish, and inexpensive. You use a tool like Google Ads (or, if yours is a big company and you want a more expansive campaign over which you have more control, Google Marketing Platform) to design and upload an ad, and tell Google about the people you want to reach—their age and location, say, and/or characterizations of their past browsing and searching habits (“interested in sports”).

Using that information, Google finds ad space on websites whose audiences match the people you want to target. This ad space is auctioned off to the highest bidder among the range of companies vying, along with your shoe company, to reach users matching the characteristics of the website’s users. Thanks to tracking data, the ad space doesn’t have to be on sports-related websites: as a user browses sports-related sites on the web, her browser picks up files (cookies) that will tag her as someone potentially interested in sports apparel for targeting later.

So a user might look at a sports website and then later go to a recipe blog, and there receive the shoe ad on the basis of her earlier browsing. You, the shoe seller, hope that she will click through and buy (or at least consider buying) the shoes when she sees those ads. But one of the benefits of display advertising over search advertising is that—as with TV ads or billboard ads—just seeing the ad makes her aware of the product and potentially more likely to buy it later. Advertisers thus sometimes pay on the basis of clicks, sometimes on the basis of views, and sometimes on the basis of conversions (when a consumer takes an action of some sort, such as making a purchase or filling out a form).
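
These payment bases differ mainly in who bears the risk that an impression never turns into a sale. Here is a rough sketch of how an advertiser might compare them; all rates and prices are invented for illustration, and CPM, CPC, and CPA are shorthand for cost per thousand impressions, per click, and per action:

# Comparing the three common payment bases on invented campaign numbers.
# CPM = cost per thousand impressions, CPC = cost per click,
# CPA = cost per action (e.g. a completed purchase).

impressions = 100_000
click_rate = 0.01          # 1% of viewers click (assumed)
conversion_rate = 0.05     # 5% of clickers buy (assumed)

cpm, cpc, cpa = 2.00, 0.25, 6.00   # illustrative prices in dollars

cost_per_sale_cpm = (impressions / 1000 * cpm) / (impressions * click_rate * conversion_rate)
cost_per_sale_cpc = cpc / conversion_rate
cost_per_sale_cpa = cpa

print(f"Effective cost per sale -- CPM: ${cost_per_sale_cpm:.2f}, "
      f"CPC: ${cost_per_sale_cpc:.2f}, CPA: ${cost_per_sale_cpa:.2f}")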

That’s the advertiser’s perspective. From the publisher’s perspective—the owner of that recipe blog, let’s say—you want to auction off ad space to advertisers like that shoe company. In that case, you go to an ad server—Google’s product is called AdSense—give it a little bit of information about your site, and add some HTML code to your website. These ad servers gather information about your content (e.g., by looking at keywords you use) and your readers (e.g., by looking at what websites they’ve used in the past to make guesses about what they’ll be interested in) and place relevant ads next to and among your content. If a reader clicks on one, lucky you—you’ll get paid a few cents or dollars.

Apart from privacy concerns about the tracking of users, the really tricky and controversial part here concerns the way scarce advertising space is allocated. Most of the time, it’s done through auctions that happen in real time: each time a user loads a website, an auction is held in a fraction of a second to decide which advertiser gets to display an ad. The longer this process takes, the slower pages load and the more likely users are to get frustrated and go somewhere else.
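
To make that fraction-of-a-second process concrete, here is a heavily simplified sketch of a real-time auction. The bidder names, interest tags, and the second-price payment rule are our own illustrative assumptions, not a description of any particular exchange’s actual logic:

# A toy real-time bidding auction: each advertiser bids based on how well
# the user's inferred interests match its target audience; the highest
# bidder wins and (in this sketch) pays the second-highest bid.

from dataclasses import dataclass

@dataclass
class Bidder:
    name: str
    target_interests: set
    max_bid: float  # dollars per thousand impressions (CPM)

def run_auction(user_interests: set, bidders: list) -> tuple:
    bids = []
    for b in bidders:
        overlap = len(user_interests & b.target_interests)
        if overlap:
            # Bid more aggressively the better the user matches the target.
            bids.append((b.max_bid * overlap / len(b.target_interests), b.name))
    if not bids:
        return None, 0.0
    bids.sort(reverse=True)
    winner = bids[0][1]
    price = bids[1][0] if len(bids) > 1 else bids[0][0]  # second-price rule
    return winner, price

user = {"sports", "running"}  # inferred from cookies picked up on earlier sites
bidders = [
    Bidder("shoe_co", {"sports", "running", "fitness"}, max_bid=4.00),
    Bidder("cookware_co", {"recipes", "kitchen"}, max_bid=3.50),
    Bidder("gym_chain", {"fitness", "sports"}, max_bid=2.00),
]
print(run_auction(user, bidders))  # ('shoe_co', 1.0): best match wins, pays second price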

As well as the service hosting the auction, there are lots of little functions that different companies perform that make the auction and placement process smoother. Some fear that by offering a very popular product integrated end to end, Google’s “stack” of advertising products can bias auctions in favour of its own products. There’s also speculation that Google’s product is so tightly integrated and so effective at using data to match users and advertisers that it is not viable for smaller rivals to compete.

We’ll discuss this speculation and fear in more detail below. But it’s worth bearing in mind that this kind of real-time bidding for ad placement was not always the norm, and is not the only way that websites display ads to their users even today. Big advertisers and websites often deal with each other directly. As with, say, TV advertising, large advertisers often have a good idea about the people they want to reach. And big publishers (like popular news websites) often have a good idea about who their readers are. For example, big brands often want to push a message to a large number of people across different customer types as part of a broader ad campaign.

In these kinds of direct sales, the space is sometimes bought outright, in advance, and reserved for particular advertisers. In most cases, direct sales are run through limited, intermediated auction services that are not open to the general market. Put together, these kinds of direct ad buys account for close to 70% of total US display advertising spending. The remainder—the stuff that’s left over after these kinds of sales have been done—is typically sold through the real-time, open display auctions described above.

Different adtech products compete on their ability to target customers effectively, to serve ads quickly (since any delay in the auction and ad placement process slows down page load times for users), and to do so inexpensively. All else equal (including the effectiveness of the ad placement), advertisers want to pay the lowest possible price to place an ad. Similarly, publishers want to receive the highest possible price to display an ad. As a result, both advertisers and publishers have a keen interest in reducing the intermediary’s “take” of the ad spending.

This is all a simplification of how the market works. There is not one single auction house for ad space—in practice, many advertisers and publishers end up having to use lots of different auctions to find the best price. As the market evolved to reach this state from the early days of direct ad buys, new functions that added efficiency to the market emerged. 

In the early years of ad display auctions, individual processes in the stack were performed by numerous competing companies. Through a process of “vertical integration” some companies, such as Google, brought these different processes under the same roof, with the expectation that integration would streamline the stack and make the selling and placement of ads more efficient and effective. The process of vertical integration in pursuit of efficiency has led to a more consolidated market in which Google is the largest player, offering simple, integrated ad buying products to advertisers and ad selling products to publishers. 

Google is by no means the only integrated adtech service provider, however: Facebook, Amazon, Verizon, AT&T/Xandr, theTradeDesk, LumenAd, Taboola and others also provide end-to-end adtech services. But, in the market for open auction placement on third-party websites, Google is the biggest.

The cases against Google

The UK’s Competition and Markets Authority (CMA) carried out a formal study into the digital advertising market between 2019 and 2020, issuing its final report in July of this year. Although also encompassing Google’s Search advertising business and Facebook’s display advertising business (both of which relate to ads on those companies’ “owned and operated” websites and apps), the CMA study involved the most detailed independent review of Google’s open display advertising business to date.

That study did not lead to any competition enforcement proceedings, but it did conclude that Google’s vertically integrated products created conflicts of interest that could lead it to behave in ways that do not benefit the advertisers and publishers that use them. One example was Google’s withholding of certain data from publishers that would make it easier for them to use other ad selling products; another was the practice of setting price floors that allegedly led advertisers to pay more than they otherwise would.

Instead, the CMA recommended setting up a “Digital Markets Unit” (DMU) that could regulate digital markets in general, along with a code of conduct for Google and Facebook (and perhaps other large tech platforms) intended to govern their dealings with smaller customers.

The CMA’s analysis is flawed, however. For instance, it makes big assumptions about advertisers’ dependence on display advertising, largely assuming that they would not switch to other forms of advertising if prices rose, and it is light on economics. But as a factual matter it is the most comprehensively researched investigation into digital advertising yet published.

Piggybacking on the CMA’s research, and mounting perhaps the strongest attack on Google’s adtech offerings to date, was a paper released just prior to the CMA’s final report, “Roadmap for a Digital Advertising Monopolization Case Against Google,” by Yale economist Fiona Scott Morton and Omidyar Network lawyer David Dinielli. Dinielli will testify before the Senate committee.

While the Scott Morton and Dinielli paper is extremely broad, it also suffers from a number of problems. 

For one, because it was released before the CMA’s final report, it is largely based on the CMA’s interim report, published in December 2019, halfway through the market study. This means that several of its claims are out of date. For example, it makes much of the possibility raised by the CMA in its interim report that Google may take a larger cut of advertising spending than its competitors, and of claims made in another report that Google introduces “hidden” fees that increase the overall cut it takes from ad auctions.

But in the final report, after further investigation, the CMA concludes that this is not the case. In the final report, the CMA describes its analysis of all Google Ad Manager open auctions related to UK web traffic during the period between 8–14 March 2020 (involving billions of auctions). This, according to the CMA, allowed it to observe any possible “hidden” fees as well. The CMA concludes:

Our analysis found that, in transactions where both Google Ads and Ad Manager (AdX) are used, Google’s overall take rate is approximately 30% of advertisers’ spend. This is broadly in line with (or slightly lower than) our aggregate market-wide fee estimate outlined above. We also calculated the margin between the winning bid and the second highest bid in AdX for Google and non-Google DSPs, to test whether Google was systematically able to win with a lower margin over the second highest bid (which might have indicated that they were able to use their data advantage to extract additional hidden fees). We found that Google’s average winning margin was similar to that of non-Google DSPs. Overall, this evidence does not indicate that Google is currently extracting significant hidden fees. As noted below, however, it retains the ability and incentive to do so. (p. 275, emphasis added)
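
To unpack the two metrics in that passage, the sketch below works through a “take rate” and a “winning margin” calculation on entirely fabricated auction records; it mirrors the form of the CMA’s analysis, not its data:

# Illustrating the CMA's two metrics on made-up auction records.
# "Take rate" is the share of advertiser spend not passed through to the
# publisher; "winning margin" is the winning bid minus the second-highest
# bid, compared across Google and non-Google demand-side platforms (DSPs).
# In this toy example the advertiser is assumed to pay its winning bid.

auctions = [
    # (winning DSP, winning bid, second-highest bid, paid to publisher)
    ("google",     2.60, 2.10, 1.80),
    ("non_google", 2.40, 2.05, 1.70),
    ("google",     1.90, 1.60, 1.35),
    ("non_google", 3.10, 2.70, 2.15),
]

total_spend = sum(win for _, win, _, _ in auctions)
total_paid_out = sum(paid for _, _, _, paid in auctions)
print(f"Overall take rate: {1 - total_paid_out / total_spend:.0%}")  # 30% on these numbers

for dsp in ("google", "non_google"):
    margins = [win - second for d, win, second, _ in auctions if d == dsp]
    print(f"{dsp} average winning margin: {sum(margins) / len(margins):.2f}")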

Scott Morton and Dinielli also misquote and/or misunderstand important sections of the CMA interim report as relating to display advertising when, in fact, they relate to search. For example, Scott Morton and Dinielli write that the “CMA concluded that Google has nearly insurmountable advantages in access to location data, due to the location information [uniquely available to it from other sources].” (p. 15). The CMA never makes any claim of “insurmountable advantage,” however. Rather, to support the claim, Scott Morton and Dinielli cite to a portion of the CMA interim report recounting a suggestion made by Microsoft regarding the “critical” value of location data in providing relevant advertising. 

But that portion of the report, as well as the suggestion made by Microsoft, is about search advertising. While location data may also be valuable for display advertising, it is not clear that the GPS-level data that is so valuable in providing mobile search ad listings (for a nearby cafe or restaurant, say) is particularly useful for display advertising, which may be just as well-targeted by less granular, city- or county-level location data, which is readily available from a number of sources. In any case, Scott Morton and Dinielli are simply wrong to use a suggestion offered by Microsoft relating to search advertising to demonstrate the veracity of an assertion about a conclusion drawn by the CMA regarding display advertising. 

Scott Morton and Dinielli also confusingly word their own judgements about Google’s conduct in ways that could be misinterpreted as conclusions by the CMA:

The CMA reports that Google has implemented an anticompetitive sales strategy on the publisher ad server end of the intermediation chain. Specifically, after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. (p. 20)

In fact, the CMA does not conclude that Google lowering its prices was an “anticompetitive sales strategy”—it does not use these words at all—and what Scott Morton and Dinielli are referring to is a claim by a rival ad server business, Smart, that Google cutting its prices after acquiring DoubleClick led to Google expanding its market share. Apart from the misleading wording, it is unclear why a competition authority should consider it to be “anticompetitive” when prices are falling and kept low, and—as Smart reported to the CMA—its competitor’s response is to enhance its own offering.

The case that remains

Stripping away the elements of Scott Morton and Dinielli’s case that seem unsubstantiated by a more careful reading of the CMA reports, and with the benefit of the findings in the CMA’s final report, we are left with a case arguing that Google self-preferences to an unreasonable extent: its display advertising product is as successful as it is only because of Google’s unique ability to gain advantage from its other products, which have little to do with display advertising. Because of this self-preferencing, they might argue, innovative new entrants cannot compete on an equal footing, and the market loses out on incremental competition because of the advantages Google gets from being the world’s biggest search company, owning YouTube, running Google Maps and Google Cloud, and so on.

The most significant examples of this are Google’s use of data from other products—like location data from Maps or viewing history from YouTube—to target ads more effectively; its ability to enable advertisers placing search ads to easily place display ads through the same interface; its introduction of faster and more efficient auction processes that sidestep the existing tools developed by other third-party ad exchanges; and its design of its own tool (“open bidding”) for aggregating auction bids for advertising space to compete with (rather than incorporate) an alternative tool (“header bidding”) that is arguably faster, but costs more money to use.

These allegations require detailed consideration, and in a future paper we will attempt to assess them in detail. But in thinking about them now, it may be useful to consider the remedies that could be imposed to address them, assuming they do diminish rivals’ ability to compete with Google: what interventions could be made so that the market works better for advertisers, publishers, and users.

We can think of remedies as falling into two broad buckets: remedies that stop Google from doing things that improve the quality of its own offerings, thus making it harder for others to keep up; and remedies that require it to help rivals improve their products in ways otherwise accessible only to Google (e.g., by making Google’s products interoperable with third-party services) without inherently diminishing the quality of Google’s own products.

The first camp of these, what we might call “status quo minus,” includes rules banning Google from using data from its other products or offering single order forms for advertisers, or, in the extreme, a structural remedy that “breaks up” Google by either forcing it to sell off its display ad business altogether or to sell off elements of it. 

What is striking about these kinds of interventions is that all of them “work” by making Google worse for those that use it. Restrictions on Google’s ability to use data from other products, for example, will make its service more expensive and less effective for those who use it. Ads will be less well-targeted and therefore less effective. This will lead to lower bids from advertisers. Lower ad prices will be transmitted through the auction process to produce lower payments for publishers. Reduced publisher revenues will mean some content providers exit. Users will thus be confronted with less available content and ads that are less relevant to them and thus, presumably, more annoying. In other words: No one will be better off, and most likely everyone will be worse off.

The reason a “single order form” helps Google is that it is useful to advertisers, the same way it’s useful to be able to buy all your groceries at one store instead of lots of different ones. Similarly, vertical integration in the “ad stack” allows for a faster, cheaper, and simpler product for users on all sides of the market. A different kind of integration that has been criticized by others, where third-party intermediaries can bid more quickly if they host on Google Cloud, benefits publishers and users because it speeds up auction time, allowing websites to load faster. So does Google’s unified alternative to “header bidding,” giving a speed boost that is apparently valuable enough to publishers that they will pay for it.

So who would benefit from stopping Google from doing these things, or even forcing Google to sell its operations in this area? Not advertisers or publishers. Maybe Google’s rival ad intermediaries would; presumably, artificially hamstringing Google’s products would make it easier for them to compete with Google. But if so, it’s difficult to see how this would be an overall improvement. It is even harder to see how this would improve the competitive process—the very goal of antitrust. Rather, any increase in the competitiveness of rivals would result not from making their products better, but from making Google’s product worse. That is a weakening of competition, not its promotion. 

On the other hand, interventions that aim to make Google’s products more interoperable at least do not fall prey to this problem. Such “status quo plus” interventions would aim to take the benefits of Google’s products and innovations and allow more companies to use them to improve their own competing products. Not surprisingly, such interventions would be more in line with the conclusions the CMA came to than the divestitures and operating restrictions proposed by Scott Morton and Dinielli, as well as (reportedly) state attorneys general considering a case against Google.

But mandated interoperability raises a host of different concerns: extensive and uncertain rulemaking, ongoing regulatory oversight, and, likely, price controls, all of which would limit Google’s ability to experiment with and improve its products. The history of such mandated duties to deal or compulsory licenses is a troubled one, at best. But even if, for the sake of argument, we concluded that these kinds of remedies were desirable, they are difficult to impose via an antitrust lawsuit of the kind that the Department of Justice is expected to launch. Most importantly, if the conclusion of Google’s critics is that Google’s main offense is offering a product that is just too good to compete with without regulating it like a utility, with all the costs to innovation that that would entail, maybe we ought to think twice about whether an antitrust intervention is really worth it at all.

We’re delighted to welcome Jonathan M. Barnett as our newest blogger at Truth on the Market.

Jonathan Barnett is director of the USC Gould School of Law Media, Entertainment and Technology Law Program. Barnett specializes in intellectual property, contracts, antitrust, and corporate law. He has published in the Harvard Law Review, Yale Law Journal, Journal of Legal Studies, Review of Law & Economics, Journal of Corporation Law and other scholarly journals.

He joined USC Law in fall 2006 and was a visiting professor at New York University School of Law in fall 2010. Prior to academia, Barnett practiced corporate law as a senior associate at Cleary Gottlieb Steen & Hamilton in New York, specializing in private equity and mergers and acquisitions transactions. He was also a visiting assistant professor at Fordham University School of Law in New York. A magna cum laude graduate of the University of Pennsylvania, Barnett received an MPhil from Cambridge University and a JD from Yale Law School.

You can find his scholarship at SSRN.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The questions of how to regulate privacy and what role competition authorities should play in doing so are only likely to increase in importance as the Internet marketplace continues to grow and evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have even encouraged them to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, product quality can invariably be measured on more than one dimension. For instance, product quality could include both function and aesthetics: a watch’s quality lies both in its ability to tell time and in how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve a watch’s aesthetics but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such data would charge lower prices to those able to pay more while charging higher prices to those least able to afford them. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.
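
A simple numerical sketch, using entirely made-up willingness-to-pay figures and costs, illustrates the output-expansion point:

# Toy illustration of price discrimination expanding output.
# The willingness-to-pay (WTP) figures and cost below are invented.

wtp = [10, 10, 10, 4, 4, 4]   # the most each of six consumers would pay
cost = 3                       # marginal cost of serving one consumer

def uniform_outcome(price):
    buyers = [w for w in wtp if w >= price]
    return len(buyers), len(buyers) * (price - cost)

# With one uniform price, the seller does best charging $10:
print(uniform_outcome(10))   # (3, 21): only the high-WTP group is served
print(uniform_outcome(4))    # (6, 6):  everyone served, but profit is lower

# With data-enabled segmented pricing, the seller charges $10 to the first
# group and $4 to the second, serving all six consumers profitably:
segmented_profit = 3 * (10 - cost) + 3 * (4 - cost)
print(segmented_profit)      # 24: higher profit, and three previously
                             # priced-out consumers now get the product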

If the group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to say that the practice reduces consumer welfare, even assuming consumer welfare can be divorced from total welfare. Again, the question is one of magnitudes, and it has yet to be considered in detail by privacy advocates.

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai. It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the nature of the contractual commitment that attaches to an SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology. This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues at the FTC and the Antitrust Division of the DOJ.

http://ssrn.com/abstract=2467939.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.