
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Seth L. Cooper is director of policy studies and a senior fellow at the Free State Foundation.]

During Chairman Ajit Pai’s tenure, the Federal Communications Commission adopted key reforms that improved the agency’s processes. No less important than process reform is process integrity. The commission’s L-Band Order and the process that produced it will be the focus here. In that proceeding, Chairman Pai led a careful and deliberative process that resulted in a clearly reasoned and substantively supportable decision to put unused valuable L-Band spectrum into commercial use for wireless services.

Thanks to one of Chairman Pai’s most successful process reforms, the FCC now publicly posts draft items to be voted on three weeks in advance of the commission’s public meetings. During his chairmanship, the commission adopted reforms to help expedite the regulatory-adjudication process by specifying deadlines and facilitating written administrative law judge (ALJ) decisions rather than in-person hearings. The “Team Telecom” process also was reformed to promote faster agency determinations on matters involving foreign ownership.

Along with his process-reform achievements, Chairman Pai deserves credit for ensuring that the FCC’s proceedings were conducted in a lawful and sound manner. For example, the commission’s courtroom track record was notably better during Chairman Pai’s tenure than during the tenures of his immediate predecessors. Moreover, Chairman Pai deserves high marks for the agency process that preceded the L-Band Order – a process that was perhaps subject to more scrutiny than the process of any other proceeding during his chairmanship. The public record supports the integrity of that process, as well as the order’s merits.

In April 2020, the FCC unanimously approved an order authorizing Ligado Networks to deploy a next-generation mixed mobile-satellite network using licensed spectrum in the L-Band. This action is critical to alleviating the shortage of commercial spectrum in the United States and to ensuring our nation’s economic competitiveness. Ligado’s proposed network will provide industrial Internet-of-Things (IoT) services, and its L-Band spectrum has been identified as capable of pairing with C-Band and other mid-band spectrum for delivering future 5G services. According to the L-Band Order, Ligado plans to invest up to $800 million in network capabilities, which could create over 8,000 jobs. Economist Coleman Bazelon estimated that Ligado’s network could help create up to 3 million jobs and contribute up to $500 billion to the U.S. economy.

Opponents of the L-Band Order have claimed that Ligado’s proposed network would create signal interference with GPS services in adjacent spectrum. Moreover, in attempts to delay or undo implementation of the L-Band Order, several opponents lodged harsh but baseless attacks against the FCC’s process. Some of those process criticisms were made at a May 2020 Senate Armed Services Committee hearing that included no Ligado representatives or FCC commissioners to present their viewpoints. And in a May 2020 floor speech, Sen. James Inhofe (R-Okla.) repeatedly criticized the commission’s process as sudden, hurried, and taking place “in the darkness of a weekend.”

But those process criticisms fail in the face of easily verifiable facts. Under Chairman Pai’s leadership, the FCC acted within its conceded authority, consistent with its lawful procedures, and with careful—even lengthy—deliberation.

The FCC’s proceeding concerning Ligado’s license applications dates back to 2011. It included public notice and comment periods in 2016 and 2018. An August 2019 National Telecommunications and Information Administration (NTIA) report noted the commission’s forthcoming decision. In the fall of 2019, the commission shared a draft of its order with NTIA. Publicly stated opposition to Ligado’s proposed network by GPS operators and Defense Secretary Mark Esper, as well as publicly stated support for the network by Attorney General William Barr and Secretary of State Mike Pompeo, ensured that the proceeding received ongoing attention. Claims of “surprise” when the commission finalized its order in April 2020 are impossible to credit.

Importantly, the result of the deliberative agency process helmed by Chairman Pai was a substantively supportable decision. The FCC applied its experience in adjudicating competing technical claims to make commercial spectrum policy decisions. It was persuaded in part by signal testing conducted by the National Advanced Spectrum and Communications Test Network, as well as testing by technology consultants Roberson and Associates. By contrast, the commission found unpersuasive reports of alleged signal interference involving military devices operating outside of their assigned spectrum band.

The FCC also applied its expertise in addressing potential harmful signal interference to incumbent operations in adjacent spectrum bands by imposing several conditions on Ligado’s operations. For example, the L-Band Order requires Ligado to adhere to its agreements with major GPS equipment manufacturers for resolving signal interference concerns. Ligado must dedicate 23 megahertz of its own licensed spectrum as a guard band separating its operations from neighboring spectrum and must reduce its base-station power levels by 99% compared with what it proposed in 2015. The commission requires Ligado to expeditiously replace or repair any U.S. government GPS devices that experience harmful interference from its network. And Ligado must maintain “stop buzzer” capability to halt its network within 15 minutes of any request by the commission.

From a process standpoint, the L-Band Order is a commendable example of Chairman Pai’s perseverance in leading the FCC to a much-needed decision on an economically momentous matter in the face of conflicting government agency and market provider viewpoints. Following a careful and deliberative process, the commission persevered to make a decision that is amply supported by the record and poised to benefit America’s economic welfare.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, notably a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of the excellent authors who have agreed to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, will enable smoother operation of the liability regimes that currently apply to online intermediaries. 

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase their cost of doing business, leading to smaller networks and higher barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, such as by quickly de-platforming and refusing access to anyone who might plausibly have engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as that which recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.
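
To make the obligation concrete, here is a minimal sketch of what a KYBC gate might look like in code. It is an illustration only: the field list, names, and logic are assumptions, not the draft regulation’s actual enumeration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraderRecord:
    # Illustrative identity fields a platform might collect from a
    # commercial seller before onboarding (hypothetical, not the
    # DSA's actual list).
    name: str
    address: str
    bank_account: str
    trade_register_id: str

def may_serve(trader: Optional[TraderRecord]) -> bool:
    """Gate commercial service on identification: under a KYBC rule
    the platform refuses service entirely, rather than merely
    removing content, when a business customer declines to
    identify itself."""
    if trader is None:
        return False
    required = (trader.name, trader.address,
                trader.bank_account, trader.trade_register_id)
    return all(field.strip() for field in required)
```

The point of the sketch is the enforcement lever: a failed check severs the commercial relationship, which is what distinguishes KYBC from notice-and-takedown.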

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, from other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.

Rolled by Rewheel, Redux

Eric Fruits —  15 December 2020

The Finnish consultancy Rewheel periodically issues reports using mobile wireless pricing information to make claims about which countries’ markets are competitive and which are not. For example, Rewheel claims Canada and Greece have the “least competitive monthly prices” while the United Kingdom and Finland have the most competitive.

Rewheel often claims that the number of carriers operating in a country is the key determinant of wireless pricing. 

Their pricing studies attract a great deal of attention. For example, in February 2019 testimony before the U.S. House Energy and Commerce Committee, Phillip Berenbroick of Public Knowledge asserted: “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.” So, what’s wrong with Rewheel? An earlier post highlights some of the flaws in Rewheel’s methodology. But there’s more.

Rewheel creates fictional market baskets of mobile plans for each provider in a country. Country-by-country comparisons are made by evaluating the lowest-priced basket for each country and the basket with the median price.

Rewheel’s market baskets are hypothetical packages that say nothing about which plans are actually chosen by consumers or what the actual prices paid by those consumers were. This is not a new criticism. In 2014, Pauline Affeldt and Rainer Nitsche called these measures “meaningless”:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr … Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

For example, reporting that the average price of a certain T-Mobile USA smartphone, tablet and home Internet plan is $125 is about as useless as knowing that the average price of a Kroger shopping cart containing a six-pack of Budweiser, a dozen eggs, and a pound of oranges is $10. Is Safeway less “competitive” if the price of the same cart of goods is $12? What could you say about pricing at a store that doesn’t sell Budweiser (e.g., Trader Joe’s)?

Rewheel solves that last problem by doing something bonkers. If a carrier doesn’t offer a plan in one of Rewheel’s baskets, Rewheel “assigns” it the HIGHEST monthly price in the world.

For example, Rewheel notes that Vodafone India does not offer a fixed wireless broadband plan with at least 1,000GB of data and download speeds of 100 Mbps or faster. So, Rewheel “assigns” Vodafone India the highest price in its dataset. That price belongs to a plan that’s sold in the United Kingdom. It simply makes no sense. 

To return to the supermarket analogy, it would be akin to saying that, if a Trader Joe’s in the United States doesn’t sell six-packs of Budweiser, we should assume the price of Budweiser at Trader Joe’s is equal to the world’s most expensive six-pack of the beer. In reality, Trader Joe’s is known for having relatively low prices. But using the Rewheel approach, the store would be assessed to have some of the highest prices.

Because of Rewheel’s “assignment” of highest monthly prices to many plans, it’s irrelevant whether their analysis is based on a country’s median price or lowest price. The median is skewed upward, and the lowest actual price may be missing from the dataset altogether.
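
A toy calculation shows how the imputation rule distorts the median. All prices below are invented for illustration; none are Rewheel’s actual figures.

```python
import statistics

# Hypothetical monthly prices (EUR) for one carrier across six of
# Rewheel's basket definitions; None marks a basket the carrier
# simply does not sell.
observed = [12, 15, 18, None, None, None]

# Stand-in for the most expensive matching plan anywhere in the
# world (the role played by the UK plan assigned to Vodafone India).
GLOBAL_MAX = 180

offered = [p for p in observed if p is not None]
imputed = [p if p is not None else GLOBAL_MAX for p in observed]

print(statistics.median(offered))  # 15   -- plans people can buy
print(statistics.median(imputed))  # 99.0 -- what the method reports
```

Half of the “price” driving the country’s ranking comes from plans that no one in that country can actually buy.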

Rewheel publishes these reports to support its argument that mobile prices are lower in markets with four carriers than in those with three carriers. But even if we accept Rewheel’s price data as reliable (they are not), their own data show no relationship between the number of carriers and average price.

Plotting Rewheel’s own data reveals a huge overlap of observations between markets with three and four carriers.

Rewheel’s latest report provides a redacted dataset, reporting only data usage and weighted average price for each provider. So, we have to work with what we have. 

A simple regression analysis shows there is no statistically significant difference in the intercept or the slopes for markets with three, four or five carriers (the default is three carriers in the regression). Based on the data Rewheel provides to the public, the number of carriers in a country has no relationship to wireless prices.
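
For readers who want to check that claim against the published dataset, a sketch of the regression in Python might look like the following. The DataFrame is a made-up stand-in; Rewheel’s redacted usage and price figures would be substituted in.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for Rewheel's redacted dataset: one row per provider,
# with average data usage, weighted average price, and the number of
# carriers in the provider's home market.
df = pd.DataFrame({
    "usage_gb":  [5, 10, 20, 8, 15, 25, 12, 30, 6, 18, 9, 22],
    "price_eur": [20, 28, 40, 22, 33, 45, 27, 50, 21, 36, 24, 41],
    "carriers":  [3, 3, 3, 4, 4, 4, 5, 5, 3, 4, 5, 3],
})

# C(carriers) makes three-carrier markets the omitted baseline, so the
# dummy terms test for intercept differences and the interaction terms
# (usage_gb:C(carriers)) test for slope differences.
model = smf.ols("price_eur ~ usage_gb * C(carriers)", data=df).fit()
print(model.summary())
```

Insignificant coefficients on the C(carriers) dummies and interactions are precisely the null result described above.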

Rewheel seems to have a rich dataset of pricing information that could be useful to inform policy. It’s a shame that their topline summaries seem designed to support a predetermined conclusion.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Doug Melamed (Professor of the Practice of Law, Stanford Law School).]

The big digital platforms make people uneasy.  Part of the unease is no doubt attributable to widespread populist concerns about large and powerful business entities.  Platforms like Facebook and Google in particular cause unease because they affect sensitive issues of communications, community, and politics.  But the platforms also make people uneasy because they seem boundless – enduring monopolies protected by ever-increasing scale and network economies, and growing monopolies aided by scope economies that enable them to conquer complementary markets.  They provoke a discussion about whether antitrust law is sufficient for the challenge.

Nicolas Petit’s Big Tech and the Digital Economy: The Moligopoly Scenario provides an insightful and valuable antidote to this unease.  While neither Panglossian nor comprehensive, Petit’s analysis persuasively argues that some of the concerns about the platforms are misguided or at least overstated.  As Petit sees it, the platforms are not so much monopolies in discrete markets – search, social networking, online commerce, and so on – as “multibusiness firms with business units in partly overlapping markets” that are engaged in a “dynamic oligopoly game” that might be “the socially optimal industry structure.”  Petit suggests that we should “abandon or at least radically alter traditional antitrust principles,” which are aimed at preserving “rivalry,” and “adapt to the specific non-rival economics of digital markets.”  In other words, the law should not try to diminish the platforms’ unique dominance in their individual sectors, which have already tipped to a winner-take-all (or most) state and in which protecting rivalry is not “socially beneficial.”  Instead, the law should encourage reductions of output in tipped markets in which the dominant firm “extracts a monopoly rent” in order to encourage rivalry in untipped markets. 

Petit’s analysis rests on the distinction between “tipped markets,” in which “tech firms with observed monopoly positions can take full advantage of their market power,” and “untipped markets,” which are “characterized by entry, instability and uncertainty.”  Notably, however, he does not expect “dispositive findings” as to whether a market is tipped or untipped.  The idea is to define markets, not just by “structural” factors like rival goods and services, market shares and entry barriers, but also by considering “uncertainty” and “pressure for change.”

Not surprisingly, given Petit’s training and work as a European scholar, his discussion of “antitrust in moligopoly markets” includes prescriptions that seem to one schooled in U.S. antitrust law to be a form of regulation that goes beyond proscribing unlawful conduct.  Petit’s principal concern is with reducing monopoly rents available to digital platforms.  He rejects direct reduction of rents by price regulation as antithetical to antitrust’s DNA and proposes instead indirect reduction of rents by permitting users on the inelastic side of a platform (the side from which the platform gains most of its revenues) to collaborate in order to gain countervailing market power and by restricting the platforms’ use of vertical restraints to limit user bypass. 

He would create a presumption against all horizontal mergers by dominant platforms in order to “prevent marginal increases of the output share on which the firms take a monopoly rent” and would avoid the risk of defining markets narrowly and thus failing to recognize that platforms are conglomerates that provide actual or potential competition in multiple partially overlapping commercial segments. By contrast, Petit would restrict the platforms’ entry into untipped markets only in “exceptional circumstances.”  For this, Petit suggests four inquiries: whether leveraging of network effects is involved; whether platform entry deters or forecloses entry by others; whether entry by others pressures the monopoly rents; and whether entry into the untipped market is intended to deter entry by others or is a long-term commitment.

One might question the proposition, which is central to much of Petit’s argument, that reducing monopoly rents in tipped markets will increase the platforms’ incentives to enter untipped markets.  Entry into untipped markets is likely to depend more on expected returns in the untipped market, the cost of capital, and constraints on managerial bandwidth than on expected returns in the tipped market.  But the more important issue, at least from the perspective of competition law, is whether – even assuming the correctness of all aspects of Petit’s economic analysis – the kind of categorical regulatory intervention proposed by Petit is superior to a law enforcement regime that proscribes only anticompetitive conduct that increases or threatens to increase market power.  Under U.S. law, anticompetitive conduct is conduct that tends to diminish the competitive efficacy of rivals and does not sufficiently enhance economic welfare by reducing costs, increasing product quality, or reducing above-cost prices.

If there were no concerns about the ability of legal institutions to know and understand the facts, a law enforcement regime would seem clearly superior.  Consider, for example, Petit’s recommendation that entry by a platform monopoly into untipped markets should be restricted only when network effects are involved and after taking into account whether the entry tends to protect the tipped market monopoly and whether it reflects a long-term commitment.  Petit’s proposed inquiries might make good sense as a way of understanding as a general matter whether market extension by a dominant platform is likely to be problematic.  But it is hard to see how economic welfare is promoted by permitting a platform to enter an adjacent market (e.g., Amazon entering a complementary product market) by predatory pricing or by otherwise unprofitable self-preferencing, even if the entry is intended to be permanent and does not protect the platform monopoly. 

Similarly, consider the proposed presumption against horizontal mergers.  That might not be a good idea if there is a small (10%) chance that the acquired firm would otherwise endure and modestly reduce the platform’s monopoly rents and an equal or even smaller chance that the acquisition will enable the platform, by taking advantage of economies of scope and asset complementarities, to build from the acquired firm an improved business that is much more valuable to consumers.  In that case, the expected value of the merger in welfare terms might be very positive.  Similarly, Petit would permit acquisitions by a platform of firms outside the tipped market as long as the platform has the ability and incentive to grow the target.  But the growth path of the target is not set in stone.  The platform might use it as a constrained complement, while an unaffiliated owner might build it into something both more valuable to consumers and threatening to the platform.  Maybe one of these stories describes Facebook’s acquisition of Instagram.

The prototypical anticompetitive horizontal merger story is one in which actual or potential competitors agree to share the monopoly rents that would be dissipated by competition between them. That story is confounded by communications that seem like threats, which imply a story of exclusion rather than collusion.  Petit refers to one such story.  But the threat story can be misleading.  Suppose, for example, that Platform sees Startup introduce a new business concept and studies whether it could profitably emulate Startup.  Suppose further that Platform concludes that, because of scale and scope economies available to it, it could develop such a business and come to dominate the market for a cost of $100 million acting alone or $25 million if it can acquire Startup and take advantage of its existing expertise, intellectual property, and personnel.  In that case, Platform might explain to Startup the reality that Platform is going to take the new market either way and propose to buy Startup for $50 million (thus offering Startup two-thirds of the gains from trade).  Startup might refuse, perhaps out of vanity or greed, in which case Platform as promised might enter aggressively and, without engaging in predatory or other anticompetitive conduct, drive Startup from the market.  To an omniscient law enforcement regime, there should be no antitrust violation from either an acquisition or the aggressive competition.  Either way, the more efficient provider prevails so the optimum outcome is realized in the new market.  The merger would have been more efficient because it would have avoided wasteful duplication of startup costs, and the merger proposal (later characterized as a threat) was thus a benign, even procompetitive, invitation to collude.  It would be a different story of course if Platform could overcome Startup’s first mover advantage only by engaging in anticompetitive conduct.
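
The two-thirds figure follows from the numbers in the story. Since Startup’s outside option is zero (Platform enters and drives it out either way), the entire purchase price is Startup’s surplus; spelled out, as a restatement of the hypothetical rather than anything additional from the book:

```latex
\[
\text{Gains from trade} = \underbrace{\$100\text{M}}_{\text{cost of entering alone}}
  - \underbrace{\$25\text{M}}_{\text{cost via acquisition}} = \$75\text{M}.
\]
\[
\text{Startup's share} = \frac{\$50\text{M}}{\$75\text{M}} = \frac{2}{3},
\qquad
\text{Platform's share} = \frac{\$75\text{M} - \$50\text{M}}{\$75\text{M}} = \frac{1}{3}.
\]
```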

The problem is that antitrust decision makers often cannot understand all the facts.  Take the threat story, for example.  If Startup acquiesces and accepts the $50 million offer, the decision maker will have to determine whether Platform could have driven Startup from the market without engaging in predatory or anticompetitive conduct and, if not, whether absent the merger the parties would have competed against one another.  In other situations, decision makers are asked to determine whether the conduct at issue would be more likely than the but-for world to promote innovation or other, similarly elusive matters.

U.S. antitrust law accommodates its unavoidable uncertainty by various default rules and practices.  Some, like per se rules and the controversial Philadelphia National Bank presumption, might on occasion prohibit conduct that would actually have been benign or even procompetitive.  Most, however, insulate from antitrust liability conduct that might actually be anticompetitive.  These include rules applicable to predatory pricing, refusals to deal, two-sided markets, and various matters involving patents.  Perhaps more important are proof requirements in general.  U.S. antitrust law is based on the largely unexamined notion that false positives are worse than false negatives and thus, for the most part, puts the burden of uncertainty on the plaintiff.

Petit is proposing, in effect, an alternative approach for the digital platforms.  This approach would not just proscribe anticompetitive conduct.  It would, instead, apply to specific firms special rules that are intended to promote a desired outcome, the reduction in monopoly rents in tipped digital markets.  So, one question suggested by Petit’s provocative study is whether the inevitable uncertainty surrounding issues of platform competition is best addressed by the kinds of categorical rules Petit proposes or by case-by-case application of abstract legal principles.  Put differently, assuming that economic welfare is the objective, what is the best way to minimize error costs?

Broadly speaking, there are two kinds of error costs: specification errors and application errors.  Specification errors reflect legal rules that do not map perfectly to the normative objectives of the law (e.g., a rule that would prohibit all horizontal mergers by dominant platforms when some such mergers are procompetitive or welfare-enhancing).  Application errors reflect mistaken application of the legal rule to the facts of the case (e.g., an erroneous determination whether the conduct excludes rivals or provides efficiency benefits).   

Application errors are the most likely source of error costs in U.S. antitrust law.  The law relies largely on abstract principles that track the normative objectives of the law (e.g., conduct by a monopoly that excludes rivals and has no efficiency benefit is illegal). Several recent U.S. antitrust decisions (American Express, Qualcomm, and Farelogix among them) suggest that error costs in a law enforcement regime like that in the U.S. might be substantial and even that case-by-case application of principles that require applying economic understanding to diverse factual circumstances might be beyond the competence of generalist judges.  Default rules applicable in special circumstances reduce application errors but at the expense of specification errors.

Specification errors are more likely with categorical rules, like those suggested by Petit.  The total costs of those specification errors are likely to exceed the costs of mistaken decisions in individual cases because categorical rules guide firm conduct in general, not just in decided cases, and rules that embody specification errors are thus likely to encourage undesirable conduct and to discourage desirable conduct in matters that are not the subject of enforcement proceedings.  Application errors, unless systematic and predictable, are less likely to impose substantial costs beyond the costs of mistaken decisions in the decided cases themselves.  Whether any particular categorical rules are likely to have error costs greater than the error costs of the existing U.S. antitrust law will depend in large part on the specification errors of the rules and on whether their application is likely to be accompanied by substantial application costs.

As discussed above, the particular rules suggested by Petit appear to embody important specification errors.  They are likely also to lead to substantial application errors because they would require determination of difficult factual issues.  These include, for example, whether the market at issue has tipped, whether the merger is horizontal, and whether the platform’s entry into an untipped market is intended to be permanent.  It thus seems unlikely, at least from this casual review, that adoption of the rules suggested by Petit will reduce error costs.

 Petit’s impressive study might therefore be most valuable, not as a roadmap for action, but as a source of insight and understanding of the facts – what Petit calls a “mental model to help decision makers understand the idiosyncrasies of digital markets.”  If viewed, not as a prescription for action, but as a description of the digital world, the Moligopoly Scenario can help address the urgent matter of reducing the costs of application errors in U.S. antitrust law.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.]

To mark the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario”, Truth on the Market and the International Center for Law & Economics (ICLE) are hosting some of the world’s leading scholars and practitioners of competition law and economics to discuss some of the book’s themes.

In his book, Petit offers a “moligopoly” framework for understanding competition between large tech companies that may have significant market shares in their ‘home’ markets but nevertheless compete intensely in adjacent ones. Petit argues that tech giants coexist as both monopolies and oligopolies in markets defined by uncertainty and dynamism, and offers policy tools for dealing with the concerns people have about these markets that avoid crude “big is bad” assumptions and do not try to solve non-economic harms with the tools of antitrust.

This symposium asks contributors to give their thoughts either on the book as a whole or on a selected chapter that relates to their own work. In it we hope to explore some of Petit’s arguments with different perspectives from our contributors.

Confirmed Participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Kelly Fayne, Antitrust Associate, Latham & Watkins
  • Shane Greenstein, Professor of Business Administration; Co-chair of the HBS Digital Initiative, Harvard Business School
  • Peter Klein, Professor of Entrepreneurship and Chair, Department of Entrepreneurship and Corporate Innovation, Baylor University
  • William Kovacic, Global Competition Professor of Law and Policy; Director, Competition Law Center, George Washington University Law School
  • Kai-Uwe Kuhn, Academic Advisor, University of East Anglia
  • Richard Langlois, Professor of Economics, University of Connecticut
  • Doug Melamed, Professor of the Practice of Law, Stanford Law School
  • David Teece, Professor in Global Business, University of California’s Haas School of Business (Berkeley); Director, Center for Global Strategy and Governance; Faculty Director, Institute for Business Innovation

Thank you again to all of the excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting later today, October 12, 2020.

Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions where incumbent firms acquire innovative startups to kill their rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry. 

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” where a company is acquired in order to hire its workforce en masse, are common in tech and are explicitly ruled out as “killers” by the paper: it is not harmful to overall innovation or output if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of that platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it’s still 5.3% too much. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Given the number of factors that are specific to pharma and that do not apply to tech, it is dubious whether the findings of this paper are useful to the Furman Report’s subject at all. Given how few acquisitions are found to be “killers” in pharma with all of these conditions present, it seems reasonable to assume that, even if this phenomenon does apply in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneous condemnation of procompetitive mergers is significantly higher. 
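
The last step of that argument, from a lower base rate to a higher risk of erroneous condemnation, is just Bayes’ rule. Here is a stylized calculation; the screen’s accuracy numbers are pure assumptions for illustration, not estimates from any study.

```python
def false_discovery_rate(base_rate, sensitivity=0.8, specificity=0.8):
    """Share of flagged deals that are actually procompetitive, for a
    hypothetical merger screen with the given accuracy."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return false_pos / (true_pos + false_pos)

# Pharma-like base rate of "killers" vs. a much rarer tech scenario
# (the 0.5% figure is assumed, purely for illustration).
print(round(false_discovery_rate(0.053), 2))  # 0.82
print(round(false_discovery_rate(0.005), 2))  # 0.98
```

With the same screen, the rarer true killers are, the larger the share of condemned mergers that were in fact benign.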

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either.

In all these high-profile cases the acquiring companies expanded the services and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but that is a very different argument from the killer-acquisition scenario described in the Cunningham, et al. paper, in which development of a new drug is shut down by the acquirer ostensibly to protect its existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. It does not logically follow that the (presumed) existence of false negatives implies that there has been underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. A well-run court system might still fail to convict a few criminals because the cost of accidentally convicting an innocent person is so high.
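
The underlying error-cost logic can be stated compactly (a standard framing in the law-and-economics literature, not the Report’s own formulation): an enforcement rule should minimize the sum of expected error costs,

```latex
\[
\min_{\text{rule}} \; \Pr(\text{false positive}) \cdot C_{FP}
  + \Pr(\text{false negative}) \cdot C_{FN}.
\]
```

If C_FP exceeds C_FN, as argued above for digital mergers, a track record containing some false negatives and no false positives may be exactly what the cost-minimizing rule produces.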

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although the review suggested that the process could have been handled differently, it also highlighted efficiencies that arose from each merger and did not conclude that any had led to consumer detriment.

Recommendations

The Report is vague about which mergers it considers to have been uncompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations around merger control. 

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition, which at least gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well.

After all, if a photo-editing app with a sharing timeline can grow into the world’s second-largest social network, how could a competition authority say with any confidence that some other acquisition might not prevent the emergence of a new platform on a similar scale, however unlikely? On ‘scale’ grounds, this could provide a basis for blocking almost any acquisition by an incumbent firm, and it would make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started up in the first place).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or less. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief introducing the potential range of problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues are no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm (and in many cases assumptions are the most one can call them) and its remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramaz Samrout (Principal, REIM Strategies; Lay Member, Competition Tribunal of Canada).]

At a time when nations are engaged in bidding wars in the worldwide market to alleviate shortages of critical medical necessities for the Covid-19 crisis, it certainly raises the question: have free trade and competition policies that favor efficient, globally integrated market networks gone too far? Did economists and policymakers advocating for efficient competitive markets not foresee a failure of the supply chain in meeting a surge in demand during an inevitable global crisis such as this one?

The failures in securing medical supplies have escalated a global health crisis to geopolitical spats fuelled by strong nationalistic public sentiments. In the process of competing to acquire highly treasured medical equipment, governments are confiscating, outbidding, and diverting shipments at the risk of not adhering to the terms of established free trade agreements and international trading rules, all at the cost of the humanitarian needs of other nations.

Since the start of the Covid-19 crisis, all levels of government in Canada have been working on diversifying the supply chain for critical equipment, both domestically and internationally. Most importantly, these governments are bolstering domestic production and an integrated domestic supply network, recognizing the increasing likelihood that tightening borders will impede the movement of critical products.

For the past three weeks in his daily briefings, Canada’s Prime Minister, Justin Trudeau, has repeatedly confirmed the government’s support of domestic enterprises that are switching their manufacturing lines to produce critical medical supplies and other “made in Canada” products.

As conditions worsen in the US and the White House hardens its position against collaboration and sharing for the greater global humanitarian good—even in the presence of a recent bilateral agreement to keep the movement of essential goods fluid—Canada’s response has become more retaliatory, shifting to a message emphasizing that the need for “made in Canada” products is one of extreme urgency.

On April 3rd, President Trump ordered Minnesota-based 3M to stop exporting medical-grade masks to Canada and Latin America, a decision enabled by the triggering of the 1950 Defense Production Act. In response, Ontario Premier Doug Ford stated in his public address:

Never again in the history of Canada should we ever be beholden to companies around the world for the safety and wellbeing of the people of Canada. There is nothing we can’t build right here in Ontario. As we get these companies round up and we get through this, we can’t be going over to other sources because we’re going to save a nickel.

Premier Ford’s words ring true for many Canadians as they watch this crisis unfold and wonder where it would stop if the crisis worsens. Will our neighbour to the south block shipments of a Covid-19 vaccine when one is developed? Will the restrictions extend to other essential goods, such as food or medicine?

There are reports that the decline in the number of foreign workers in farming caused by travel restrictions and quarantine rules in both Canada and the US will cause food production shortages, which makes the actions of the White House very unsettling for Canadians.  Canada’s exports to the US constitute 75% of total Canadian exports, while imports from the US constitute 46%. Canada’s imports of food and beverages from the US were valued at US $24 billion in 2018 including: prepared foods, fresh vegetables, fresh fruits, other snack foods, and non-alcoholic beverages.

The length and depth of the crisis will determine to what extent the US and Canadian markets will experience shortages in products. For Canada, the severity of the pandemic in the US could result in further restrictions on the border. And it is becoming progressively more likely that it will also result in a significant reduction in the volume of necessities crossing the border between the two nations.

Increasingly, the depth and pain of shortages in necessities will shape public sentiment towards free trade and strengthen mainstream demands for more nationalistic and protectionist policies. This will put more pressure on political and government establishments to take action.

The reliance on free trade and competition policies favouring highly integrated supply-chain networks is showing cracks in meeting national interests in this time of crisis. This goes well beyond the usual economic points of contention between countries: domestic employment, job loss, and resource allocation. The needed correction, however, risks moving the pendulum too far toward protectionism.

Free trade setbacks and global integration disruptions would become the new economic reality as countries ensure that domestic self-sufficiency comes first. A new trade trend has been set in motion, and there is no going back from some level of disintegration of globalised supply-chain production.

How would domestic self-sufficiency be achieved? 

Would international conglomerates build local plants and forgo their profit maximizing strategies of producing in growing economies that offer cheap wages and resources in order to avoid increased protectionism?

Will the Canada-United States-Mexico Agreement (CUSMA), known as the new NAFTA, which to date has not been put into effect, be renegotiated to allow measures for securing domestic necessities in the form of higher tariffs, trade quotas, and state subsidies?

Are advanced capitalist economies willing to create state-owned industries to produce what they deem domestic necessities?

Many other trade policy variations and options focused on protectionism are possible which could lead to the creation of domestic monopolies. Furthermore, any return to protected national production networks will reduce consumer welfare and eventually impede technological advancements that result from competition. 

Divergence between free trade agreements and competition policy in a new era of protectionism

For the past 30 years, national competition laws and policies have increasingly become an integrated part of free trade agreements, albeit in the form of soft competition law language, making references to the parties’ respective competition laws, and the need for transparency, procedural fairness in enforcement, and cooperation.

Similarly, free trade objectives and frameworks have become part of the design and implementation of competition legislation and, subsequently, case law. Both are intended to encourage competitive market systems and efficiency, an implied by-product of open markets.

In that regard, the competition legal framework in Canada, the Competition Act, seeks to maintain and strengthen competitive market forces by encouraging maximum efficiency in the use of economic resources. Provisions to determine the level of competitiveness in the market consider barriers to entry, among them tariff and non-tariff barriers to international trade. These provisions further direct adjudicators to examine free trade agreements currently in force and their role in facilitating the current or future possibility of an international entrant coming into the market to preserve or increase competition. They also direct adjudicators to assess the extent of any increase in the real value of exports, or of the substitution of domestic products for imported products.

It is evident in the design of free trade agreements and competition legislation that efficiency, competition in price, and diversification of products are to be achieved by access to imported goods and by encouraging the creation of globally competitive suppliers.

Therefore, the re-emergence of protectionist nationalistic measures in international trade will result in a divergence between competition laws and free trade agreements. Such setbacks would leave competition enforcers, administrators, and adjudicators grappling with the conflict between the economic principles set out in competition law and the policy objectives that could be stipulated in future trade agreements. 

The challenge ahead facing governments and industries is how to correct for the cracks in the current globalized competitive supply networks that have been revealed during this crisis without falling into a trap of nationalism and protectionism.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramsi Woodcock (Assistant Professor of Law, University of Kentucky; Assistant Professor of Management, Gatton College of Business and Economics).]

Specialists know that the antitrust courses taught in law schools and economics departments have an alter ego in business curricula: the course on business strategy. The two courses cover the same material, but from opposite perspectives. Antitrust courses teach how to end monopolies; strategy courses teach how to construct and maintain them.

Strategy students go off and run businesses, and antitrust students go off and make government policy. That is probably the proper arrangement if the policy the antimonopolists make is domestic. We want the domestic economy to run efficiently, and so we want domestic policymakers to think about monopoly—and its allocative inefficiencies—as something to be discouraged.

The coronavirus, and the shortages it has caused, have shown us that putting the antimonopolists in charge of international policy is, by contrast, a very big mistake.

Because we do not yet have a world government. America’s position, in relation to the rest of the world, is therefore more akin to that of a business navigating a free market than it is to a government seeking to promote efficient interactions among the firms that it governs. To flourish, America must engage in international trade with a view to creating and maintaining monopoly positions for itself, rather than eschewing them in the interest of realizing efficiencies in the global economy. Which is to say: we need strategists, not antimonopolists.

For the global economy is not America, and there is no guarantee that competitive efficiencies will redound to America’s benefit, rather than to that of her competitors. Absent a world government, other countries will pursue monopoly regardless of what America does, and unless America acts strategically to build and maintain economic power, America will eventually occupy a position of commercial weakness, with all of the consequences for national security that implies.

When Antimonopolists Make Trade Policy

The free traders who have run American economic policy for more than a generation are antimonopolists playing on a bigger stage. Like their counterparts in domestic policy, they are loyal in the first instance only to the efficiency of the market, not to any particular trader. They are content to establish rules of competitive trading—the antitrust laws in the domestic context, the World Trade Organization in the international context—and then to let the chips fall where they may, even if that means allowing present or future adversaries to, through legitimate means, build up competitive advantages that the United States is unable to overcome.

Strategy is consistent with competition when markets are filled with traders of atomic size, for then no amount of strategy can deliver a competitive advantage to any trader. But global markets, more even than domestic markets, are filled with traders of macroscopic size. Strategy then requires that each trader seek to gain and maintain advantages, undermining competition. The only way antimonopolists could induce the trading behemoth that is America to behave competitively, and to let the chips fall where they may, was to convince America voluntarily to give up strategy, to sacrifice self-interest on the altar of efficient markets.

And so they did.

Thus when the question arose whether to permit American corporations to move their manufacturing operations overseas, or to permit foreign companies to leverage their efficiencies to dominate a domestic industry and ensure that 90% of domestic supply would be imported from overseas, the answer the antimonopolists gave was: “yes.” Because it is efficient. Labor abroad is cheaper than labor at home, and transportation costs low, so efficiency requires that production move overseas, and our own resources be reallocated to more competitive uses.

This is the impeccable logic of static efficiency, of general equilibrium models allocating resources optimally. But it is instructive to recall that the men who perfected this model were not trying to describe a free market, much less international trade. They were trying to create a model that a central planner could use to allocate resources to a state’s subjects. What mattered to them in building the model was the good of the whole, not any particular part. And yet it is to a particular part of the global whole that the United States government is dedicated.

The Strategic Trader

Students of strategy would have taken a very different approach to international trade. Strategy teaches that markets are dynamic, and that businesses must make decisions based not only on the market signals that exist today, but on those that can be made to exist in the future. For the successful strategist, unlike the antimonopolist, identifying a product for which consumers are willing to pay the costs of production is not alone enough to justify bringing the product to market. The strategist must be able to secure a source of supply, or a distribution channel, that competitors cannot easily duplicate, before the strategist will enter.

Why? Because without an advantage in supply, or distribution, competitors will duplicate the product, compete away any markups, and leave the strategist no better off than if he had never undertaken the project at all. Indeed, he may be left bankrupt, if he has sunk costs that competition prevents him from recovering. Unlike the economist, the strategist is interested in survival, because he is a partisan of a part of the market—himself—not the market entire. The strategist understands that survival requires power, and all power rests, to a greater or lesser degree, on monopoly.

The strategist is not therefore a free trader in the international arena, at least not as a matter of principle. The strategist understands that trading from a position of strength can enrich, and trading from a position of weakness can impoverish. And to occupy that position of strength, America must, like any monopolist, control supply. Moreover, in the constantly-innovating markets that characterize industrial economies, markets in which innovation emerges from learning by doing, control over physical supply translates into control over the supply of inventions itself.

The strategist does not permit domestic corporations to offshore manufacturing in any market in which the strategist wishes to participate, because that is unsafe: foreign countries could use control over that supply to extract rents from America, to drive domestic firms to bankruptcy, and to gain control over the supply of inventions.

And, as the new trade theorists belatedly discovered, offshoring prevents the development of the dense, geographically contiguous supply networks that confer power over whole product categories, such as the electronics hub in Zhengzhou, where iPhone-maker Foxconn is located.

Or the pharmaceutical hub in Hubei.

Coronavirus and the Failure of Free Trade

Today, America is unprepared for the coming wave of coronavirus cases because the antimonopolists running our trade policy do not understand the importance of controlling supply. There is a shortage of masks, because China makes half of the world’s masks, and the Chinese have cut off supply, the state having forbidden even non-Chinese companies that offshored mask production from shipping home masks for which American customers have paid. Not only that, but in January China bought up most of the world’s existing supply of masks, with free-trade-obsessed governments standing idly by as the clock ticked down to their own domestic outbreaks.  

New York State, which lies at the epicenter of the crisis, has agreed to pay five times the market price for foreign supply. That’s not because the cost of making masks has risen, but because sellers are rationing with price. Which is to say: using their control over supply to beggar the state. Moreover, domestic mask makers report that they cannot ramp up production because of a lack of supply of raw materials, some of which are actually made in Wuhan, China. That’s the kind of problem that does not arise when restrictions on offshoring allow manufacturing hubs to develop domestically.

But a shortage of masks is just the beginning. Once a vaccine is developed, the race will be on to manufacture it, and America controls less than 30% of the manufacturing facilities that supply pharmaceuticals to American markets. Indeed, just about the only virus-relevant industries in which we do not have a real capacity shortage today are food and toilet paper, panic buying notwithstanding. Because, fortunately for us, antimonopolists could not find a way to offshore California and Oregon. If they could have, they surely would have, since both agriculture and timber are labor-intensive industries.

President Trump’s failed attempt to buy a German drug company working on a coronavirus vaccine shows just how damaging free market ideology has been to national security: as Trump should have anticipated given his resistance to the antimonopolists’ approach to trade, the German government nipped the deal in the bud. When an economic agent has market power, the agent can pick its prices, or refuse to sell at all. Only in general equilibrium fantasy is everything for sale, and at a competitive price to boot.

The trouble is: American policymakers, perhaps more than those in any other part of the world, continue to act as though that fantasy were real.

Failures Left and Right

America’s coronavirus predicament is rich with intellectual irony.

Progressives resist free trade ideology, largely out of concern for the effects of trade on American workers. But they seem not to have realized that in doing so they are actually embracing strategy, at least for the benefit of labor.

As a result, progressives simultaneously reject the approach to industrial organization economics that underpins strategic thinking in business: Joseph Schumpeter’s theory of creative destruction, which holds that strategic behavior by firms seeking to achieve and maintain monopolies is ultimately good for society, because it leads to a technological arms race as firms strive to improve supply, distribution, and indeed product quality, in ways that competitors cannot reproduce.

Even if progressives choose to reject Schumpeter’s argument that strategy makes society better off—a proposition that is particularly suspect at the international level, where the availability of tanks ensures that the creative destruction is not always creative—they have much to learn from his focus on the economics of survival.

By the same token, conservatives embrace Schumpeter in arguing for less antitrust enforcement in domestic markets, all the while advocating free trade at the international level and savaging governments for using dumping and tariffs—which is to say, the tools of monopoly—to strengthen their trading positions. It is deeply peculiar to watch the coronavirus expose conservative economists as pie-in-the-sky internationalists. And yet as the global market for coronavirus necessities seizes up, the ideology that urged us to dispense with producing these goods ourselves, out of faith that we might always somehow rely on the support of the rest of the world, provided through the medium of markets, looks pathetically naive.

The cynic might say that inconsistency has snuck up on both progressives and conservatives because each remains too sympathetic to a different domestic constituency.

Dodging a Bullet

America is lucky that a mere virus exposed the bankruptcy of free trade ideology. Because war could have done that instead. It is difficult to imagine how a country that cannot make medical masks—much less a MacBook—would be able to respond effectively to a sustained military attack from one of the many nations that are closing the technological gap long enjoyed by the United States.

The lesson of the coronavirus is: strategy, not antitrust.

This is the fourth, and last, in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here, the second here, and the third here). It draws on research from a soon-to-be published ICLE white paper.

The previous parts of this series have mostly focused on the Commission’s factual and legal conclusions. However, as this blog post points out, the case’s economic underpinnings also suffer from important weaknesses.

Two problems are particularly salient: First, the economic models cited by the Commission (discussed in an official paper, but not directly in the decision) poorly match the underlying facts. Second, the Commission’s conclusions on innovation harms are out of touch with the abundant economic literature regarding the potential link between market structure and innovation.

The wrong economic models

The Commission’s Chief Economist team outlined its economic reasoning in an article released shortly after the Android decision was published. The article reveals that the Commission relied upon three economic papers to support its conclusion that Google’s tying harmed consumer welfare.

Each of these three papers attempts to address the same basic problem. Ever since the rise of the Chicago School, it has been widely accepted that a monopolist cannot automatically raise its profits by entering an adjacent market (i.e. leveraging its monopoly position), for instance through tying. This has sometimes been called the single-monopoly-profit theory. In more recent years, various scholars have refined this Chicago School intuition and identified instances where the theory fails.
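
To see the intuition behind the single-monopoly-profit theory, consider a minimal numerical sketch. The valuations and costs below are purely hypothetical and are not drawn from the case record or the cited papers; the point is only that, under the theory’s textbook assumptions, tying leaves the monopolist’s profit unchanged.

```python
# Illustrative sketch of the single-monopoly-profit theorem.
# All numbers are hypothetical; none come from the case record.

V_A = 100  # consumer valuation of the monopolized good A (say, an app store)
V_B = 10   # consumer valuation of a complementary good B (say, a search app)
C_B = 2    # marginal cost of B; rivals supply B competitively at price C_B

# Without tying: a consumer who buys A can buy B from a rival at C_B,
# keeping surplus V_B - C_B. The monopolist can thus charge up to V_A for A.
profit_without_tying = V_A

# With tying: accepting the bundle means forgoing the surplus (V_B - C_B)
# available from rival B sellers, so the maximum bundle price is
# V_A + V_B - (V_B - C_B) = V_A + C_B, and producing B costs C_B.
profit_with_tying = (V_A + C_B) - C_B

print(profit_without_tying)  # 100
print(profit_with_tying)     # 100 -> tying adds nothing, per the theory
```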

While the single monopoly profit theory has been criticized in academic circles, it is important to note that the three papers cited by the Commission accept its basic premise. They thus attempt to show why the theory fails in the context of the Google Android case. 

Unfortunately, the assumptions upon which they rely to reach this conclusion markedly differ from the case’s fact pattern. These papers thus offer little support to the Commission’s economic conclusions.

For a start, the authors of the first paper cited by the Commission concede that their own model does not apply to the Google case:

Actual antitrust cases are fact-intensive and our model does not perfectly fit with the current Google case in one important aspect.

The authors thus rely on important modifications, lifted from a paper by Federico Etro and Cristina Caffarra (the second paper cited by the Commission), to support their conclusion that Google’s tying was anticompetitive.

The second paper cited by the Commission, however, is equally problematic.

The authors’ underlying intuition is relatively straightforward: because Google bundles its suite of Google apps (including Search) with the Play Store, a rival search engine would have to pay a premium in order to be pre-installed and placed on the home screen, since OEMs would have to entirely forgo Google’s suite of applications. The key assumption here is that OEMs cannot obtain the Google Play app while also pre-installing and favorably placing a rival search app.

But this is simply not true of Google’s contractual terms. The best evidence is that rival search providers have indeed concluded deals with OEMs to pre-install their search apps, without those OEMs losing access to Google’s suite of proprietary apps. Google’s contractual terms simply do not force OEMs to choose between the Google Play app and the pre-installation of a rival search app. Etro and Caffarra’s model thus falls flat.

More fundamentally, even if Google’s contractual terms did prevent OEMs from pre-loading rival apps, the paper’s conclusions would still be deeply flawed. The authors essentially assume that the only way for consumers to obtain a rival app is through pre-installation. But this is a severe misreading of the prevailing market conditions. 

Users remain free to independently download rival search apps. If Google did indeed purchase exclusive pre-installation, users would not have to choose between a “full Android” device and one with a rival search app but none of Google’s apps. Instead, they could download the rival app and place it alongside Google’s applications. 

A more efficient rival could even provide side payments, of some sort, to encourage consumers to download its app. Exclusive pre-installation thus generates a much smaller advantage than Etro and Caffarra assume, and their model fails to reflect this.

Finally, the third paper, by Alexandre de Cornière and Greg Taylor, suffers from the exact same problem. The authors clearly acknowledge that their findings only hold if OEMs (and consumers) are effectively prevented from (pre-)installing applications that compete with Google’s apps. In their own words:

Upstream firms offer contracts to the downstream firm, who chooses which component(s) to use and then sells to consumers. For our theory to apply, the following three conditions need to hold: (i) substitutability between the two versions of B leads the downstream firm to install at most one version.

The upshot is that all three of the economic models cited by the Commission cease to be relevant in the specific context of the Google Android decision. The Commission is thus left with little to no economic evidence to support its finding of anticompetitive effects.

Critics might argue that direct downloads by consumers are but a theoretical possibility. Yet nothing could be further from the truth. Take the web browser market: the Samsung Internet Browser has more than 1 billion downloads on Google’s Play Store. The Opera, Opera Mini and Firefox browsers each have over 100 million downloads. The Brave browser has more than 10 million downloads and is growing rapidly.

In short, the economic papers on which the Commission relies are based on a world that does not exist. They thus fail to support the Commission’s economic findings.

An incorrect view of innovation

In its decision, the Commission repeatedly claimed that Google’s behavior stifled innovation because it prevented rivals from entering the market. However, the Commission offered no evidence to support its assumption that reduced market entry would lead to a decrease in innovation:

(858) For the reasons set out in this Section, the Commission concludes that the tying of the Play Store and the Google Search app helps Google to maintain and strengthen its dominant position in each national market for general search services, increases barriers to entry, deters innovation and tends to harm, directly or indirectly, consumers.

(859) First, Google’s conduct makes it harder for competing general search services to gain search queries and the respective revenues and data needed to improve their services.

(861) Second, Google’s conduct increases barriers to entry by shielding Google from competition from general search services that could challenge its dominant position in the national markets for general search services:

(862) Third, by making it harder for competing general search services to gain search queries including the respective revenues and data needed to improve their services, Google’s conduct reduces the incentives of competing general search services to invest in developing innovative features, such as innovation in algorithm and user experience design.

In a nutshell, the Commission’s findings rest on the assumption that barriers to entry and more concentrated market structures necessarily reduce innovation. But this assertion is not supported by the empirical economic literature on the topic.

For example, a 2006 paper by Richard Gilbert surveys 24 empirical studies examining the link between market structure (or firm size) and innovation. Though earlier studies tended to identify a positive relationship between innovation and both concentration and firm size, more recent empirical techniques found no significant relationship. Gilbert thus suggests that:

These econometric studies suggest that whatever relationship exists at a general economy-wide level between industry structure and R&D is masked by differences across industries in technological opportunities, demand, and the appropriability of inventions.

This intuition is confirmed by another high-profile empirical paper by Aghion, Bloom, Blundell, Griffith, and Howitt. The authors identify an inverted-U relationship between competition and innovation. Perhaps more importantly, they point out that this relationship is affected by a number of sector-specific factors.

Finally, reviewing fifty years of research on innovation and market structure, Wesley Cohen concludes that:

Even before one controls for industry effects, the variance in R&D intensity explained by market concentration is small. Moreover, whatever relationship that exists in cross sections becomes imperceptible with the inclusion of controls for industry characteristics, whether expressed as industry fixed effects or in the form of survey-based and other measures of industry characteristics such as technological opportunity, appropriability conditions, and demand. In parallel to a decades-long accumulation of mixed results, theorists have also spawned an almost equally voluminous and equivocal literature on the link between market structure and innovation.

The Commission’s stance is further weakened by the fact that investments in the Android operating system are likely affected by a weak appropriability regime. In other words, because of its open source nature, it is hard for Google to earn a return on investments in the Android OS (anyone can copy, modify and offer their own version of the OS). 

Loosely tying Google’s proprietary applications to the OS is arguably one way to solve this appropriability problem. Unfortunately, the Commission brushed these considerations aside. It argued that Google could earn some revenue from the Google Play app, as well as from other potential sources. However, the Commission did not question whether these sources of income were even comparable to the sums invested by Google in the Android OS. It is thus possible that the Commission’s decision will prevent Google from earning a positive return on some future investments in the Android OS, ultimately causing it to cut back its investments and slow innovation.

The upshot is that the Commission was simply wrong to assume that barriers to entry and more concentrated market structures would necessarily reduce innovation. This is especially true, given that Google may struggle to earn a return on its investments, absent the contractual provisions challenged by the Commission.

Conclusion

In short, the Commission’s economic analysis was severely lacking. It relied on economic models that had little to say about the market in which Google and its rivals operated. Its decision thus reveals the inherent risk of basing antitrust decisions upon overfitted economic models.

As if that were not enough, the Android decision also misrepresents the economic literature concerning the link (or absence thereof) between market structure and innovation. As a result, there is no reason to believe that Google’s behavior reduced innovation.

This is the third in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here, and the second here). It draws on research from a soon-to-be published ICLE white paper.

[Figure: Comparison of Google’s and Apple’s smartphone business models. Red $ symbols represent money invested; green $ symbols represent sources of revenue; black lines show the extent of Google’s and Apple’s control over their respective platforms.]

For the third in my series of posts about the Google Android decision, I will delve into the theories of harm identified by the Commission. 

The big picture is that the Commission’s analysis was particularly one-sided. The Commission failed to adequately account for the complex business challenges that Google faced – such as monetizing the Android platform and shielding it from fragmentation. To make matters worse, its decision rests on dubious factual conclusions and extrapolations. The result is a highly unbalanced assessment that could ultimately hamstring Google and prevent it from effectively competing with its smartphone rivals, Apple in particular.

1. Tying without foreclosure

The first theory of harm identified by the Commission concerned the tying of Google’s Search app with the Google Play app, and of Google’s Chrome app with both the Google Play and Google Search apps.

Oversimplifying, Google required its OEMs to choose between either pre-installing a bundle of Google applications, or forgoing some of the most important ones (notably Google Play). The Commission argued that this gave Google a competitive advantage that rivals could not emulate (even though Google’s terms did not preclude OEMs from simultaneously pre-installing rival web browsers and search apps). 

To support this conclusion, the Commission notably asserted that no alternative distribution channel would enable rivals to offset the competitive advantage that Google obtained from tying. This finding is, at best, dubious. 

For a start, the Commission claimed that user downloads were not a viable alternative distribution channel, even though roughly 250 million apps are downloaded on Google’s Play store every day.

The Commission sought to overcome this inconvenient statistic by arguing that Android users were unlikely to download apps that duplicated the functionalities of a pre-installed app – why download a new browser if there is already one on the user’s phone?

But this reasoning is far from watertight. For instance, the 17th most-downloaded Android app, the “Super-Bright LED Flashlight” (with more than 587 million downloads), mostly replicates a feature that is pre-installed on all Android devices. Moreover, the five most-downloaded Android apps (Facebook, Facebook Messenger, WhatsApp, Instagram and Skype) provide functionalities that are, to some extent at least, offered by apps that have, at some point or another, been pre-installed on many Android devices (notably Google Hangouts, Google Photos and Google+).

The Commission countered that communications apps were not appropriate counterexamples, because they benefit from network effects. But this overlooks the fact that the most successful communications and social media apps benefited from very limited network effects when they were launched, and that they succeeded despite the presence of competing pre-installed apps. Direct user downloads are thus a far more powerful vector of competition than the Commission cared to admit.

Similarly concerning is the Commission’s contention that paying OEMs or Mobile Network Operators (“MNOs”) to pre-install their search apps was not a viable alternative for Google’s rivals. Some of the reasons cited by the Commission to support this finding are particularly troubling.

For instance, the Commission claimed that high transaction costs prevented parties from concluding these pre-installation deals.

But pre-installation agreements are common in the smartphone industry. In recent years, Microsoft struck a deal with Samsung to pre-install some of its office apps on the Galaxy Note 10. In 2010, it also paid Verizon to pre-install the Bing search app on a number of Samsung phones. Likewise, a number of Russian internet companies have been in talks with Huawei to pre-install their apps on its devices. And Yahoo reached an agreement with Mozilla to make it the default search engine for its web browser. Transaction costs do not appear to have been an obstacle in any of these cases.

The Commission also claimed that duplicating too many apps would cause storage space issues on devices. 

And yet, a back-of-the-envelope calculation suggests that storage space is unlikely to be a major issue. For instance, the Bing Search app has a download size of 24MB, whereas typical entry-level smartphones generally have an internal memory of at least 64GB (which can often be extended to more than 1TB with the addition of an SD card). The Bing Search app thus takes up less than one-thousandth of these devices’ internal storage. Granted, the Yahoo search app is slightly larger than Microsoft’s, weighing almost 100MB. But this is still insignificant compared to a modern device’s storage space.
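
To make the orders of magnitude explicit, here is a minimal sketch of that calculation in Python. It uses only the figures already cited above (24MB for Bing, roughly 100MB for Yahoo, 64GB of internal storage); nothing in it comes from independent measurement.

```python
# Share of a 64GB entry-level device's storage consumed by a search app,
# using the app sizes cited above (figures from the text, not re-measured).

DEVICE_STORAGE_MB = 64 * 1024  # 64GB expressed in MB

for app, size_mb in [("Bing Search", 24), ("Yahoo Search", 100)]:
    share = size_mb / DEVICE_STORAGE_MB
    print(f"{app}: {share:.5f} of internal storage ({share:.3%})")

# Bing Search: 0.00037 of internal storage (0.037%) -- under one-thousandth
# Yahoo Search: 0.00153 of internal storage (0.153%) -- still negligible
```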

Finally, the Commission claimed that rivals were contractually prevented from concluding exclusive pre-installation deals because Google’s own apps would also be pre-installed on devices.

However, while it is true that Google’s apps would still be present on a device, rivals could nonetheless pay for their applications to be set as default. Even Yandex – a plaintiff – recognized that this would be a valuable solution. In its own words (taken from the Commission’s decision):

Pre-installation alongside Google would be of some benefit to an alternative general search provider such as Yandex […] given the importance of default status and pre-installation on home screen, a level playing field will not be established unless there is a meaningful competition for default status instead of Google.

In short, the Commission failed to convincingly establish that Google’s contractual terms prevented as-efficient rivals from effectively distributing their applications on Android smartphones. The evidence it adduced was simply too thin to support anything close to that conclusion.

2. The threat of fragmentation

The Commission’s second theory of harm concerned the so-called “anti-fragmentation” agreements concluded between Google and OEMs. In a nutshell, Google only agreed to license the Google Search and Google Play apps to OEMs that sold “Android Compatible” devices (i.e. devices sold with a version of Android that did not stray too far from Google’s most recent version).

According to Google, this requirement was necessary to limit the number of Android forks that were present on the market (as well as older versions of the standard Android). This, in turn, reduced development costs and prevented the Android platform from unraveling.

The Commission disagreed, arguing that Google’s anti-fragmentation provisions thwarted competition from potential Android forks (i.e. modified versions of the Android OS).

This conclusion raises at least two critical questions: The first is whether these agreements were necessary to ensure the survival and competitiveness of the Android platform, and the second is why “open” platforms should be precluded from partly replicating a feature that is essential to rival “closed” platforms, such as Apple’s iOS.

Let us start with the necessity, or not, of Google’s contractual terms. If fragmentation did indeed pose an existential threat to the Android ecosystem, and anti-fragmentation agreements averted this threat, then it is hard to make a case that they thwarted competition. The Android platform would simply not have been as viable without them.

The Commission dismissed this possibility, relying largely on statements made by Google’s rivals (many of whom likely stood to benefit from the suppression of these agreements). For instance, the Commission cited comments that it received from Yandex – one of the plaintiffs in the case:

(1166) The fact that fragmentation can bring significant benefits is also confirmed by third-party respondents to requests for information:

[…]

(2) Yandex, which stated: “Whilst the development of Android forks certainly has an impact on the fragmentation of the Android ecosystem in terms of additional development being required to adapt applications for various versions of the OS, the benefits of fragmentation outweigh the downsides…”

Ironically, the Commission relied on Yandex’s statements while, at the same time, it dismissed arguments made by Android app developers, on account that they were conflicted. In its own words:

Google attached to its Response to the Statement of Objections 36 letters from OEMs and app developers supporting Google’s views about the dangers of fragmentation […] It appears likely that the authors of the 36 letters were influenced by Google when drafting or signing those letters.

More fundamentally, the Commission’s claim that fragmentation was not a significant threat is at odds with an almost unanimous agreement among industry insiders.

For example, while it is not dispositive, a rapid search for the terms “Google Android fragmentation”, using the DuckDuckGo search engine, leads to results that cut strongly against the Commission’s conclusions. Of the first ten results, only one could remotely be construed as claiming that fragmentation was not an issue. The others paint a very different picture (below are some of the most salient excerpts):

“There’s a fairly universal perception that Android fragmentation is a barrier to a consistent user experience, a security risk, and a challenge for app developers.” (here)

“Android fragmentation, a problem with the operating system from its inception, has only become more acute an issue over time, as more users clamor for the latest and greatest software to arrive on their phones.” (here)

“Android Fragmentation a Huge Problem: Study.” (here)

“Google’s Android fragmentation fix still isn’t working at all.” (here)

“Does Google care about Android fragmentation? Not now—but it should.” (here).

“This is very frustrating to users and a major headache for Google… and a challenge for corporate IT,” Gold said, explaining that there are a large number of older, not fully compatible devices running various versions of Android. (here)

Perhaps more importantly, one might question why Google should be treated differently from rivals that operate closed platforms, such as Apple, Microsoft and Blackberry (before the last two mostly exited the mobile OS market). By definition, these platforms limit all potential forks (because they are based on proprietary software).

The Commission argued that Apple, Microsoft and Blackberry had opted to run “closed” platforms, which gave them the right to prevent rivals from copying their software.

While this answer has some superficial appeal, it is incomplete. Android may be an open source project, but this is not true of Google’s proprietary apps. Why should it be forced to offer them to rivals who would use them to undermine its platform? The Commission did not meaningfully consider this question.

And yet, industry insiders routinely compare the fragmentation of Apple’s iOS and Google’s Android OS in order to gauge the state of competition between the two firms. For instance, one commentator noted:

[T]he gap between iOS and Android users running the latest major versions of their operating systems has never looked worse for Google.

Likewise, an article published in Forbes concluded that Google’s OEMs were slow at providing users with updates, and that this might drive users and developers away from the Android platform:

For many users the Android experience isn’t as up-to-date as Apple’s iOS. Users could buy the latest Android phone now and they may see one major OS update and nothing else. […] Apple users can be pretty sure that they’ll get at least two years of updates, although the company never states how long it intends to support devices.

However this problem, in general, makes it harder for developers and will almost certainly have some inherent security problems. Developers, for example, will need to keep pushing updates – particularly for security issues – to many different versions. This is likely a time-consuming and expensive process.

To recap, the Commission’s decision paints a world that is either black or white: either firms operate closed platforms, and they are then free to limit fragmentation as they see fit, or they create open platforms, in which case they are deemed to have accepted much higher levels of fragmentation.

This stands in stark contrast to industry coverage, which suggests that users and developers of both closed and open platforms care a great deal about fragmentation, and demand that measures be put in place to address it. If this is true, then the relative fragmentation of open and closed platforms has an important impact on their competitive performance, and the Commission was wrong to reject comparisons between Google and its closed ecosystem rivals. 

3. Google’s revenue sharing agreements

The last part of the Commission’s case centered on revenue sharing agreements between Google and its OEMs/MNOs. Google paid these parties to exclusively place its search app on the home screen of their devices. According to the Commission, these payments reduced OEMs’ and MNOs’ incentives to pre-install competing general search apps.

However, to reach this conclusion, the Commission had to make the critical (and highly dubious) assumption that rivals could not match Google’s payments.

To get to that point, it notably assumed that rival search engines would be unable to increase their share of mobile search results beyond their share of desktop search results. The underlying intuition appears to be that users who freely chose Google Search on desktop (Google Search & Chrome are not set as default on desktop PCs) could not be convinced to opt for a rival search engine on mobile.

But this ignores the possibility that rivals might offer an innovative app that swayed users away from their preferred desktop search engine. 

More importantly, this reasoning cuts against the Commission’s own claim that pre-installation and default placement were critical. If most users dismiss their device’s default search app and search engine in favor of their preferred ones, then pre-installation and default placement are largely immaterial, and Google’s revenue sharing agreements could not possibly have thwarted competition (because they did not prevent users from independently installing their preferred search app). On the other hand, if users are easily swayed by default placement, then there is no reason to believe that rivals could not exceed their desktop market share on mobile phones.

The Commission was also wrong when it claimed that rival search engines were at a disadvantage because of the structure of Google’s revenue sharing payments. OEMs and MNOs allegedly lost all of their payments from Google if they exclusively placed a rival’s search app on the home screen of a single line of handsets.

The key question is the following: could Google automatically tilt the scales to its advantage by structuring the revenue sharing payments in this way? The answer appears to be no. 

For instance, it has been argued that exclusivity may intensify competition for distribution. Conversely, other scholars have claimed that exclusivity may deter entry in network industries. Unfortunately, the Commission did not examine whether Google’s revenue sharing agreements fell within the latter category.

It thus provided insufficient evidence to support its conclusion that the revenue sharing agreements reduced OEMs’ (and MNOs’) incentives to pre-install competing general search apps, rather than merely increasing competition “for the market”.

4. Conclusion

To summarize, the Commission overestimated the effect that Google’s behavior might have on its rivals. It almost entirely ignored the justifications that Google put forward and relied heavily on statements made by its rivals. The result is a one-sided decision that puts undue strain on the Android business model, while providing few, if any, benefits in return.

This is the second in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here). It draws on research from a soon-to-be published ICLE white paper.

[Figure: Left, the Android 10 website; right, the iOS 13 website.]

In a previous post, I argued that the Commission failed to adequately define the relevant market in its recently published Google Android decision.

This improper market definition might not have been so problematic had the Commission then proceeded to undertake a detailed (and balanced) assessment of the competitive conditions in the markets where Google operates (including the competitive constraints imposed by Apple).

Unfortunately, this was not the case. The following paragraphs respond to some of the Commission’s most problematic arguments regarding the existence of barriers to entry, and the absence of competitive constraints on Google’s behavior.

The overarching theme is that the Commission failed to quantify its findings and repeatedly drew conclusions that did not follow from the facts cited. As a result, it was wrong to conclude that Google faced little competitive pressure from Apple and other rivals.

1. Significant investments and network effects ≠ barriers to entry

In its decision, the Commission notably argued that significant investments (millions of euros) are required to set up a mobile OS and app store. It also argued that the market for licensable mobile operating systems gave rise to network effects.

But contrary to the Commission’s claims, neither of these two factors is, in and of itself, sufficient to establish the existence of barriers to entry (even under EU competition law’s loose definition of the term, rather than Stigler’s more technical definition).

Take the argument that significant investments are required to enter the mobile OS market.

The main problem is that virtually every market requires significant investments on the part of firms that seek to enter. Not all of these costs can be seen as barriers to entry, or the concept would lose all practical relevance. 

For example, purchasing a Boeing 737 Max airplane reportedly costs at least $74 million. Does this mean that incumbents in the airline industry are necessarily shielded from competition? Of course not. 

Instead, the relevant question is whether an entrant with a superior business model could access the capital required to purchase an airplane and challenge the industry’s incumbents.

Returning to the market for mobile OSs, the Commission should thus have questioned whether as-efficient rivals could find the funds required to produce a mobile OS. If the answer was yes, then the investments highlighted by the Commission were largely immaterial. As it happens, several firms have indeed produced competing OSs, including CyanogenMod, LineageOS and Tizen.

The same is true of the Commission’s conclusion that network effects shielded Google from competitors. While network effects almost certainly play some role in the mobile OS and app store markets, it does not follow that they act as barriers to entry in competition law terms.

As Paul Belleflamme recently argued, it is a myth that network effects can never be overcome. And as I have written elsewhere, the most important question is whether users could effectively coordinate their behavior and switch to a superior platform, if one arose (see also Dan Spulber’s excellent article on this point).

The Commission completely ignored this critical question in its discussion of network effects.

2. The failure of competitors is not proof of barriers to entry

Just as problematically, the Commission wrongly concluded that the failure of previous attempts to enter the market was proof of barriers to entry. 

This is the epitome of the Black Swan fallacy (i.e. inferring that all swans are white because you have never seen a relatively rare, but not irrelevant, black swan).

The failure of rivals is equally consistent with any number of propositions: 

  • There were indeed barriers to entry; 
  • Google’s products were extremely good (in ways that rivals and the Commission failed to grasp); 
  • Google responded to intense competitive pressure by continuously improving its product (and rivals thus chose to stay out of the market); 
  • Previous rivals were persistently inept (to take the words of Oliver Williamson); etc. 

The Commission did not demonstrate that its own inference was the right one, nor did it even demonstrate any awareness that other explanations were at least equally plausible.

3. First mover advantage?

Much the same can be said about the Commission’s observation that Google enjoyed a first-mover advantage.

The elephant in the room is that Google was not the first mover in the smartphone market (and even less so in the mobile phone industry). The Commission attempted to sidestep this uncomfortable truth by arguing that Google was the first mover in the Android app store market. It then concluded that Google had an advantage because users were familiar with Android’s app store.

To call this reasoning “naive” would be too kind. Maybe consumers are familiar with Google’s products today, but they certainly weren’t when Google entered the market. 

Why would something that did not hinder Google (i.e. users’ lack of familiarity with its products, as opposed to those of incumbents such as Nokia or Blackberry) have the opposite effect on its future rivals? 

Moreover, even if rivals had to replicate Android’s user experience (and that of its app store) to prove successful, the Commission did not show that there was anything that prevented them from doing so — a particularly glaring omission given the open-source nature of the Android OS.

The result is that, at best, the Commission identified a correlation but not causality. Google may arguably have been the first mover, and users might have been more familiar with its offerings, but this does not prove that Android flourished (and rivals failed) for those reasons.

4. It does not matter that users “do not take the OS into account” when they purchase a device

The Commission also concluded that alternatives to Android (notably Apple’s iOS and App Store) exercised insufficient competitive constraints on Google. Among other things, it argued that this was because users do not take the OS into account when they purchase a smartphone (so Google could allegedly degrade Android without fear of losing users to Apple).

In doing so, the Commission failed to grasp that buyers might base their purchases on a device’s OS without knowing it.

Some consumers will simply follow the advice of a friend, family member or buyer’s guide. Acutely aware of their own shortcomings, they thus rely on someone else who does take the phone’s OS into account. 

But even when they are acting independently, unsavvy consumers may still be driven by technical considerations. They might rely on a brand’s reputation for providing cutting edge devices (which, per the Commission, is the most important driver of purchase decisions), or on a device’s “feel” when they try it in a showroom. In both cases, consumers’ choices could indirectly be influenced by a phone’s OS.

In more technical terms, a phone’s hardware and software are complementary goods. In these settings, it is extremely difficult to attribute overall improvements to just one of the two complements. For instance, a powerful OS and chipset are both equally necessary to deliver a responsive phone. The fact that consumers may misattribute a device’s performance to one of these two complements says nothing about their underlying contribution to a strong end-product (which, in turn, drives purchase decisions). Likewise, battery life is reportedly one of the most important features for users, yet few realize that a phone’s OS has a large impact on it.

Finally, if consumers were really indifferent to the phone’s operating system, then the Commission should have dropped at least part of its case against Google. The Commission’s claim that Google’s anti-fragmentation agreements harmed consumers (by reducing OS competition) has no purchase if Android is provided free of charge and consumers are indifferent to non-price parameters, such as the quality of a phone’s OS. 

5. Google’s users were not “captured”

Finally, the Commission claimed that consumers are loyal to their smartphone brand and that competition for first time buyers was insufficient to constrain Google’s behavior against its “captured” installed base.

It notably found that 82% of Android users stick with Android when they change phones (compared to 78% for Apple), and that 75% of new smartphones are sold to existing users. 

The Commission asserted, without further evidence, that these numbers proved there was little competition between Android and iOS.

But is this really so? In almost all markets, consumers likely exhibit at least some loyalty to their preferred brand. At what point does this become an obstacle to interbrand competition? The Commission offered no benchmark against which to assess its claims.

And although inter-industry comparisons of churn rates should be taken with a pinch of salt, it is worth noting that the Commission’s implied 18% churn rate for Android is nothing out of the ordinary (see, e.g., here, here, and here), including for industries that could not remotely be called anticompetitive.

To make matters worse, the Commission’s own claimed figures suggest that a large share of sales remained contestable (roughly 39%).

Imagine that, every year, 100 devices are sold in Europe (75 to existing users and 25 to new users, according to the Commission’s figures). Imagine further that the installed base of users is split 76–24 in favor of Android. Under the figures cited by the Commission, it follows that at least 39% of these sales are contestable.

According to the Commission’s figures, there would be 57 existing Android users (76% of 75) and 18 Apple users (24% of 75), of which roughly 10 (18%) and 4 (22%), respectively, switch brands in any given year. There would also be 25 new users who, even according to the Commission, do not display brand loyalty. The result is that out of 100 purchasers, 25 show no brand loyalty and 14 switch brands. And even this completely ignores the number of consumers who consider switching but choose not to after assessing the competitive options.
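
For readers who wish to check the arithmetic, the following short sketch reproduces the calculation above. All inputs are the Commission’s figures as reported in this post; the rounding is the only choice the sketch makes.

```python
# Reproduces the contestable-sales arithmetic above, using the figures
# reported in this post: 75/25 split between existing and new buyers,
# a 76-24 installed base, 82% Android retention, 78% Apple retention.

sales = 100
to_existing = 75
to_new = 25  # new users display no brand loyalty, per the Commission

android_buyers = round(0.76 * to_existing)   # 57 existing Android users
apple_buyers = to_existing - android_buyers  # 18 existing Apple users

android_switchers = round(android_buyers * (1 - 0.82))  # ~10 switch away
apple_switchers = round(apple_buyers * (1 - 0.78))      # ~4 switch away

contestable = to_new + android_switchers + apple_switchers
print(f"{contestable} of {sales} sales are contestable "
      f"({contestable / sales:.0%})")  # 39 of 100 sales are contestable (39%)
```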

Conclusion

In short, the preceding paragraphs argue that the Commission did not meet the requisite burden of proof to establish Google’s dominance. Of course, it is one thing to show that the Commission’s reasoning was unsound (it was) and another to establish that its overall conclusion was wrong.

At the very least, I hope these paragraphs will convey a sense that the Commission loaded the dice, so to speak. Throughout the first half of its lengthy decision, it interpreted every piece of evidence against Google, drew significant inferences from benign pieces of information, and often resorted to circular reasoning.

The following post in this blog series argues that these errors also permeate the Commission’s analysis of Google’s allegedly anticompetitive behavior.