Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals.  Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically benefit startups by providing them with the data that large firms possess. The necessary implication is that it is a per se good for small firms to be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good for established platforms to share their services with startups, without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their business to the processes of another firm, which generates an “asset specificity” problem that they then ask the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to take, but it is a risk nonetheless. To pry open Google or Facebook for the benefit of competitors that choose to play to Google and Facebook’s user base, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates meant to improve search quality for users harmed Foundem’s search rankings).

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms will have behaved anticompetitively, but merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning in antitrust law. From its murky, politically motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by regulators’ gut feeling that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP framework’s aversion to concentration as such, blocked the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.

 

The Economist takes on “sin taxes” in a recent article, “‘Sin’ taxes—eg, on tobacco—are less efficient than they look.” The article has several lessons for policy makers eyeing taxes on e-cigarettes and other vapor products.

Historically, the key purpose of taxes was to raise revenue. The “best” taxes would be on goods with few substitutes (i.e., inelastic demand) and on goods deemed to be luxuries. In Wealth of Nations, Adam Smith notes:

Sugar, rum, and tobacco are commodities which are nowhere necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.

The Economist notes that in 1764, a fiscal crisis driven by wars in North America led Britain’s parliament to begin enforcing tariffs on sugar and molasses imported from outside the empire. In the U.S., from 1868 until 1913, 90 percent of all federal revenue came from taxes on liquor, beer, wine, and tobacco.

Over time, the rationale for these taxes has shifted toward “sin taxes” designed to nudge consumers away from harmful or distasteful consumption. The Temperance movement in the U.S. argued for higher taxes to discourage alcohol consumption. Since the Surgeon General’s warning on the dangers of smoking, tobacco tax increases have been justified as a way to get smokers to quit. More recently, a perceived obesity epidemic has led several American cities, as well as Thailand, Britain, Ireland, and South Africa, to impose taxes on sugar-sweetened beverages to reduce sugar consumption.

Because demand curves slope down, “sin taxes” do change behavior by reducing the quantity demanded. However, for many products subject to such taxes, demand is not especially responsive. For example, as shown in the figure below, a one percent increase in the price of tobacco is associated with a one-half of one percent decrease in sales.

[Figure: The Economist’s chart of estimated price elasticities of demand for “sin” goods]

Substitutability is another consideration for tax policy. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. A spike in tobacco taxes in one state will result in a spike in sales in bordering states as well as increase illegal interstate sales or smuggling. The Economist reports:

After Berkeley introduced its tax, sales of sugary drinks rose by 6.9% in neighbouring cities. Denmark, which instituted a tax on fat-laden foods in 2011, ran into similar problems. The government got rid of the tax a year later when it discovered that many shoppers were buying butter in neighbouring Germany and Sweden.

Advocates of “sin” taxes on tobacco, alcohol, and sugar argue that consumption of these products imposes negative externalities on the public, since governments have to spend more to take care of sick people. With approximately one-third of the U.S. population covered by some form of government-funded health insurance, such as Medicare or Medicaid, what were once private costs of healthcare have been transformed into a public cost.

According to the Centers for Disease Control and Prevention (CDC), smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion in direct medical care for adults and (2) more than $156 billion in lost productivity, of which $5.6 billion is attributable to secondhand smoke exposure.

On the other hand, The Economist points out:

Smoking, in contrast, probably saves taxpayers money. Lifelong smoking will bring forward a person’s death by about ten years, which means that smokers tend to die just as they would start drawing from state pensions. In a study published in 2002 Kip Viscusi, an economist at Vanderbilt University who has served as an expert witness on behalf of tobacco companies, estimated that even if tobacco were untaxed, Americans could still expect to save the government an average of 32 cents for every pack of cigarettes they smoke.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. For example, much of the direct cost is borne by private insurers, which charge steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy should evaluate the discounted costs imposed by today’s smokers.
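The point about discounting can be made concrete with a minimal sketch. The figures here are invented for illustration (they are not CDC numbers): a smoking-related cost incurred decades from now should be converted to present value before being weighed against a tax collected today.

```python
def present_value(future_cost, years_ahead, discount_rate=0.03):
    """Discount a cost incurred `years_ahead` years in the future
    back to today's dollars at the given annual discount rate."""
    return future_cost / (1 + discount_rate) ** years_ahead

# A hypothetical $10,000 medical cost incurred 30 years from now,
# discounted at 3 percent per year:
print(f"${present_value(10_000, 30):,.0f}")  # roughly $4,120 today
```

Even at a modest 3 percent discount rate, a cost arriving 30 years out shrinks to less than half its face value, which is why the timing of smoking-related costs matters for tax policy.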

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines. Thus, in practice, there is no clear nexus between the taxes levied on tobacco and governments’ use of those revenues on smoking-related costs.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, such as e-cigarettes, “heat-not-burn” products, and smokeless tobacco, are considerably less harmful than combustible products.

Many experts believe that the best option for smokers who are unable or unwilling to quit smoking is to switch to a less harmful alternative activity that has similar attributes, such as using non-combustible nicotine delivery products. Policies that encourage smokers to switch from more harmful combustible tobacco products to less harmful non-combustible products would be considered a form of “harm reduction.”

Nine U.S. states now have taxes on vapor products. In addition, several local jurisdictions have enacted taxes. Their methods and levels of taxation vary widely. Policy makers considering a tax on vapor products should account for the following factors.

  • The current market for e-cigarettes, as well as heat-not-burn products, is in the range of 0-10 percent of the cigarette market. Given the relatively small size of the e-cigarette and heated tobacco product market, it is unlikely that any level of taxation of these products would generate significant tax revenues for the taxing jurisdiction. Moreover, much of the current research likely reflects early adopters and higher-income consumer groups. As such, the current empirical data based on total market size and price/tax levels are likely to be far from indicative of the “actual” market for these products.
  • The demand for e-cigarettes is much more responsive to a change in price than the demand for combustible cigarettes. My review of the published research to date finds a median estimated own-price elasticity of -1.096, meaning something close to a 1-to-1 relationship: a tax resulting in a one percent increase in e-cigarette prices would be associated with a one percent decline in e-cigarette sales. Many of those lost sales would be shifted to purchases of combustible cigarettes.
  • Research on the price responsiveness of vapor products is relatively new and sparse. There are fewer than a dozen published articles, and the first article was published in 2014. As a result, the literature reports a wide range of estimated elasticities that calls into question the reliability of published estimates, as shown in the figure below. As a relatively unformed area of research, the policy debate would benefit from additional research that involves larger samples with better statistical power, reflects the dynamic nature of this new product category, and accounts for the wide variety of vapor products.
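To make the arithmetic behind these elasticity figures concrete, here is a small sketch of my own (an illustration, not a calculation from the cited studies) that converts an own-price elasticity into the expected change in sales from a tax-induced price increase, under a constant-elasticity approximation of demand:

```python
def quantity_change_pct(elasticity, price_increase_pct):
    """Percent change in quantity demanded for a given percent price
    increase, under constant-elasticity demand: Q'/Q = (P'/P)**elasticity."""
    ratio = (1 + price_increase_pct / 100) ** elasticity
    return (ratio - 1) * 100

# Elasticities discussed in the text:
estimates = {
    "combustible cigarettes": -0.5,   # relatively inelastic demand
    "e-cigarettes": -1.096,           # median of published estimates
}

for product, e in estimates.items():
    change = quantity_change_pct(e, 10)
    print(f"10% price increase -> {change:+.1f}% change in sales of {product}")
```

On these numbers, a 10 percent price increase cuts cigarette sales by roughly 4.7 percent but e-cigarette sales by nearly 10 percent, which is why taxing the less harmful product heavily risks pushing consumers back toward combustibles.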

[Figure: published estimates of the own-price elasticity of demand for vapor products]

With respect to taxation and pricing, policymakers would benefit from reliable information regarding the size of the vapor product market and the degree to which vapor products are substitutes for combustible tobacco products. It may turn out that taxes on vapor products are, as The Economist says of “sin” taxes generally, less efficient than they look.

Senator Mark Warner has proposed 20 policy prescriptions for bringing “big tech” to heel. The proposals — which run the gamut from policing foreign advertising on social networks to regulating feared competitive harms — provide much interesting material for Congress to consider.

On the positive side, Senator Warner introduces the idea that online platforms may be able to function as least-cost avoiders with respect to certain tortious behavior of their users. He advocates for platforms to implement technology that would help control the spread of content that courts have found violated certain rights of third-parties.

Yet, on other accounts — specifically the imposition of an “interoperability” mandate on platforms — his proposals risk doing more harm than good.

The interoperability mandate was included by Senator Warner in order to “blunt [tech platforms’] ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.” According to Senator Warner, such a measure would enable startups to offset the advantages that arise from network effects on large tech platforms by building their services more easily on the backs of successful incumbents.

Whatever you think of the moats created by network effects, the example of “successful” previous regulation on this issue that Senator Warner relies upon is perplexing:

A prominent template for [imposing interoperability requirements] was in the AOL/Time Warner merger, where the FCC identified instant messaging as the ‘killer app’ – the app so popular and dominant that it would drive consumers to continue to pay for AOL service despite the existence of more innovative and efficient email and internet connectivity services. To address this, the FCC required AOL to make its instant messaging service (AIM, which also included a social graph) interoperable with at least one rival immediately and with two other rivals within 6 months.

But the AOL/Time Warner merger and the FCC’s conditions provide an example that demonstrates the exact opposite of what Senator Warner suggests. The much-feared 2001 megamerger prompted, as the Senator notes, fears that the new company would be able to leverage its dominance in the nascent instant messaging market to extend its influence into adjacent product markets.

Except that by 2003, despite it being unclear whether AOL had ever made AIM interoperable, two large competitors had arisen running non-interoperable IM networks of their own (Yahoo! and Microsoft). In that same period, AOL’s previously 100% share of the IM market had declined by about half. By 2009, after eight years of heavy losses, Time Warner shed AOL, and by last year AIM was completely dead.

Not only was it unclear whether AOL ever made AIM interoperable; AIM was never able to catch up once better rival services launched. What the conditions did do, however, was prevent AOL from launching competitive video chat services as it flailed about in the wake of the deal, thus forcing it to miss out on a market opportunity available to unencumbered competitors like Microsoft and Yahoo!

And all of this of course ignores the practical impossibility entailed in interfering in highly integrated technology platforms.

The AOL/Time Warner merger conditions are no template for successful tech regulation. Congress would be ill-advised to rely upon such templates for crafting policy around tech and innovation.

What to make of Wednesday’s decision by the European Commission alleging that Google has engaged in anticompetitive behavior? In this post, I contrast the European Commission’s (EC) approach to competition policy with US antitrust, briefly explore the history of smartphones and then discuss the ruling.

Asked about the EC’s decision the day it was announced, FTC Chairman Joseph Simons noted that, while the market is concentrated, Apple and Google “compete pretty heavily against each other” with their mobile operating systems, in stark contrast to the way the EC defined the market. Simons also stressed that for the FTC what matters is not the structure of the market per se but whether or not there is harm to the consumer. This again contrasts with the European Commission’s approach, which does not require harm to consumers. As Simons put it:

Once they [the European Commission] find that a company is dominant… that imposes upon the company kind of like a fairness obligation irrespective of what the effect is on the consumer. Our regulatory… our antitrust regime requires that there be a harm to consumer welfare — so the consumer has to be injured — so the two tests are a little bit different.

Indeed, and as the history below shows, the popularity of Apple’s iOS and Google’s Android operating systems arose because they were superior products — not because of anticompetitive conduct on the part of either Apple or Google. On the face of it, the conduct of both Apple and Google has led to consumer benefits, not harms. So, from the perspective of U.S. antitrust authorities, there is no reason to take action.

Moreover, there is a danger that by taking action as the EU has done, competition and innovation will be undermined — which would be a perverse outcome indeed. These concerns were reflected in a statement by Senator Mike Lee (R-UT):

Today’s decision by the European Commission to fine Google over $5 billion and require significant changes to its business model to satisfy EC bureaucrats has the potential to undermine competition and innovation in the United States,” Sen. Lee said. “Moreover, the decision further demonstrates the different approaches to competition policy between U.S. and EC antitrust enforcers. As discussed at the hearing held last December before the Senate’s Subcommittee on Antitrust, Competition Policy & Consumer Rights, U.S. antitrust agencies analyze business practices based on the consumer welfare standard. This analytical framework seeks to protect consumers rather than competitors. A competitive marketplace requires strong antitrust enforcement. However, appropriate competition policy should serve the interests of consumers and not be used as a vehicle by competitors to punish their successful rivals.

Ironically, the fundamental basis for the Commission’s decision is an analytical framework developed by economists at Harvard in the 1950s, which presumes that the structure of a market determines the conduct of the participants, which in turn presumptively affects outcomes for consumers. This “structure-conduct-performance” paradigm has been challenged both theoretically and empirically (and by “challenged,” I mean “demolished”).

Maintaining, as EC Commissioner Vestager has, that “What would serve competition is to have more players,” is to adopt a presumption regarding competition rooted in the structure of the market, without sufficient attention to the facts on the ground. As French economist Jean Tirole noted in his Nobel Prize lecture:

Economists accordingly have advocated a case-by-case or “rule of reason” approach to antitrust, away from rigid “per se” rules (which mechanically either allow or prohibit certain behaviors, ranging from price-fixing agreements to resale price maintenance). The economists’ pragmatic message however comes with a double social responsibility. First, economists must offer a rigorous analysis of how markets work, taking into account both the specificities of particular industries and what regulators do and do not know….

Second, economists must participate in the policy debate…. But of course, the responsibility here goes both ways. Policymakers and the media must also be willing to listen to economists.

In good Tirolean fashion, we begin with an analysis of how the market for smartphones developed. What quickly emerges is that the structure of the market is a function of intense competition, not its absence. And, by extension, mandating a different structure will likely impede competition, or, at the very least, will not likely contribute to it.

A brief history of smartphone competition

In 2006, Nokia’s N70 became the first smartphone to sell more than a million units. It was a beautiful device, with a simple touch screen interface and real push buttons for numbers. The following year, Apple released its first iPhone. It sold 7 million units — about the same as Nokia’s N95 and slightly less than LG’s Shine. Not bad, but paltry compared to the sales of Nokia’s 1200 series phones, which had combined sales of over 250 million that year — about twice the total of all smartphone sales in 2007.

By 2017, smartphones had come to dominate the market, with total sales of over 1.5 billion. At the same time, the structure of the market had changed dramatically. In the first quarter of 2018, Apple’s iPhone X and iPhone 8 were the two best-selling smartphones in the world. In total, Apple shipped just over 52 million phones, accounting for 14.5% of the global market. Samsung, which has a wider range of devices, sold even more: 78 million phones, or 21.7% of the market. In third and fourth place were Huawei (11%) and Xiaomi (7.5%). Nokia and LG didn’t even make it into the top 10, with market shares of only 3% and 1% respectively.

Several factors have driven this highly dynamic market. Dramatic improvements in cellular data networks have played a role. But arguably of greater importance has been the development of software that offers consumers an intuitive and rewarding experience.

Apple’s iOS and Google’s Android operating systems have proven to be enormously popular among both users and app developers. This has generated synergies — what economists call network externalities: as more apps are developed, more people are attracted to the ecosystem, and vice versa, a virtuous circle that benefits both users and app developers.

By contrast, Nokia’s early smartphones, including the N70 and N95, ran Symbian, the operating system developed for Psion’s handheld devices, which had a clunkier user interface and was more difficult to code for — so it was less attractive to both users and developers. In addition, Symbian lacked an effective means of solving the fragmentation of the operating system across different devices, which made it difficult for developers to create apps that ran across the ecosystem — something both Apple (through its closed system) and Google (through agreements with carriers) were able to address. Meanwhile, Java’s MIDP, used in LG’s Shine, and its successor J2ME imposed restrictions on developers (such as prohibiting access to files, hardware, and network connections) that seem to have made them less attractive than Android.

The relative superiority of their operating systems enabled Apple and the manufacturers of Android-based phones to steal a march on the early leaders in the smartphone revolution.

The fact that Google allows smartphone manufacturers to install Android for free, distributes Google Play and other apps in a free bundle, and pays such manufacturers for preferential treatment for Google Search, has also kept the cost of Android-based smartphones down. As a result, Android phones are the cheapest on the market, providing a powerful experience for as little as $50. It is reasonable to conclude from this that innovation, driven by fierce competition, has led to devices, operating systems, and apps that provide enormous benefits to consumers.

The Commission decision would harm device manufacturers, app developers and consumers

The EC’s decision seems to disregard the history of smartphone innovation and competition and their ongoing consequences. As Dirk Auer explains, the Open Handset Alliance (OHA) was created specifically to offer an effective alternative to Apple’s iPhone — and it worked. Indeed, it worked so spectacularly that Android is installed on about 80% of all new phones. This success was the result of several factors that the Commission now seeks to undermine:

First, in order to maintain order within the Android universe, and thereby ensure that apps developed for Android would function on the vast majority of Android devices, Google and the OHA sought to limit the extent to which Android “forks” could be created. (Apple didn’t face this problem because its source code is proprietary, so cannot be modified by third-party developers.) One way Google does this is by imposing restrictions on the licensing of its proprietary apps, such as the Google Play store (a repository of apps, similar to Apple’s App Store).

Device manufacturers that don’t conform to these restrictions may still build devices with their forked version of Android — but without those Google apps. Indeed, Amazon chose to develop a non-conforming version of Android and built its own app repository for its Fire devices (though it is still possible to add the Google Play Store). That strategy seems to be working for Amazon in the tablet market; in 2017 it rose past Samsung to become the second-biggest manufacturer of tablets worldwide, after Apple.

Second, in order to be able to offer Android for free to smartphone manufacturers, Google sought to develop unique revenue streams (because, although the software is offered for free, it turns out that software developers generally don’t work for free). The main way Google did this was by requiring manufacturers that choose to install Google Play also to install its browser (Chrome) and search tools, which generate revenue from advertising. At the same time, Google kept its platform open by permitting preloads of rivals’ apps and creating a marketplace where rivals can also reach scale. Mozilla’s Firefox browser, for example, has been downloaded over 100 million times on Android.

The importance of these factors to the success of Android is acknowledged by the EC. But instead of treating them as legitimate business practices that enabled the development of high-quality, low-cost smartphones and a universe of apps that benefits billions of people, the Commission simply asserts that they are harmful, anticompetitive practices.

For example, the Commission asserts that

In order to be able to pre-install on their devices Google’s proprietary apps, including the Play Store and Google Search, manufacturers had to commit not to develop or sell even a single device running on an Android fork. The Commission found that this conduct was abusive as of 2011, which is the date Google became dominant in the market for app stores for the Android mobile operating system.

This is simply absurd, to say nothing of ahistorical. As noted, the restrictions on Android forks play an important role in maintaining the coherency of the Android ecosystem. If device manufacturers were able to freely install Google apps (and other apps via the Play Store) on devices running problematic Android forks that were unable to run the apps properly, consumers — and app developers — would be frustrated, Google’s brand would suffer, and the value of the ecosystem would be diminished. Extending this restriction to all devices produced by a specific manufacturer, regardless of whether they come with Google apps preinstalled, reinforces the importance of the prohibition to maintaining the coherency of the ecosystem.

It is ridiculous to say that something (efforts to rein in Android forking) that made perfect sense until 2011 and that was central to the eventual success of Android suddenly becomes “abusive” precisely because of that success — particularly when the pre-2011 efforts were often viewed as insufficient and unsuccessful (a January 2012 Guardian Technology Blog post, “How Google has lost control of Android,” sums it up nicely).

Meanwhile, if Google is unable to tie pre-installation of its search and browser apps to the installation of its app store, then it will have less financial incentive to continue to maintain the Android ecosystem. Or, more likely, it will have to find other ways to generate revenue from the sale of devices in the EU — such as charging device manufacturers for Android or Google Play. The result is that consumers will be harmed, either because the ecosystem will be degraded, or because smartphones will become more expensive.

The troubling absence of Apple from the Commission’s decision

In addition, the EC’s decision is troubling in other ways. First, there is its definition of the market. The ruling asserts that “Through its control over Android, Google is dominant in the worldwide market (excluding China) for licensable smart mobile operating systems, with a market share of more than 95%.” But “licensable smart mobile operating systems” is a very narrow definition, as it necessarily excludes operating systems that are not licensable — such as Apple’s iOS and RIM’s BlackBerry OS. Since Apple holds nearly 25% of the European smartphone market, the European Commission has — through its definition of the market — presumed away the primary source of effective competition. As Pinar Akman has noted:

How can Apple compete with Google in the market as defined by the Commission when Apple allows only itself to use its operating system only on devices that Apple itself manufactures?

The EU then invents a series of claims regarding the lack of competition with Apple:

  • end user purchasing decisions are influenced by a variety of factors (such as hardware features or device brand), which are independent from the mobile operating system;

It is not obvious that this is evidence of a lack of competition. A better explanation is that the EU’s narrow definition of the market is defective. In fact, one could easily draw the opposite conclusion from the one drawn by the Commission: the fact that purchasing decisions are driven by various factors suggests that there is substantial competition, with phone manufacturers seeking to design phones that offer a range of features, on a number of dimensions, to best capture diverse consumer preferences. They are able to do this in large part precisely because consumers can rely upon a generally similar operating system and continued access to the apps they have downloaded. As Tim Cook likes to remind his investors, Apple is quite successful at persuading “Android switchers” to move to iOS.

  • Apple devices are typically priced higher than Android devices and may therefore not be accessible to a large part of the Android device user base;

And yet, in the first quarter of 2018, Apple phones accounted for five of the top ten selling smartphones worldwide. Meanwhile, several competing phones, including the fifth and sixth best-sellers, Samsung’s Galaxy S9 and S9+, sell for similar prices to the most expensive iPhones. And a refurbished iPhone 6 can be had for less than $150.

  • Android device users face switching costs when switching to Apple devices, such as losing their apps, data and contacts, and having to learn how to use a new operating system;

This is, of course, true for any system switch. And yet the growing market share of Apple phones suggests that some users are willing to part with those sunk costs. Moreover, the increasing predominance of cloud-based and cross-platform apps, as well as Apple’s own “Move to iOS” Android app (which facilitates the transfer of users’ data from Android to iOS), means that the costs of switching border on trivial. As mentioned above, Tim Cook certainly believes in “Android switchers.”

  • even if end users were to switch from Android to Apple devices, this would have limited impact on Google’s core business. That’s because Google Search is set as the default search engine on Apple devices and Apple users are therefore likely to continue using Google Search for their queries.

This is perhaps the most bizarre objection of them all. The fact that Apple chooses to install Google search as the default demonstrates that consumers prefer that system over others. Indeed, this highlights a fundamental problem with the Commission’s own rationale. As Akman notes:

It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) cases of tying in the EU to date concerned tying where the dominant undertaking leveraged its dominance in one market to distort or eliminate competition in an otherwise competitive market.

Conclusion

As the foregoing demonstrates, the EC’s decision is based on a fundamental misunderstanding of the nature and evolution of the market for smartphones and associated applications. The statement by Commissioner Vestager quoted above — that “What would serve competition is to have more players” — betrays this misunderstanding and highlights the erroneous assumptions underpinning the Commission’s analysis, which is wedded to a theory of market competition that economists discarded long ago.

And, thankfully, it appears that the FTC Chairman is aware of at least some of the flaws in the EC’s conclusions.

Google will undoubtedly appeal the Commission’s decision. For the sake of the millions of European consumers who rely on Android-based phones and the millions of software developers who provide Android apps, let’s hope that it succeeds.

By Pinar Akman, Professor of Law, University of Leeds*

The European Commission’s decision in Google Android treads a fine line between punishing a company for its success and punishing a company for falling afoul of the rules of the game. Which side of the line it actually falls on cannot be fully understood until the Commission publishes its full decision. Much depends on the intricate facts of the case. As the full decision may take months to appear, this post offers merely the author’s initial thoughts on the decision on the basis of the publicly available information.

The eye-watering fine of $5.1 billion — which, together with the fine of $2.7 billion in last year’s Google Shopping decision, would (according to one estimate) fund almost one year of the additional annual public spending necessary to eradicate world hunger by 2030 — will not be further discussed in this post. This is because the fine is assumed to have been duly calculated on the basis of the Commission’s relevant Guidelines, and, from a legal and commercial point of view, the absolute size of the fine is not as important as the infringing conduct and the remedy Google will need to adopt to comply with the decision.

First things first. This post proceeds on the premise that the aim of competition law is to prevent the exclusion of competitors that are (at least) as efficient as the dominant incumbent, whose exclusion would ultimately harm consumers.

Next, it needs to be noted that the Google Android case is a more conventional antitrust case than Google Shopping in the sense that one can at least envisage a potentially robust antitrust theory of harm in the former case. If a dominant undertaking ties its products together to exclude effective competition in some of these markets or if it pays off customers to exclude access by its efficient competitors to consumers, competition law intervention may be justified.

The central question in Google Android is whether on the available facts this appears to have happened.

What we know and market definition

The premise of the case is that Google used its dominance in the Google Play Store (which enables users to download apps onto their Android phones) to “cement Google’s dominant position in general internet search.”

It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) cases of tying in the EU to date concerned tying where the dominant undertaking leveraged its dominance in one market to distort or eliminate competition in an otherwise competitive market.

Thus, for example, in Microsoft (Windows Operating System —> media players), Hilti (patented cartridge strips —> nails), and Tetra Pak II (packaging machines —> non-aseptic cartons), the tied market was actually or potentially competitive, and this was why the tying was alleged to have eliminated competition. It will be interesting to see which case the Commission uses as precedent in its decision — more on that later.

Also noteworthy is that the Commission does not appear to have defined a separate mobile search market that would have been competitive but for Google’s alleged leveraging. The market has been defined as the general internet search market. So, according to the Commission, the Google Search App and Google Search engine appear to be one and the same thing, and desktop and mobile devices are equivalent (or substitutable).

Finding mobile and desktop devices to be equivalent to one another may have implications for other cases including the ongoing appeal in Google Shopping where, for example, the Commission found that “[m]obile [apps] are not a viable alternative for replacing generic search traffic from Google’s general search results pages” for comparison shopping services. The argument that mobile apps and mobile traffic are fundamental in Google Android but trivial in Google Shopping may not play out favourably for the Commission before the Court of Justice of the EU.

Another interesting market definition point is that the Commission has found Apple not to be a competitor to Google in the relevant market defined by the Commission: the market for “licensable smart mobile operating systems.” Apple does not fall within that market because Apple does not license its mobile operating system to anyone: Apple’s model eliminates all possibility of competition from the start and is by definition exclusive.

Although there is some internal logic in the Commission’s exclusion of Apple from the upstream market that it has defined, is this not a bit of a definitional stop? How can Apple compete with Google in the market as defined by the Commission when Apple allows only itself to use its operating system only on devices that Apple itself manufactures?

To be fair, the Commission does consider there to be some competition between Apple and Android devices at the level of consumers — just not sufficient to constrain Google at the upstream, manufacturer level.

Nevertheless, the implication of the Commission’s assessment that separates the upstream and downstream in this way is akin to saying that the world’s two largest corn producers that produce the corn used to make corn flakes do not compete with one another in the market for corn flakes because one of them uses its corn exclusively in its own-brand cereal.

Although the Commission cabins the use of supply-side substitutability in market definition, its own guidance on the topic notes that

Supply-side substitutability may also be taken into account when defining markets in those situations in which its effects are equivalent to those of demand substitution in terms of effectiveness and immediacy. This means that suppliers are able to switch production to the relevant products and market them in the short term….

Apple could — presumably — rather immediately and at minimal cost produce and market a version of iOS for use on third-party device makers’ devices. By the Commission’s own definition, it would seem to make sense to include Apple in the relevant market. Nevertheless, it has apparently not done so here.

The message that the Commission sends with the finding is that if Android had not been open source and freely available, and if Google competed with Apple with its own version of a walled-garden built around exclusivity, it is possible that none of its practices would have raised any concerns. Or, should Apple be expecting a Statement of Objections next from the EU Commission?

Is Microsoft really the relevant precedent?

Given that Google Android appears to revolve around the idea of tying and leveraging, the EU Commission’s infringement decision against Microsoft, which found an abusive tie in Microsoft’s tying of Windows Operating System with Windows Media Player, appears to be the most obvious precedent, at least for the tying part of the case.

There are, however, potentially important factual differences between the two cases. To take just a few examples:

  • Microsoft charged for the Windows Operating System, whereas Google does not;
  • Microsoft tied the setting of Windows Media Player as the default to OEMs’ licensing of the operating system (Windows), whereas Google ties the setting of Search as the default to device makers’ use of other Google apps, while allowing them to use the operating system (Android) without any Google apps; and
  • Downloading competing media players was difficult due to download speeds and lack of user familiarity, whereas it is trivial and commonplace for users to download apps that compete with Google’s.

Moreover, there are also some conceptual hurdles in finding the conduct to be that of tying.

First, the difference between “pre-installed,” “default,” and “exclusive” matters a lot in establishing whether effective competition has been foreclosed. The Commission’s Press Release notes that to pre-install Google Play, manufacturers have to also pre-install Google Search App and Google Chrome. It also states that Google Search is the default search engine on Google Chrome. The Press Release does not indicate that Google Search App has to be the exclusive or default search app. (It is worth noting, however, that the Statement of Objections in Google Android did allege that Google violated EU competition rules by requiring Search to be installed as the default. We will have to await the decision itself to see if this was dropped from the case or simply not mentioned in the Press Release).

Indeed, the fact that the other infringement found is that of Google’s making payments to manufacturers in return for exclusively pre-installing the Google Search App indirectly suggests that not every manufacturer pre-installs Google Search App as the exclusive, pre-installed search app. This means that any other search app (provider) can also (request to) be pre-installed on these devices. The same goes for the browser app.

Of course, regardless, even if the manufacturer does not pre-install competing apps, the consumer is free to download any other app — for search or browsing — as they wish, and can do so in seconds.

In short, pre-installation on its own does not necessarily foreclose competition, and thus may not constitute an illegal tie under EU competition law. This is particularly so when download speeds are fast (unlike the case at the time of Microsoft) and consumers regularly do download numerous apps.

What may, however, potentially foreclose effective competition is where a dominant undertaking makes payments to stop its customers, as a practical matter, from selling its rivals’ products. Intel, for example, was found to have abused its dominant position through payments to a computer retailer in return for its not selling computers with its competitor AMD’s chips, and to computer manufacturers in return for delaying the launch of computers with AMD chips.

In Google Android, the exclusivity provision that would require manufacturers to pre-install Google Search App exclusively in return for financial incentives may be deemed to be similar to this.

Having said that, unlike in Intel where a given computer can have a CPU from only one given manufacturer, even the exclusive pre-installation of the Google Search App would not have prevented consumers from downloading competing apps. So, again, in theory effective competition from other search apps need not have been foreclosed.

It must also be noted that just because a Google app is pre-installed does not mean that it generates any revenue for Google — consumers have to actually choose to use that app, as opposed to another one that they might prefer, in order for Google to earn any revenue from it. The Commission seems to place substantial weight on pre-installation, which it alleges creates “a status quo bias.”

The concern with this approach is that it is not possible to know whether those consumers who do not download competing apps refrain from doing so out of a preference for Google’s apps or, instead, for other reasons that might indicate that competition is not working. Indeed, one hurdle as regards conceptualising the infringement as tying is that it would require establishing that a significant number of phone users would actually prefer to use Google Play Store (the tying product) without Google Search App (the tied product).

This is because, according to the Commission’s Guidance Paper, establishing tying starts with identifying two distinct products, and

[t]wo products are distinct if, in the absence of tying or bundling, a substantial number of customers would purchase or would have purchased the tying product without also buying the tied product from the same supplier.

Thus, if a substantial number of customers would not want to use Google Play Store without also preferring to use Google Search App, this would cause a conceptual problem for making out a tying claim.

In fact, the conduct at issue in Google Android may be closer to a refusal to supply type of abuse.

Refusal to supply also seems to make more sense regarding the prevention of the development of Android forks being found to be an abuse. In this context, it will be interesting to see how the Commission overcomes the argument that Android forks can be developed freely and Google may have legitimate business reasons in wanting to associate its own, proprietary apps only with a certain, standardised-quality version of the operating system.

More importantly, the possible underlying theory in this part of the case is that the Google apps — and perhaps even the licensed version of Android — are a “must-have,” which is close to an argument that they are an essential facility in the context of Android phones. But that would indeed require a refusal to supply type of abuse to be established, which does not appear to be the case.

What will happen next?

To answer the question raised in the title of this post — whether the Google Android decision will benefit consumers — one needs to consider what Google may do in order to terminate the infringing conduct as required by the Commission, whilst also still generating revenue from Android.

This is because unbundling Google Play Store, Google Search App and Google Chrome (to allow manufacturers to pre-install Google Play Store without the latter two) will disrupt Google’s main revenue stream (i.e., ad revenue generated through the use of Google Search App or Google Search within the Chrome app) which funds the free operating system. This could lead Google to start charging for the operating system, and limiting to whom it licenses the operating system under the Commission’s required, less-restrictive terms.

As the Commission does not seem to think that Apple constrains Google when it comes to dealings with device manufacturers, in theory, Google should be able to charge up to the monopoly level licensing fee to device manufacturers. If that happens, the price of Android smartphones may go up. It is possible that there is a new competitor lurking in the woods that will grow and constrain that exercise of market power, but how this will all play out for consumers — as well as app developers who may face increasing costs due to the forking of Android — really remains to be seen.

* Pinar Akman is Professor of Law, Director of Centre for Business Law and Practice, University of Leeds, UK. This piece has not been commissioned or funded by any entity. The author has not been involved in the Google Android case in any capacity. In the past, the author wrote a piece on the Commission’s Google Shopping case, ‘The Theory of Abuse in Google Search: A Positive and Normative Assessment under EU Competition Law,’ supported by a research grant from Google. The author would like to thank Peter Whelan, Konstantinos Stylianou, and Geoffrey Manne for helpful comments. All errors remain her own. The author can be contacted here.

Today the European Commission launched its latest salvo against Google, issuing a decision in its three-year antitrust investigation into the company’s agreements for distribution of the Android mobile operating system. The massive fine levied by the Commission will dominate the headlines, but the underlying legal theory and proposed remedies are just as notable — and just as problematic.

The nirvana fallacy

It is sometimes said that the most important question in all of economics is “compared to what?” UCLA economist Harold Demsetz — one of the most important regulatory economists of the past century — coined the term “nirvana fallacy” to critique would-be regulators’ tendency to compare messy, real-world economic circumstances to idealized alternatives, and to justify policies on the basis of the discrepancy between them. Wishful thinking, in other words.

The Commission’s Android decision falls prey to the nirvana fallacy. It conjures a world in which Google offers its Android operating system on unrealistic terms, prohibits it from doing otherwise, and neglects the actual consequences of such a demand.

The idea at the core of the Commission’s decision is that by making its own services (especially Google Search and Google Play Store) easier to access than competing services on Android devices, Google has effectively foreclosed rivals from effective competition. In order to correct that claimed defect, the Commission demands that Google refrain from engaging in practices that favor its own products in its Android licensing agreements:

At a minimum, Google has to stop and to not re-engage in any of the three types of practices. The decision also requires Google to refrain from any measure that has the same or an equivalent object or effect as these practices.

The basic theory is straightforward enough, but its application here reflects a troubling departure from the underlying economics and a romanticized embrace of industrial policy that is unsupported by the realities of the market.

In a recent interview, European Commission competition chief, Margrethe Vestager, offered a revealing insight into her thinking about her oversight of digital platforms, and perhaps the economy in general: “My concern is more about whether we get the right choices,” she said. Asked about Facebook, for example, she specified exactly what she thinks the “right” choice looks like: “I would like to have a Facebook in which I pay a fee each month, but I would have no tracking and advertising and the full benefits of privacy.”

Some consumers may well be sympathetic with her preference (and even share her specific vision of what Facebook should offer them). But what if competition doesn’t result in our — or, more to the point, Margrethe Vestager’s — preferred outcomes? Should competition policy nevertheless enact the idiosyncratic consumer preferences of a particular regulator? What if offering consumers the “right” choices comes at the expense of other things they value, like innovation, product quality, or price? And, if so, can antitrust enforcers actually engineer a better world built around these preferences?

Android’s alleged foreclosure… that doesn’t really foreclose anything

The Commission’s primary concern is with the terms of Google’s deal: In exchange for royalty-free access to Android and a set of core, Android-specific applications and services (like Google Search and Google Maps), Google imposes a few contractual conditions.

Google allows manufacturers to use the Android platform — in which the company has invested (and continues to invest) billions of dollars — for free. It does not require device makers to include any of its core, Google-branded features. But if a manufacturer does decide to use any of them, it must include all of them, and make Google Search the device default. In another (much smaller) set of agreements, Google also offers device makers a small share of its revenue from Search if they agree to pre-install only Google Search on their devices (although users remain free to download and install any competing services they wish).

Essentially, that’s it. Google doesn’t allow device makers to pick and choose between parts of the ecosystem of Google products, free-riding on Google’s brand and investments. But manufacturers are free to use the Android platform and to develop their own competing brand built upon Google’s technology.

Other apps may be installed in addition to Google’s core apps. Google Search need not be the exclusive search service, but it must be offered out of the box as the default. Google Play and Chrome must be made available to users, but other app stores and browsers may be pre-installed and even offered as the default. And device makers who choose to do so may share in Search revenue by pre-installing Google Search exclusively — but users can and do install a different search service.

Alternatives to all of Google’s services (including Search) abound on the Android platform. It’s trivial both to install them and to set them as the default. Meanwhile, device makers regularly choose to offer these apps alongside Google’s services, and some, like Samsung, have developed entire customized app suites of their own. Still others, like Amazon, pre-install no Google apps and use Android without any of these constraints (and Amazon’s Google-free tablets are regularly ranked among the best-rated and most popular in Europe).

By contrast, Apple bundles its operating system with its devices, bypasses third-party device makers entirely, and offers consumers access to its operating system only if they pay (lavishly) for one of the very limited number of devices the company offers, as well. It is perhaps not surprising — although it is enlightening — that Apple earns more revenue in an average quarter from iPhone sales than Google is reported to have earned in total from Android since it began offering it in 2008.

Reality — and the limits it imposes on efforts to manufacture nirvana

The logic behind Google’s approach to Android is obvious: It is the extension of Google’s “advertisers pay” platform strategy to mobile. Rather than charging device makers (and thus consumers) directly for its services, Google earns its revenue by charging advertisers for targeted access to users via Search. Remove Search from mobile devices and you remove the mechanism by which Google gets paid.

It’s true that most device makers opt to offer Google’s suite of services to European users, and that most users opt to keep Google Search as the default on their devices — that is, indeed, the hoped-for effect, and necessary to ensure that Google earns a return on its investment.

That users often choose to keep using Google services instead of installing alternatives, and that device makers typically choose to engineer their products around the Google ecosystem, isn’t primarily the result of a Google-imposed mandate; it’s the result of consumer preferences for Google’s offerings in lieu of readily available alternatives.

The EU decision against Google appears to imagine a world in which Google will continue to develop Android and allow device makers to use the platform and Google’s services for free, even if the likelihood of recouping its investment is diminished.

The Commission also assessed in detail Google’s arguments that the tying of the Google Search app and Chrome browser were necessary, in particular to allow Google to monetise its investment in Android, and concluded that these arguments were not well founded. Google achieves billions of dollars in annual revenues with the Google Play Store alone, it collects a lot of data that is valuable to Google’s search and advertising business from Android devices, and it would still have benefitted from a significant stream of revenue from search advertising without the restrictions.

For the Commission, Google’s earned enough [trust me: you should follow the link. It’s my favorite joke…].

But that world in which Google won’t alter its investment decisions based on a government-mandated reduction in its allowable return on investment doesn’t exist; it’s a fanciful Nirvana.

Google’s real alternatives to the status quo are charging for the use of Android, closing the Android platform and distributing it (like Apple) only on a fully integrated basis, or discontinuing Android.

In reality, and compared to these actual alternatives, Google’s restrictions are trivial. Remember, Google doesn’t insist that Google Search be exclusive, only that it benefit from a “leg up” by being pre-installed as the default. And on this thin reed Google finances the development and maintenance of the (free) Android operating system and all of the other (free) apps from which Google otherwise earns little or no revenue.

It’s hard to see how consumers, device makers, or app developers would be made better off without Google’s restrictions in the real world, where the alternative is one of the three manifestly less desirable options mentioned above.

Missing the real competition for the trees

What’s more, while ostensibly aimed at increasing competition, the Commission’s proposed remedy — like the conduct it addresses — doesn’t relate to Google’s most significant competitors at all.

Facebook, Instagram, Firefox, Amazon, Spotify, Yelp, and Yahoo, among many others, are some of the most popular apps on Android phones, including in Europe. They aren’t foreclosed by Google’s Android distribution terms, and it’s even hard to imagine that they would be more popular if only Android phones didn’t come with, say, Google Search pre-installed.

It’s a strange anticompetitive story that has Google allegedly foreclosing insignificant competitors while apparently ignoring its most substantial threats.

The primary challenges Google now faces are from Facebook drawing away the most valuable advertising and Amazon drawing away the most valuable product searches (and increasingly advertising, as well). The fact that Google’s challenged conduct has never shifted in order to target these competitors as their threat emerged, and has had no apparent effect on these competitive dynamics, says all one needs to know about the merits of the Commission’s decision and the value of its proposed remedy.

In reality, as Demsetz suggested, Nirvana cannot be designed by politicians, especially in complex, modern technology markets. Consumers’ best hope for something close — continued innovation, low prices, and voluminous choice — lies in the evolution of markets spurred by consumer demand, not regulators’ efforts to engineer them.

Regardless of which standard you want to apply to competition law – consumer welfare, total welfare, hipster, or redneck antitrust – it’s never good when competition/antitrust agencies are undermining innovation. Yet, this is precisely what the European Commission is doing.

Today, the agency announced a €4.34 billion fine against Alphabet (Google). It represents more than 30% of what the company invests annually in R&D (based on 2017 figures). This is more than likely to force Google to cut its R&D investments, or, at least, to slow them down.
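As a rough sanity check of that 30% figure, here is a back-of-the-envelope calculation. The inputs are my own assumptions, not figures from this post: Alphabet's 2017 R&D expense of roughly $16.6 billion and a mid-2018 euro-dollar rate of about 1.17.

```python
# Back-of-the-envelope check of the fine-to-R&D ratio.
# Assumptions (not from the post): Alphabet's 2017 R&D expense was
# roughly $16.6 billion, and EUR/USD traded near 1.17 in mid-2018.
FINE_EUR_BN = 4.34        # the Commission's fine, in billions of euros
EUR_USD = 1.17            # assumed mid-2018 exchange rate
RD_2017_USD_BN = 16.6     # assumed Alphabet 2017 R&D expense, in $bn

fine_usd_bn = FINE_EUR_BN * EUR_USD
ratio = fine_usd_bn / RD_2017_USD_BN
print(f"fine ≈ ${fine_usd_bn:.2f}bn, ratio ≈ {ratio:.1%}")
```

Under these assumptions the fine comes to a bit over $5 billion, or slightly more than 30% of the assumed R&D figure, consistent with the claim above.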

In fact, the company says in a recent 10-K filing with the SEC that it is uncertain as to the impact of these sanctions on its financial stability. It follows that the European Commission is necessarily ignorant of such effects as well, and that they are thus clearly not reflected in the calculation of its fine.

One thing is for sure, however: In the end, consumers will suffer if the failure to account for the fine’s effect on innovation leads to less of it from Google.

And Google is not alone in this situation. In a paper just posted by the International Center for Law & Economics, I conduct an empirical study comparing all the fines imposed by the European Commission on the basis of Article 102 TFEU over the period 2004 to 2018 (Android included) with the annual R&D investments by the targeted companies.

The results are indisputable: The European Commission’s fines are disproportionate in this regard and have the probable effect of slowing down the innovation of numerous sanctioned companies.

For this reason, an innovation protection mechanism should be incorporated into the calculation of the EU’s Article 102 fines. I propose doing so by introducing a new limit that caps Article 102 fines at a certain percentage of companies’ investment in R&D.
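The mechanics of the proposed cap are simple enough to illustrate with a short calculation. The sketch below uses the €4.34 billion Android fine from the decision; the annual R&D figure and the 10% cap rate are hypothetical placeholders chosen for illustration, not values taken from the paper:

```python
# Illustrative sketch of an innovation-protection cap on Article 102 fines.
# The 4.34 (EUR bn) fine is from the Android decision; the R&D figure and the
# 10% cap rate are hypothetical placeholders, not values from the paper.

def capped_fine(fine_bn: float, rd_investment_bn: float, cap_rate: float) -> float:
    """Limit a fine to cap_rate times the company's annual R&D investment."""
    return min(fine_bn, cap_rate * rd_investment_bn)

fine = 4.34       # EUR bn, the Android fine
rd = 14.0         # EUR bn, hypothetical annual R&D spend
cap_rate = 0.10   # hypothetical cap: 10% of annual R&D

print(f"fine as share of R&D: {fine / rd:.0%}")  # ~31%, consistent with the >30% figure above
print(f"fine after proposed cap: {capped_fine(fine, rd, cap_rate):.2f} bn EUR")
```

Under these hypothetical numbers, the cap would reduce the fine from €4.34 billion to €1.40 billion; the actual percentage the paper would set is a policy choice, not something the sketch determines.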

The full paper is available here.

Our story begins on the morning of January 9, 2007. Few people knew it at the time, but the world of wireless communications was about to change forever. Steve Jobs walked on stage wearing his usual turtleneck, and proceeded to reveal the iPhone. The rest, as they say, is history. The iPhone moved the wireless communications industry towards a new paradigm. No more physical keyboards, clamshell bodies, and protruding antennae. All of these were replaced by a beautiful black design, a huge touchscreen (3.5” was big for that time), a rear-facing camera, and (a little bit later) a revolutionary new way to consume applications: the App Store. Sales soared and Apple’s stock started an upward trajectory that would see it become one of the world’s most valuable companies.

The story could very well have ended there. If it had, we might all be using iPhones today. However, years before, Google had commenced its own march into the wireless communications space by purchasing a small startup called Android. A first phone had initially been slated for release in late 2007. But Apple’s iPhone announcement sent Google back to the drawing board. It took Google and its partners until 2010 to come up with a competitive answer – the Google Nexus One produced by HTC.

Understanding the strategy that Google put in place during this three-year timespan is essential to understanding the European Commission’s Google Android decision.

How to beat one of the great innovations?

In order to overthrow — or even merely just compete with — the iPhone, Google faced the same dilemma that most second-movers have to contend with: imitate or differentiate. Its solution was a mix of both. It took the touchscreen, camera, and applications, but departed on one key aspect. Whereas Apple controls the iPhone from end-to-end, Google opted for a licensed, open-source operating system that substitutes a more-decentralized approach for Apple’s so-called “walled garden.”

Google and a number of partners founded the Open Handset Alliance (“OHA”) in November 2007. This loose association of network operators, software companies and handset manufacturers became the driving force behind the Android OS. Through the OHA, Google and its partners have worked to develop minimal specifications for OHA-compliant Android devices in order to ensure that all levels of the device ecosystem — from device makers to app developers — function well together. As its initial press release boasts, through the OHA:

Handset manufacturers and wireless operators will be free to customize Android in order to bring to market innovative new products faster and at a much lower cost. Developers will have complete access to handset capabilities and tools that will enable them to build more compelling and user-friendly services, bringing the Internet developer model to the mobile space. And consumers worldwide will have access to less expensive mobile devices that feature more compelling services, rich Internet applications and easier-to-use interfaces — ultimately creating a superior mobile experience.

The open source route has a number of advantages — notably the improved division of labor — but it is not without challenges. One key difficulty lies in coordinating and incentivizing the dozens of firms that make up the alliance. Google must not only keep the diverse Android ecosystem directed toward a common, compatible goal, it also has to monetize a product that, by its very nature, is given away free of charge. It is Google’s answers to these two problems that set off the Commission’s investigation.

The first problem is a direct consequence of Android’s decentralization. Whereas there are only a small number of iPhones (the couple of models which Apple markets at any given time) running the same operating system, Android comes in a jaw-dropping array of flavors. Some devices are produced by Google itself; others are the fruit of high-end manufacturers such as Samsung and LG; there are also so-called “flagship killers” like OnePlus, as well as budget phones from the likes of Motorola and Honor (one of Huawei’s brands). The differences don’t stop there. Manufacturers like Samsung, Xiaomi, and LG (to name but a few) have tinkered with the basic Android setup. Samsung phones heavily incorporate its Bixby virtual assistant, while Xiaomi packs in a novel user interface. The upshot is that the Android marketplace is tremendously diverse.

Managing this variety is challenging, to say the least (preventing projects from unravelling into a myriad of forks is always an issue for open source projects). Google and the OHA have come up with an elegant solution. The alliance penalizes so-called “incompatible” devices — that is, handsets whose software or hardware stray too far from a predetermined series of specifications. When this is the case, Google may refuse to license its proprietary applications (most notably the Play Store). This minimum level of uniformity ensures that apps will run smoothly on all devices. It also provides users with a consistent experience (thereby protecting the Android brand) and reduces the cost of developing applications for Android. Unsurprisingly, Android developers have lauded these “anti-fragmentation” measures, branding the Commission’s case a disaster.

A second important problem stems from the fact that the Android OS is an open source project. Device manufacturers can thus license the software free of charge. This is no small advantage. It shaves precious dollars from the price of Android smartphones, thus opening up the budget end of the market. Although there are numerous factors at play, it should be noted that a top-of-the-range Samsung Galaxy S9+ is roughly 30% cheaper ($819) than its Apple counterpart, the iPhone X ($1165).

Offering a competitive operating system free of charge might provide a fantastic deal for consumers, but it poses obvious business challenges. How can Google and other members of the OHA earn a return on the significant amounts of money poured into developing, improving, and marketing Android and Android devices? As is often the case with open source projects, they essentially rely on complementarities. Google produces the Android OS in the hope that it will boost users’ consumption of its profitable, ad-supported services (Google Search in particular). This is sometimes referred to as a loss leader or complementary goods strategy.

Google uses two important sets of contractual provisions to cement this loss leader strategy. First, it seemingly bundles a number of proprietary applications together. Manufacturers must pre-load the Google Search and Chrome apps in order to obtain the Play Store app (the lynchpin on which the Android ecosystem sits). Second, Google has concluded a number of “revenue sharing” deals with manufacturers and network operators. These companies receive monetary compensation when Google Search is displayed prominently on a user’s home screen. In effect, they are receiving a cut of the marginal revenue that the use of this search bar generates for Google. Both of these measures ultimately nudge users — but do not force them, as neither prevents users from installing competing apps — into using Google’s most profitable services.

Readers would be forgiven for thinking that this is a win-win situation. Users get a competitive product free of charge, while Google and other members of the OHA earn enough money to compete against Apple.

The Commission is of another mind, however.

Commission’s hubris

The European Commission believes that Google is hurting competition. Though the text of the decision is not yet available, the thrust of its argument is that Google’s anti-fragmentation measures prevent software developers from launching competing OSs, while the bundling and revenue sharing both thwart rival search engines.

This analysis runs counter to some rather obvious facts:

  • For a start, the Android ecosystem is vibrant. Numerous firms have launched forked versions of Android, both with and without Google’s apps. Amazon’s Fire line of devices is a notable example.
  • Second, although Google’s behavior does have an effect on the search engine market, there is nothing anticompetitive about it. Yahoo could very well have avoided its high-profile failure if, way back in 2005, it had understood the importance of the mobile internet. At the time, it still had a 30% market share, compared to Google’s 36%. Firms that fail to seize upon business opportunities will fall out of the market. This is not a bug; it is possibly the most important feature of market economies. It reveals the products that consumers prefer and stops resources from being allocated to less valuable propositions.
  • Last but not least, Google’s behavior does not prevent other search engines from placing their own search bars or virtual assistants on smartphones. This is essentially what Samsung has done by ditching Google’s assistant in favor of its Bixby service. In other words, Google is merely competing with other firms to place key apps on or near the home screen of devices.

Even if the Commission’s reasoning were somehow correct, the competition watchdog is using a sledgehammer to crack a nut. The potential repercussions for Android, the software industry, and European competition law are great:

  • For a start, the Commission risks significantly weakening Android’s competitive position relative to Apple. Android is a complex ecosystem. The idea that it is possible to bring incremental changes to its strategy without threatening the viability of the whole is a sign of the Commission’s hubris.
  • More broadly, the harsh treatment of Google could have significant incentive effects for other tech platforms. As others have already pointed out, the Commission’s decision rests on the idea that dominant firms should not be allowed to favor their own services compared to those of rivals. Taken at face value, this anti-discrimination policy will push firms to design closed platforms. If rivals are excluded from the very start, there is no one against whom to discriminate. Antitrust watchdogs are thus kept at bay (and thus the Commission is acting against Google’s marginal preference for its own services, rather than Apple’s far-more-substantial preferencing of its own services). Moving to a world of only walled gardens might harm users and innovators alike.

Over the next couple of days and weeks, many will jump to the Commission’s defense. They will see its action as a necessary step against the abstract “power” of Silicon Valley’s tech giants. Rivals will feel vindicated. But when all is done and dusted, there seems to be little doubt that the decision is misguided. The Commission will have struck a blow to the heart of the most competitive offering in the smartphone space. And consumers will be the biggest losers.

This is not what the competition laws were intended to achieve.

Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

          a.  The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

          b.  Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

  3. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different from the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

This has been a big year for business in the courts. A U.S. district court approved the AT&T-Time Warner merger, the Supreme Court upheld Amex’s agreements with merchants, and a circuit court pushed back on the Federal Trade Commission’s vague and heavy-handed policing of companies’ consumer data safeguards.

These three decisions mark a new era in the intersection of law and economics.

AT&T-Time Warner

AT&T-Time Warner is a vertical merger, a combination of firms with a buyer-seller relationship. Time Warner creates and broadcasts content via outlets such as HBO, CNN, and TNT. AT&T distributes content via services such as DirecTV.

Economists see little risk to competition from vertical mergers, although there are some idiosyncratic circumstances in which competition could be harmed. Nevertheless, the U.S. Department of Justice went to court to block the merger.

The last time the government sued to block a vertical merger was more than 40 years ago, and the government lost. Since then, the government has relied on the threat of litigation to extract settlements from merging parties. For example, in the 1996 merger between Time Warner and Turner, the FTC required limits on how the new company could bundle HBO with less desirable channels and eliminated agreements that allowed TCI (a cable company that partially owned Turner) to carry Turner channels at preferential rates.

With AT&T-Time Warner, the government took a big risk, and lost. It was a big risk because (1) it’s a vertical merger, and (2) the case against the merger was weak. The government’s expert argued consumers would face an extra 45 cents a month on their cable bills if the merger went through, but under cross-examination, conceded it might be as little as 13 cents a month. That’s a big difference and raised big questions about the reliability of the expert’s model.

Judge Richard J. Leon’s 170+ page ruling found that the government’s case was weak and its expert was not credible. While it’s easy to cheer a victory of big business over big government, the real victory was the judge’s heavy reliance on facts, data, and analysis rather than speculation over the potential for consumer harm. That’s a big deal and may pave the way for more vertical mergers.

Ohio v. American Express

The Supreme Court’s ruling in Amex may seem obscure. The court backed American Express Co.’s policy of preventing retailers from offering customers incentives to pay with cheaper cards.

Amex charges higher fees to merchants than do other cards, such as Visa, MasterCard, and Discover. Amex cardholders also have higher incomes and tend to spend more at stores than cardholders on other networks. And Amex offers its cardholders better benefits, services, and rewards than the other cards do. Merchants don’t like Amex because of the higher fees; customers prefer Amex because of the card’s perks.

Amex, and other card companies, operate in what is known as a two-sided market. Put simply, they have two sets of customers: merchants who pay swipe fees, and consumers who pay fees and interest.

Part of Amex’s agreement with merchants is an “anti-steering” provision that bars merchants from offering discounts for using non-Amex cards. The U.S. Justice Department and a group of states sued the company, alleging the Amex rules limited merchants’ ability to reduce their costs from accepting credit cards, which meant higher retail prices. Amex argued that the higher prices charged to merchants were kicked back to its cardholders in the form of more and better perks.

The Supreme Court found that the Justice Department and the states focused exclusively on one side (merchant fees) of the two-sided market. The Court said the government cannot meet its burden by showing some effect on some part of the market. Instead, it must demonstrate an “increased cost of credit card transactions … reduced number of credit card transactions, or otherwise stifled competition.” The government could not prove any of those things.

We live in a world of two-sided markets. Amazon may be the biggest two-sided market in the history of the world, linking buyers and sellers. Smartphones such as iPhones and Android devices are two-sided markets, linking consumers with app developers. The Supreme Court’s ruling in Amex sets a standard for how antitrust law should treat the economics of two-sided markets.

LabMD

LabMD is another matter that seems obscure, but could have big impacts on the administrative state.

Since the early 2000s, the FTC has brought charges against more than 150 companies, alleging that they had bad security or privacy practices. LabMD was one of them; its computer system was compromised by professional hackers in 2008, and the FTC claimed that LabMD’s failure to adequately protect customer data was an “unfair” business practice.

Challenging the FTC can get very expensive, and the agency used the threat of litigation to secure settlements from dozens of companies. It then pointed to those settlements to convince everyone else that they constituted binding law and enforceable security standards.

Because no one ever forced the FTC to defend what it was doing in court, the FTC’s assertion of legal authority became a self-fulfilling prophecy. LabMD, however, chose to challenge the FTC. The fight drove LabMD out of business, but public interest law firm Cause of Action and lawyers at Ropes & Gray took the case on a pro bono basis.

The 11th Circuit Court of Appeals ruled that the FTC’s approach to developing security standards violates basic principles of due process. The court said the FTC’s approach, in which it tries to improve general security practices by suing companies that experience security breaches, violates the legal principle that the government can’t punish someone for conduct that the government hasn’t previously explained is problematic.

My colleague at ICLE observes that the lesson to learn from LabMD isn’t about the illegitimacy of the FTC’s approach to internet privacy and security. Instead, it’s that the legitimacy of the administrative state is premised on courts placing a check on abusive regulators.

The lessons learned from these three recent cases reflect a profound shift in thinking about the laws governing economic activity:

  • AT&T-Time Warner indicates that facts matter. Mere speculation of potential harms will not satisfy the court.
  • Amex highlights the growing role two-sided markets play in our economy and provides a framework for evaluating competition in these markets.
  • LabMD is a small step in reining in the administrative state. Regulations must be scrutinized before they are imposed and enforced.

In some ways, none of these decisions is revolutionary. Instead, they reflect an evolution toward greater transparency in how the law is applied and greater scrutiny of how regulations are imposed.