
Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals.  Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically benefit startups by providing them with the data that large firms possess. The necessary implication is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with startups, without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end, he states:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

What is actually happening in these cases, however, is that third-party firms are choosing to anchor their businesses to the processes of another firm, which generates an “asset specificity” problem that they then ask the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to take, but it is a risk nonetheless. To pry open Google or Facebook for the benefit of competitors that choose to build on Google and Facebook’s user bases, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates, meant to improve search quality for users, harmed Foundem’s search rankings).

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms has behaved anticompetitively, merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning in antitrust law. From its murky, politically motivated origins through the early ’60s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by regulators’ gut feeling that big business necessarily harmed the competitive process.

Thus, at its height under SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP framework’s aversion to concentration as such, blocked the merger without regard to procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.


The Economist takes on “sin taxes” in a recent article, “‘Sin’ taxes—eg, on tobacco—are less efficient than they look.” The article has several lessons for policy makers eyeing taxes on e-cigarettes and other vapor products.

Historically, taxes had the key purpose of raising revenues. The “best” taxes would be on goods with few substitutes (i.e., inelastic demand) and on goods deemed to be luxuries. In The Wealth of Nations, Adam Smith notes:

Sugar, rum, and tobacco are commodities which are nowhere necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.

The Economist notes that in 1764, a fiscal crisis driven by wars in North America led Britain’s parliament to begin enforcing tariffs on sugar and molasses imported from outside the empire. In the U.S., from 1868 until 1913, 90 percent of all federal revenue came from taxes on liquor, beer, wine, and tobacco.

Over time, the rationale for these taxes has shifted toward “sin taxes” designed to nudge consumers away from harmful or distasteful consumption. The Temperance movement in the U.S. argued for higher taxes to discourage alcohol consumption. Since the Surgeon General’s warning on the dangers of smoking, tobacco tax increases have been justified as a way to get smokers to quit. More recently, a perceived obesity epidemic has led several American cities, as well as Thailand, Britain, Ireland, and South Africa, to impose taxes on sugar-sweetened beverages to reduce sugar consumption.

Because demand curves slope down, “sin taxes” do change behavior by reducing the quantity demanded. However, for many products subject to such taxes, demand is not especially responsive. For example, as shown in the figure below, a one percent increase in the price of tobacco is associated with a one-half of one percent decrease in sales.

[Figure: The Economist, estimated price elasticities of “sin” goods]
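
The half-percent figure is an own-price elasticity, and the arithmetic is easy to work through. Below is a minimal sketch in Python; the elasticity is the one cited above, while the baseline price and quantity are hypothetical, chosen purely for illustration.

```python
# Minimal sketch of elasticity arithmetic for an inelastic good.
# The elasticity is the one cited in the text; the baseline price
# and quantity are hypothetical.

elasticity = -0.5          # % change in quantity per 1% change in price
base_price = 6.00          # assumed price per pack, in dollars
base_quantity = 1_000_000  # assumed packs sold per year

price_increase_pct = 10.0                              # a 10% tax-driven price rise
quantity_change_pct = elasticity * price_increase_pct  # -5%

new_price = base_price * (1 + price_increase_pct / 100)
new_quantity = base_quantity * (1 + quantity_change_pct / 100)

print(f"Quantity falls {abs(quantity_change_pct):.0f}%, to {new_quantity:,.0f} packs")
print(f"Total spending: ${base_price * base_quantity:,.0f} -> ${new_price * new_quantity:,.0f}")
```

Because demand is inelastic (the elasticity is less than one in absolute value), total spending, and hence tax revenue, rises even as the quantity sold falls. That is what makes such goods attractive to revenue-seeking governments.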

Substitutability is another consideration for tax policy. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. A spike in tobacco taxes in one state will result in a spike in sales in bordering states, as well as an increase in illegal interstate sales or smuggling. The Economist reports:

After Berkeley introduced its tax, sales of sugary drinks rose by 6.9% in neighbouring cities. Denmark, which instituted a tax on fat-laden foods in 2011, ran into similar problems. The government got rid of the tax a year later when it discovered that many shoppers were buying butter in neighbouring Germany and Sweden.

Advocates of “sin” taxes on tobacco, alcohol, and sugar argue that consumption of these goods imposes negative externalities on the public, since governments have to spend more to take care of sick people. With approximately one-third of the U.S. population covered by some form of government-funded health insurance, such as Medicare or Medicaid, what were once private costs of healthcare have been transformed into public costs.

According to the Centers for Disease Control and Prevention (CDC), smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, including $5.6 billion in lost productivity due to secondhand smoke exposure.

On the other hand, The Economist points out:

Smoking, in contrast, probably saves taxpayers money. Lifelong smoking will bring forward a person’s death by about ten years, which means that smokers tend to die just as they would start drawing from state pensions. In a study published in 2002 Kip Viscusi, an economist at Vanderbilt University who has served as an expert witness on behalf of tobacco companies, estimated that even if tobacco were untaxed, Americans could still expect to save the government an average of 32 cents for every pack of cigarettes they smoke.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. For example, much of the direct cost is borne by private insurers, which charge steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades, many of whom have since quit. A proper accounting of the costs vis-à-vis tax policy should evaluate the discounted costs imposed by today’s smokers.
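
To see why discounting matters, consider a minimal sketch with hypothetical numbers: a smoking-related medical cost incurred decades from now is worth far less in present-value terms than its face value.

```python
# Minimal sketch of present-value discounting. The cost, time lag,
# and discount rate are all hypothetical, for illustration only.

future_cost = 100_000  # assumed medical cost ($) attributable to smoking today
years_ahead = 30       # assumed lag before the illness (and cost) occurs
discount_rate = 0.03   # assumed real annual discount rate

present_value = future_cost / (1 + discount_rate) ** years_ahead
print(f"Face value: ${future_cost:,.0f}; present value: ${present_value:,.0f}")
```

Under these assumptions, a $100,000 cost incurred 30 years out has a present value of roughly $41,000, a reminder that headline cost totals can substantially overstate the burden imposed by today’s smokers.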

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines. Thus, in practice, there is no clear nexus between the taxes levied on tobacco and the government’s use of those revenues to address smoking-related costs.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, such as e-cigarettes, “heat-not-burn” products, and smokeless tobacco, are considerably less harmful than combustible products.

Many experts believe that the best option for smokers who are unable or unwilling to quit smoking is to switch to a less harmful alternative activity that has similar attributes, such as using non-combustible nicotine delivery products. Policies that encourage smokers to switch from more harmful combustible tobacco products to less harmful non-combustible products would be considered a form of “harm reduction.”

Nine U.S. states now have taxes on vapor products. In addition, several local jurisdictions have enacted taxes. Their methods and levels of taxation vary widely. Policy makers considering a tax on vapor products should account for the following factors.

  • The current market for e-cigarettes and heat-not-burn products is in the range of 0-10 percent of the cigarette market. Given the relatively small size of the e-cigarette and heated tobacco product market, it is unlikely that any level of taxation of these products would generate significant tax revenues for the taxing jurisdiction. Moreover, much of the current research likely reflects early adopters and higher-income consumer groups. As such, the current empirical data based on total market size and price/tax levels are likely to be far from indicative of the “actual” market for these products.
  • The demand for e-cigarettes is much more responsive to a change in price than the demand for combustible cigarettes. My review of the published research to date finds a median estimated own-price elasticity of -1.096, meaning something close to a 1-to-1 relationship: a tax resulting in a one percent increase in e-cigarette prices would be associated with roughly a one percent decline in e-cigarette sales. Many of those lost sales would shift to purchases of combustible cigarettes, as the sketch following this list illustrates.
  • Research on the price responsiveness of vapor products is relatively new and sparse. There are fewer than a dozen published articles, and the first was published only in 2014. As a result, the literature reports a wide range of estimated elasticities, which calls into question the reliability of any single published estimate, as shown in the figure below. Because this is a relatively unformed area of research, the policy debate would benefit from additional studies that involve larger samples with better statistical power, reflect the dynamic nature of this new product category, and account for the wide variety of vapor products.

[Figure: published estimates of the own-price elasticity of vapor products]

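Here is the sketch referenced in the second bullet above: a rough illustration of how an elastic own-price response combines with substitution toward combustible cigarettes. The elasticity is the median estimate cited above; the baseline sales and the substitution share are hypothetical assumptions.

```python
# Rough sketch of an elastic demand response to a vapor tax, with a
# share of lost vapor sales shifting to combustible cigarettes. The
# baseline sales and substitution share are hypothetical.

own_price_elasticity = -1.096  # median published estimate cited above
price_increase_pct = 10.0      # a 10% tax-driven price rise

vapor_sales = 100_000  # assumed baseline vapor units sold
vapor_change_pct = own_price_elasticity * price_increase_pct  # about -11%
lost_vapor_sales = vapor_sales * abs(vapor_change_pct) / 100

substitution_share = 0.5  # assumed share of lost vapor sales moving to cigarettes
extra_cigarette_sales = lost_vapor_sales * substitution_share

print(f"Vapor sales fall ~{abs(vapor_change_pct):.1f}% ({lost_vapor_sales:,.0f} units)")
print(f"Assumed shift to combustible cigarettes: {extra_cigarette_sales:,.0f} units")
```

Because the response is elastic (greater than one in absolute value), total spending on vapor products falls as the tax rises, and any substitution toward combustible cigarettes cuts directly against the harm-reduction rationale discussed above.
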
With respect to taxation and pricing, policymakers would benefit from reliable information regarding the size of the vapor product market and the degree to which vapor products are substitutes for combustible tobacco products. It may turn out that taxes on vapor products are, as The Economist says of “sin” taxes generally, less efficient than they look.

Senator Mark Warner has proposed 20 policy prescriptions for bringing “big tech” to heel. The proposals — which run the gamut from policing foreign advertising on social networks to regulating feared competitive harms — provide much interesting material for Congress to consider.

On the positive side, Senator Warner introduces the idea that online platforms may be able to function as least-cost avoiders with respect to certain tortious behavior of their users. He advocates for platforms to implement technology that would help control the spread of content that courts have found violated certain rights of third-parties.

Yet, on other accounts — specifically the imposition of an “interoperability” mandate on platforms — his proposals risk doing more harm than good.

The interoperability mandate was included by Senator Warner in order to “blunt [tech platforms’] ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.” According to Senator Warner, such a measure would enable startups to offset the advantages that arise from network effects on large tech platforms by building their services more easily on the backs of successful incumbents.

Whatever you think of the moats created by network effects, the example of “successful” previous regulation on this issue that Senator Warner relies upon is perplexing:

A prominent template for [imposing interoperability requirements] was in the AOL/Time Warner merger, where the FCC identified instant messaging as the ‘killer app’ – the app so popular and dominant that it would drive consumers to continue to pay for AOL service despite the existence of more innovative and efficient email and internet connectivity services. To address this, the FCC required AOL to make its instant messaging service (AIM, which also included a social graph) interoperable with at least one rival immediately and with two other rivals within 6 months.

But the AOL/Time Warner merger and the FCC’s conditions provide an example that demonstrates the exact opposite of what Senator Warner suggests. The much-feared 2001 megamerger prompted, as the Senator notes, fears that the new company would be able to leverage its dominance in the nascent instant messaging market to extend its influence into adjacent product markets.

Except that by 2003, though it remained unclear whether AOL had ever made its systems interoperable, two large competitors, Yahoo! and Microsoft, had arisen that did not run interoperable IM networks. In that same period, AOL’s previously 100% share of the IM market had declined by about half. By 2009, after eight years of heavy losses, Time Warner shed AOL, and by last year AIM was completely dead.

Not only was it never clear that AOL was able to make AIM interoperable, but AIM was never able to catch up once better rival services launched. What the conditions did do, however, was prevent AOL from launching competitive video chat services as it flailed about in the wake of the deal, forcing it to miss out on a market opportunity available to unencumbered competitors like Microsoft and Yahoo!

And all of this, of course, ignores the practical impossibility of interfering in highly integrated technology platforms.

The AOL/Time Warner merger conditions are no template for successful tech regulation. Congress would be ill-advised to rely upon such templates for crafting policy around tech and innovation.

What to make of Wednesday’s decision by the European Commission alleging that Google has engaged in anticompetitive behavior? In this post, I contrast the European Commission’s (EC) approach to competition policy with US antitrust, briefly explore the history of smartphones and then discuss the ruling.

Asked about the EC’s decision the day it was announced, FTC Chairman Joseph Simons noted that, while the market is concentrated, Apple and Google “compete pretty heavily against each other” with their mobile operating systems, in stark contrast to the way the EC defined the market. Simons also stressed that for the FTC what matters is not the structure of the market per se but whether or not there is harm to the consumer. This again contrasts with the European Commission’s approach, which does not require harm to consumers. As Simons put it:

Once they [the European Commission] find that a company is dominant… that imposes upon the company kind of like a fairness obligation irrespective of what the effect is on the consumer. Our regulatory… our antitrust regime requires that there be a harm to consumer welfare — so the consumer has to be injured — so the two tests are a little bit different.

Indeed, and as the history below shows, the popularity of Apple’s iOS and Google’s Android operating systems arose because they were superior products — not because of anticompetitive conduct on the part of either Apple or Google. On the face of it, the conduct of both Apple and Google has led to consumer benefits, not harms. So, from the perspective of U.S. antitrust authorities, there is no reason to take action.

Moreover, there is a danger that by taking action as the EU has done, competition and innovation will be undermined — which would be a perverse outcome indeed. These concerns were reflected in a statement by Senator Mike Lee (R-UT):

Today’s decision by the European Commission to fine Google over $5 billion and require significant changes to its business model to satisfy EC bureaucrats has the potential to undermine competition and innovation in the United States,” Sen. Lee said. “Moreover, the decision further demonstrates the different approaches to competition policy between U.S. and EC antitrust enforcers. As discussed at the hearing held last December before the Senate’s Subcommittee on Antitrust, Competition Policy & Consumer Rights, U.S. antitrust agencies analyze business practices based on the consumer welfare standard. This analytical framework seeks to protect consumers rather than competitors. A competitive marketplace requires strong antitrust enforcement. However, appropriate competition policy should serve the interests of consumers and not be used as a vehicle by competitors to punish their successful rivals.

Ironically, the fundamental basis for the Commission’s decision is an analytical framework developed by economists at Harvard in the 1950s, which presumes that the structure of a market determines the conduct of the participants, which in turn presumptively affects outcomes for consumers. This “structure-conduct-performance” paradigm has been challenged both theoretically and empirically (and by “challenged,” I mean “demolished”).

Maintaining, as EC Commissioner Vestager has, that “What would serve competition is to have more players,” is to adopt a presumption regarding competition rooted in the structure of the market, without sufficient attention to the facts on the ground. As French economist Jean Tirole noted in his Nobel Prize lecture:

Economists accordingly have advocated a case-by-case or “rule of reason” approach to antitrust, away from rigid “per se” rules (which mechanically either allow or prohibit certain behaviors, ranging from price-fixing agreements to resale price maintenance). The economists’ pragmatic message however comes with a double social responsibility. First, economists must offer a rigorous analysis of how markets work, taking into account both the specificities of particular industries and what regulators do and do not know….

Second, economists must participate in the policy debate…. But of course, the responsibility here goes both ways. Policymakers and the media must also be willing to listen to economists.

In good Tirolean fashion, we begin with an analysis of how the market for smartphones developed. What quickly emerges is that the structure of the market is a function of intense competition, not its absence. And, by extension, mandating a different structure will likely impede competition, or, at the very least, will not likely contribute to it.

A brief history of smartphone competition

In 2006, Nokia’s N70 became the first smartphone to sell more than a million units. It was a beautiful device, with a simple touch screen interface and real push buttons for numbers. The following year, Apple released its first iPhone. It sold 7 million units — about the same as Nokia’s N95 and slightly less than LG’s Shine. Not bad, but paltry compared to the sales of Nokia’s 1200 series phones, which had combined sales of over 250 million that year — about twice the total of all smartphone sales in 2007.

By 2017, smartphones had come to dominate the market, with total sales of over 1.5 billion. At the same time, the structure of the market changed dramatically. In the first quarter of 2018, Apple’s iPhone X and iPhone 8 were the two best-selling smartphones in the world. In total, Apple shipped just over 52 million phones, accounting for 14.5% of the global market. Samsung, which has a wider range of devices, sold even more: 78 million phones, or 21.7% of the market. In third and fourth place were Huawei (11%) and Xiaomi (7.5%). Nokia and LG didn’t even make it into the top 10, with market shares of only 3% and 1% respectively.

Several factors have driven this highly dynamic market. Dramatic improvements in cellular data networks have played a role. But arguably of greater importance has been the development of software that offers consumers an intuitive and rewarding experience.

Apple’s iOS and Google’s Android operating systems have proven to be enormously popular among both users and app developers. This has generated synergies — or what economists call network externalities — as more apps have been developed, more people have been attracted to the ecosystem, and vice versa, creating a virtuous circle that benefits both users and app developers.
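
A toy simulation can make the virtuous circle concrete. The parameters below are invented purely for illustration; the point is only the mutual feedback, in which apps attract users and users attract developers.

```python
# Toy model of network externalities: apps attract users, and users
# attract app developers. All parameters are invented for illustration.

users, apps = 1_000.0, 10.0
for year in range(1, 6):
    new_users = 50.0 * apps   # assumed users drawn in per available app
    new_apps = 0.01 * users   # assumed apps developed per active user
    users += new_users
    apps += new_apps
    print(f"year {year}: users={users:,.0f}, apps={apps:,.0f}")
```

Even with modest parameters, growth compounds: each side's expansion feeds the other's, which is why early leads in platform markets can snowball, and why the quality of the underlying operating system mattered so much.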

By contrast, Nokia’s early smartphones, including the N70 and N95, ran Symbian, the operating system developed for Psion’s handheld devices, which had a clunkier user interface and was more difficult to code for — so it was less attractive to both users and developers. In addition, Symbian lacked an effective means of solving the problem of fragmentation of the operating system across different devices, which made it difficult for developers to create apps that ran across the ecosystem — something both Apple (through its closed system) and Google (through agreements with carriers) were able to address. Meanwhile, Java’s MIDP, used in LG’s Shine, and its successor J2ME imposed restrictions on developers (such as prohibiting access to files, hardware, and network connections) that seem to have made them less attractive than Android.

The relative superiority of their operating systems enabled Apple and the manufacturers of Android-based phones to steal a march on the early leaders in the smartphone revolution.

The fact that Google allows smartphone manufacturers to install Android for free, distributes Google Play and other apps in a free bundle, and pays such manufacturers for preferential treatment for Google Search has also kept the cost of Android-based smartphones down. As a result, Android phones are the cheapest on the market, providing a powerful experience for as little as $50. It is reasonable to conclude from this that innovation, driven by fierce competition, has led to devices, operating systems, and apps that provide enormous benefits to consumers.

The Commission decision would harm device manufacturers, app developers and consumers

The EC’s decision seems to disregard the history of smartphone innovation and competition and their ongoing consequences. As Dirk Auer explains, the Open Handset Alliance (OHA) was created specifically to offer an effective alternative to Apple’s iPhone — and it worked. Indeed, it worked so spectacularly that Android is installed on about 80% of all new phones. This success was the result of several factors that the Commission now seeks to undermine:

First, in order to maintain order within the Android universe, and thereby ensure that apps developed for Android would function on the vast majority of Android devices, Google and the OHA sought to limit the extent to which Android “forks” could be created. (Apple didn’t face this problem because its source code is proprietary and so cannot be modified by third-party developers.) One way Google does this is by imposing restrictions on the licensing of its proprietary apps, such as the Google Play store (a repository of apps, similar to Apple’s App Store).

Device manufacturers that don’t conform to these restrictions may still build devices with their forked version of Android — but without those Google apps. Indeed, Amazon chose to develop a non-conforming version of Android and built its own app repository for its Fire devices (though it is still possible to add the Google Play Store). That strategy seems to be working for Amazon in the tablet market; in 2017 it rose past Samsung to become the second-biggest manufacturer of tablets worldwide, after Apple.

Second, in order to be able to offer Android for free to smartphone manufacturers, Google sought to develop unique revenue streams (because, although the software is offered for free, it turns out that software developers generally don’t work for free). The main way Google did this was by requiring manufacturers that choose to install Google Play also to install its browser (Chrome) and search tools, which generate revenue from advertising. At the same time, Google kept its platform open by permitting preloads of rivals’ apps and creating a marketplace where rivals can also reach scale. Mozilla’s Firefox browser, for example, has been downloaded over 100 million times on Android.

The importance of these factors to the success of Android is acknowledged by the EC. But instead of treating them as legitimate business practices that enabled the development of high-quality, low-cost smartphones and a universe of apps that benefits billions of people, the Commission simply asserts that they are harmful, anticompetitive practices.

For example, the Commission asserts that

In order to be able to pre-install on their devices Google’s proprietary apps, including the Play Store and Google Search, manufacturers had to commit not to develop or sell even a single device running on an Android fork. The Commission found that this conduct was abusive as of 2011, which is the date Google became dominant in the market for app stores for the Android mobile operating system.

This is simply absurd, to say nothing of ahistorical. As noted, the restrictions on Android forks play an important role in maintaining the coherency of the Android ecosystem. If device manufacturers were able to freely install Google apps (and other apps via the Play Store) on devices running problematic Android forks that were unable to run the apps properly, consumers — and app developers — would be frustrated, Google’s brand would suffer, and the value of the ecosystem would be diminished. Extending this restriction to all devices produced by a specific manufacturer, regardless of whether they come with Google apps preinstalled, reinforces the importance of the prohibition to maintaining the coherency of the ecosystem.

It is ridiculous to say that something (efforts to rein in Android forking) that made perfect sense until 2011 and that was central to the eventual success of Android suddenly becomes “abusive” precisely because of that success — particularly when the pre-2011 efforts were often viewed as insufficient and unsuccessful (a January 2012 Guardian Technology Blog post, “How Google has lost control of Android,” sums it up nicely).

Meanwhile, if Google is unable to tie pre-installation of its search and browser apps to the installation of its app store, then it will have less financial incentive to continue to maintain the Android ecosystem. Or, more likely, it will have to find other ways to generate revenue from the sale of devices in the EU — such as charging device manufacturers for Android or Google Play. The result is that consumers will be harmed, either because the ecosystem will be degraded, or because smartphones will become more expensive.

The troubling absence of Apple from the Commission’s decision

In addition, the EC’s decision is troubling in other ways. First, there is its definition of the market. The ruling asserts that “Through its control over Android, Google is dominant in the worldwide market (excluding China) for licensable smart mobile operating systems, with a market share of more than 95%.” But “licensable smart mobile operating systems” is a very narrow definition, as it necessarily excludes operating systems that are not licensable — such as Apple’s iOS and RIM’s BlackBerry OS. Since Apple has nearly 25% of the smartphone market in Europe, the European Commission has — through its definition of the market — presumed away the primary source of effective competition. As Pinar Akman has noted:

How can Apple compete with Google in the market as defined by the Commission when Apple allows only itself to use its operating system only on devices that Apple itself manufactures?

The EU then invents a series of claims regarding the lack of competition with Apple:

  • end user purchasing decisions are influenced by a variety of factors (such as hardware features or device brand), which are independent from the mobile operating system;

It is not obvious that this is evidence of a lack of competition. A better explanation is that the EU’s narrow definition of the market is defective. In fact, one could easily draw the opposite conclusion from that drawn by the Commission: the fact that purchasing decisions are driven by various factors suggests that there is substantial competition, with phone manufacturers seeking to design phones that offer a range of features, on a number of dimensions, to best capture diverse consumer preferences. They are able to do this in large part precisely because consumers can rely upon a generally similar operating system and continued access to the apps that they have downloaded. As Tim Cook likes to remind his investors, Apple is quite successful at persuading “Android switchers” to switch to iOS.


  • Apple devices are typically priced higher than Android devices and may therefore not be accessible to a large part of the Android device user base;


And yet, in the first quarter of 2018, Apple phones accounted for five of the top ten selling smartphones worldwide. Meanwhile, several competing phones, including the fifth and sixth best-sellers, Samsung’s Galaxy S9 and S9+, sell for similar prices to the most expensive iPhones. And a refurbished iPhone 6 can be had for less than $150.


  • Android device users face switching costs when switching to Apple devices, such as losing their apps, data and contacts, and having to learn how to use a new operating system;


This is, of course, true for any system switch. And yet the growing market share of Apple phones suggests that some users are willing to part with those sunk costs. Moreover, the increasing predominance of cloud-based and cross-platform apps, as well as Apple’s own “Move to iOS” Android app (which facilitates the transfer of users’ data from Android to iOS), means that the costs of switching border on trivial. As mentioned above, Tim Cook certainly believes in “Android switchers.”


  • even if end users were to switch from Android to Apple devices, this would have limited impact on Google’s core business. That’s because Google Search is set as the default search engine on Apple devices and Apple users are therefore likely to continue using Google Search for their queries.


This is perhaps the most bizarre objection of them all. The fact that Apple chooses to install Google Search as the default demonstrates that consumers prefer that system over others. Indeed, this highlights a fundamental problem with the Commission’s own rationale. As Akman notes:

It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) cases of tying in the EU to date concerned tying where the dominant undertaking leveraged its dominance in one market to distort or eliminate competition in an otherwise competitive market.

Conclusion

As the foregoing demonstrates, the EC’s decision is based on a fundamental misunderstanding of the nature and evolution of the market for smartphones and associated applications. The statement by Commissioner Vestager quoted above — that “What would serve competition is to have more players” — belies this misunderstanding and highlights the erroneous assumptions underpinning the Commission’s analysis, which is wedded to a theory of market competition that was long ago thrown out by economists.

And, thankfully, it appears that the FTC Chairman is aware of at least some of the flaws in the EC’s conclusions.

Google will undoubtedly appeal the Commission’s decision. For the sake of the millions of European consumers who rely on Android-based phones and the millions of software developers who provide Android apps, let’s hope that it succeeds.

This has been a big year for business in the courts. A U.S. district court approved the AT&T-Time Warner merger, the Supreme Court upheld Amex’s agreements with merchants, and a circuit court pushed back on the Federal Trade Commission’s vague and heavy-handed policing of companies’ consumer data safeguards.

These three decisions mark a new era in the intersection of law and economics.

AT&T-Time Warner

AT&T-Time Warner is a vertical merger, a combination of firms with a buyer-seller relationship. Time Warner creates and broadcasts content via outlets such as HBO, CNN, and TNT. AT&T distributes content via services such as DirecTV.

Economists see little risk to competition from vertical mergers, although there are some idiosyncratic circumstances in which competition could be harmed. Nevertheless, the U.S. Department of Justice went to court to block the merger.

The last time the government sued to block a vertical merger was more than 40 years ago, and the government lost. Since then, the government has relied on the threat of litigation to extract settlements from merging parties. For example, in the 1996 merger between Time Warner and Turner, the FTC required limits on how the new company could bundle HBO with less desirable channels and eliminated agreements that allowed TCI (a cable company that partially owned Turner) to carry Turner channels at preferential rates.

With AT&T-Time Warner, the government took a big risk, and lost. It was a big risk because (1) it’s a vertical merger, and (2) the case against the merger was weak. The government’s expert argued consumers would face an extra 45 cents a month on their cable bills if the merger went through, but under cross-examination, conceded it might be as little as 13 cents a month. That’s a big difference and raised big questions about the reliability of the expert’s model.

Judge Richard J. Leon’s 170+ page ruling agreed that the government’s case was weak and its expert was not credible. While it’s easy to cheer a victory of big business over big government, the real victory was the judge’s heavy reliance on facts, data, and analysis rather than speculation over the potential for consumer harm. That’s a big deal and may pave the way for more vertical mergers.

Ohio v. American Express

The Supreme Court’s ruling in Amex may seem obscure. The court backed American Express Co.’s policy of preventing retailers from offering customers incentives to pay with cheaper cards.

Amex charges higher fees to merchants than do other cards, such as Visa, MasterCard, and Discover. Amex cardholders also have higher incomes and tend to spend more at stores than those associated with other networks. And Amex offers its cardholders better benefits, services, and rewards than the other cards. Merchants don’t like Amex because of the higher fees; customers prefer Amex because of the card’s perks.

Amex, and other card companies, operate in what is known as a two-sided market. Put simply, they have two sets of customers: merchants who pay swipe fees, and consumers who pay fees and interest.

Part of Amex’s agreement with merchants is an “anti-steering” provision that bars merchants from offering discounts for using non-Amex cards. The U.S. Justice Department and a group of states sued the company, alleging the Amex rules limited merchants’ ability to reduce their costs from accepting credit cards, which meant higher retail prices. Amex argued that the higher prices charged to merchants were kicked back to its cardholders in the form of more and better perks.

The Supreme Court found that the Justice Department and the states focused exclusively on one side (merchant fees) of the two-sided market. The Court said the government cannot meet its burden by showing some effect on some part of the market. Instead, it must demonstrate an “increased cost of credit card transactions … reduced number of credit card transactions, or otherwise stifled competition.” The government could not prove any of those things.
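
To see why the one-sided view misleads, consider a toy sketch of the two-sided price. The fee and rebate levels below are assumptions for illustration, not Amex’s actual terms.

```python
# Toy sketch of two-sided pricing: the economically relevant price is
# the net amount the platform collects across both sides, not the
# merchant fee viewed in isolation. All numbers are hypothetical.

transaction = 100.00      # purchase amount in dollars

merchant_fee_pct = 0.030  # assumed merchant ("swipe") fee
rewards_pct = 0.015       # assumed cardholder rewards rebate

merchant_side = transaction * merchant_fee_pct  # paid by the merchant
consumer_side = transaction * rewards_pct       # returned to the cardholder

net_price = merchant_side - consumer_side
print(f"Merchant pays ${merchant_side:.2f}; cardholder gets back "
      f"${consumer_side:.2f}; net two-sided price: ${net_price:.2f}")
```

In this example, looking only at the $3.00 merchant fee overstates the platform’s price: the cardholder rebate offsets half of it, which is the kick-back dynamic Amex argued.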

We live in a world of two-sided markets. Amazon may be the biggest two-sided market in the history of the world, linking buyers and sellers. Smartphones such as iPhones and Android devices are two-sided markets, linking consumers with app developers. The Supreme Court’s ruling in Amex sets a standard for how antitrust law should treat the economics of two-sided markets.

LabMD

LabMD is another matter that seems obscure, but could have big impacts on the administrative state.

Since the early 2000s, the FTC has brought charges against more than 150 companies, alleging they had bad security or privacy practices. LabMD was one of them; its computer system was compromised by professional hackers in 2008. The FTC claimed that LabMD’s failure to adequately protect customer data was an “unfair” business practice.

Challenging the FTC can get very expensive, and the agency used the threat of litigation to secure settlements from dozens of companies. It then used those settlements to convince everyone else that they constituted binding law and enforceable security standards.

Because no one ever forced the FTC to defend what it was doing in court, the FTC’s assertion of legal authority became a self-fulfilling prophecy. LabMD, however, chose to challenge the FTC. The fight drove LabMD out of business, but the public interest law firm Cause of Action and lawyers at Ropes & Gray took the case on a pro bono basis.

The 11th Circuit Court of Appeals ruled that the FTC’s approach to developing security standards violates basic principles of due process. The court said the FTC’s approach — in which the agency tries to improve general security practices by suing companies that experience security breaches — violates the fundamental legal principle that the government can’t punish someone for conduct that it hasn’t previously explained is problematic.

My colleague at ICLE observes that the lesson to learn from LabMD isn’t about the illegitimacy of the FTC’s approach to internet privacy and security. Instead, it is that the legitimacy of the administrative state is premised on courts placing a check on abusive regulators.

The lessons learned from these three recent cases reflect a profound shift in thinking about the laws governing economic activity:

  • AT&T-Time Warner indicates that facts matter. Mere speculation of potential harms will not satisfy the court.
  • Amex highlights the growing role two-sided markets play in our economy and provides a framework for evaluating competition in these markets.
  • LabMD is a small step in reining in the administrative state. Regulations must be scrutinized before they are imposed and enforced.

In some ways none of these decisions are revolutionary. Instead, they reflect an evolution toward greater transparency in how the law is to be applied and greater scrutiny over how the regulations are imposed.


Big is bad, part 1: Kafka, Coase, and Brandeis walk into a bar … There’s a quip in a well-known textbook that Nobel laureate Ronald Coase said he’d grown weary of antitrust because when prices went up, the judges said it was monopoly; when the prices went down, they said it was predatory pricing; and when they stayed the same, they said it was tacit collusion. ICLE’s Geoffrey Manne and Gus Hurwitz worry that with the rise of the neo-Brandeisians, not much has changed since Coase’s time:

[C]ompetition, on its face, is virtually indistinguishable from anticompetitive behavior. Every firm strives to undercut its rivals, to put its rivals out of business, to increase its rivals’ costs, or to steal its rivals’ customers. The consumer welfare standard provides courts with a concrete mechanism for distinguishing between good and bad conduct, based not on the effect on rival firms but on the effect on consumers. Absent such a standard, any firm could potentially be deemed to violate the antitrust laws for any act it undertakes that could impede its competitors.

Big is bad, part 2. A working paper by researchers from Denmark and the University of California at Berkeley suggests that companies such as Google, Apple, Facebook, and Nike are taking advantage of so-called “tax havens” to cause billions of dollars of income to go “missing.” There’s a lot of mumbo jumbo in this one, but it’s getting lots of attention.

We show theoretically and empirically that in the current international tax system, tax authorities of high-tax countries do not have incentives to combat profit shifting to tax havens. They instead focus their enforcement effort on relocating profits booked in other high-tax places—in effect stealing revenue from each other.

Big is bad, part 3: Can any country survive with debt-to-GDP of more than 100 percent? Apparently, the answer is “yes.” The U.K. went 80 years, from 1779 to 1858. Then, it went 47 years from 1916 to 1962. Tim Harford has a fascinating story about an effort to clear the country’s debt in that second run.

In 1928, an anonymous donor resolved to clear the UK’s national debt and gave £500,000 with that end in mind. It was a tidy sum — almost £30m at today’s prices — but not nearly enough to pay off the debt. So it sat in trust, accumulating interest, for nearly a century.

How do you make a small fortune? Begin with a big one. A lesson from Johnny Depp.

Will we ever stop debating the Trolley Problem? Apparently the answer is “no.” Also, TIL there’s a field of research that relies on “notions.”

For so long, moral psychology has relied on the notion that you can extrapolate from people’s decisions in hypothetical thought experiments to infer something meaningful about how they would behave morally in the real world. These new findings challenge that core assumption of the field.


The week that was on Truth on the Market

LabMD.

[T]argets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

Google Android.

Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

AT&T-Time Warner. First this:

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

Then this:

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content.


The EC’s Android decision is expected sometime in the next couple of weeks. Current speculation is that the EC may issue a fine exceeding last year’s huge €2.4 billion fine for Google’s alleged antitrust violations related to the display of general search results. Based on the statement of objections (“SO”), I expect the Android decision will be a muddle of legal theory that not only fails to connect to facts and marketplace realities, but also will perversely incentivize platform operators to move toward less open ecosystems.

As has been amply demonstrated (see, e.g., here and here), the Commission has made fundamental errors with its market definition analysis in this case. Chief among its failures is the EC’s incredible decision to treat the relevant market as licensable mobile operating systems, which notably excludes the largest smartphone player by revenue, Apple.

This move, though perhaps expedient for the EC, leads the Commission to view with disapproval an otherwise competitively justifiable set of licensing requirements that Google imposes on its partners. This includes anti-fragmentation and app-bundling provisions (“Provisions”) in the agreements that partners sign in order to be able to distribute Google Mobile Services (“GMS”) with their devices. Among other things, the Provisions guarantee that a basic set of Google’s apps and services will be non-exclusively featured on partners’ devices.

The Provisions — when viewed in a market in which Apple is a competitor — are clearly procompetitive. The critical mass of GMS-flavored versions of Android (as opposed to vanilla Android Open Source Project (“AOSP”) devices) supplies enough predictability to an otherwise unruly universe of disparate Android devices such that software developers will devote the sometimes considerable resources necessary for launching successful apps on Android.

Open source software like AOSP is great, but anyone with more than a passing familiarity with Linux recognizes that the open source movement often fails to produce consumer-friendly software. In order to provide a critical mass of users that attract developers to Android, Google provides a significant service to the Android market as a whole by using the Provisions to facilitate a predictable user (and developer) experience.

Generativity on platforms is a complex phenomenon

To some extent, the EC’s complaint is rooted in a preference that Android act as a more “generative” platform, such that third-party developers are relatively better able to reach users of Android devices. But this effort by the EC to undermine the Provisions will ultimately be self-defeating, as it will likely push mobile platform providers to converge on similar, relatively more closed business models that provide less overall consumer choice.

Even assuming that the Provisions somehow prevent third-party app installs or otherwise develop a kind of path-dependency among users such that they never seek out new apps (which the data clearly shows is not happening), focusing on third-party developers as the sole or primary source of innovation on Android is a mistake.

The control that platform operators like Apple and Google exert over their respective ecosystems does not per se create more or less generativity on the platforms. As Gus Hurwitz has noted, “literature and experience amply demonstrate that ‘open’ platforms, or general-purpose technologies generally, can promote growth and increase social welfare, but they also demonstrate that open platforms can also limit growth and decrease welfare.” Conversely, tighter vertical integration (the Apple model) can also produce more innovation than open platforms.

What is important is the balance between control and freedom, and the degree to which third-party developers are able to innovate within the context of a platform’s constraints. Those constraints — whether Apple’s more tightly controlled terms or Google’s more generous Provisions — themselves facilitate generativity.

In short, it is overly simplistic to view generativity as something that happens at the edges without respect to structural constraints at the core. The interplay between platform and developer is complex and complementary, and needs to be viewed as a dynamic process.

Whither platform diversity?

I love Apple’s devices and I am quite happy living within its walled garden. But I certainly do not believe that Apple’s approach is the only one that makes sense. Yet, in its SO, the EC blesses Apple’s approach as the proper way to manage a mobile ecosystem. It explicitly excluded Apple from a competitive analysis, and attacked Google on the basis that it imposed restrictions in the context of licensing its software. Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

With this SO, the EC is basically asserting that Google is anticompetitively bundling without being able to plausibly assert foreclosure (because, again, third-party app installs are easy to do and are easily shown to number in the billions). I’m sure Google doesn’t want to move in the direction of having a more closed system, but the lesson of this case will loom large for tomorrow’s innovators.

In the face of eager antitrust enforcers like those in the EU, the easiest path for future innovators will be to keep everything tightly controlled so as to prevent both fragmentation and misguided regulatory intervention.

The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke of the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from its being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to its being about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.” This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has a sufficient basis to make out an unfairness claim against LabMD, and then moves on, in Part III of the opinion, to analyze the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseam), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’ authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, which responded by shutting down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) also targets conduct that is “likely to cause” harm.
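
The structure of that debate can be rendered explicitly. The sketch below is an illustration, not law: it simply expresses the three 5(n) elements as a boolean predicate, and the interpretive question is whether satisfying that predicate defines unfair conduct or merely marks the outer bound of what the Commission may pursue.

```python
# Illustrative sketch only -- code, not law. The three Section 5(n) elements
# rendered as a predicate. The debate is whether satisfying this predicate
# *makes* conduct unfair (a definitional test) or is merely a necessary
# condition the FTC must clear before bringing a case.
def satisfies_5n_elements(injury_caused_or_likely: bool,
                          reasonably_avoidable: bool,
                          outweighed_by_benefits: bool) -> bool:
    return (injury_caused_or_likely
            and not reasonably_avoidable
            and not outweighed_by_benefits)
```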

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like they are driven by public policy, and not so much like the mere enforcement of existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case offers little guidance on how to draw that distinction. The opinion is more than a bit muddled and difficult to interpret clearly. Nonetheless, reading the court’s dicta in Part II is instructive. It’s clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, those well-established legal standards can be changed — through adjudication. It is difficult to square these two propositions. Regardless, it is the guidance that we have been given by the court.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, those rules can hardly guide private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.  

The principles set out in Chenery II, of course, do not answer the data security question whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court’s sidestepping that question and asking instead whether the FTC sufficiently defined what it was doing in the first place.

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates the vagueness that the Court in Chenery II rejected, and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard from that faced in district court, and it undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

AT&T’s merger with Time Warner has led to one of the most important, but least interesting, antitrust trials in recent history.

The merger itself is somewhat unimportant to consumers. It’s about as close to a “pure” vertical merger as we can get in today’s world and would not lead to a measurable increase in prices paid by consumers. At the same time, Judge Richard J. Leon’s decision to approve the merger may have sent a signal regarding how the anticipated Fox-Disney (or Comcast), CVS-Aetna, and Cigna-Express Scripts mergers might proceed.

Judge Leon of the United States District Court in Washington said the U.S. Department of Justice had not proved that AT&T’s acquisition of Time Warner would lead to fewer choices for consumers and higher prices for television and internet services.

As shown in the figure below, there is virtually no overlap in services provided by Time Warner (content creation and broadcasting) and AT&T (content distribution). We say “virtually” because, through its ownership of DirecTV, AT&T has an ownership stake in several channels such as the Game Show Network, the MLB Network, and Root Sports. So, not a “pure” vertical merger, but pretty close. Besides, no one seems to really care about GSN, the MLB Network, or Root Sports.

[Infographic: What’s at Stake in the Proposed AT&T - Time Warner Merger (Statista)]

The merger trial was one of the least interesting because the government’s case opposing the merger was so weak.

The Justice Department’s economic expert, University of California, Berkeley, professor Carl Shapiro, argued the merger would harm consumers and competition in three ways:

  1. AT&T would raise the price of content to other cable companies, driving up their costs, which would be passed on to consumers.
  2. Across more than 1,000 subscription television markets, AT&T could benefit by drawing customers away from rival content distributors in the event of a “blackout,” in which the distributor chooses not to carry Time Warner content over a pricing dispute. AT&T could also use its control over Time Warner content to retain customers by discouraging consumers from switching to providers that don’t carry that content. Those two factors, according to Shapiro, could cause rival cable companies to lose between 9 and 14 percent of their subscribers over the long term.
  3. AT&T and competitor Comcast could coordinate to restrict access to popular Time Warner and NBC content in ways that could stifle competition from online cable alternatives such as Dish Network’s Sling TV or Sony’s PlayStation Vue. Even tacit coordination of this type would impair consumer choices, Shapiro opined.

Price increases and blackouts

Shapiro initially indicated the merger would cause consumers to pay an additional $436 million a year, which amounts to an average of 45 cents a month per customer, or a 0.4 percent increase. At trial, he testified the amount might be closer to 27 cents a month and conceded it could be as low as 13 cents a month.
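
For what it’s worth, Shapiro’s headline figures roughly hang together arithmetically. The back-of-the-envelope sketch below is ours, not Shapiro’s: the 80 million subscriber base and $112.50 average monthly bill are assumptions implied by his own numbers, not figures from the trial record.

```python
# Back-of-the-envelope check of Shapiro's initial estimate. The subscriber
# base and average bill are our assumptions, inferred from his figures.
annual_harm = 436_000_000        # dollars per year (Shapiro's initial estimate)
subscribers = 80_000_000         # assumed U.S. pay-TV subscriber base
avg_monthly_bill = 112.50        # assumed average bill implied by the 0.4% figure

per_subscriber_month = annual_harm / subscribers / 12
print(f"${per_subscriber_month:.2f} per month")                   # ~$0.45
print(f"{per_subscriber_month / avg_monthly_bill:.1%} increase")  # ~0.4%
```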

The government’s “blackout” arguments seemed to get lost in the shifting sands of the survey results. Blackouts mattered, according to Shapiro, because “Even though they don’t happen very much, that’s the key to leverage.” His testimony on the potential for price hikes relied heavily on a study commissioned by Charter Communications Inc., which opposes the merger. Stefan Bewley, a director at consulting firm Altman Vilandrie & Co., which produced the study, testified the report predicted Charter would lose 9 percent of its subscribers if it lost access to Turner programming.

Under cross-examination by AT&T’s lawyer, Bewley acknowledged that what was described as a “final” version of the study, presented to Charter in April last year, put the subscriber loss estimate at 5 percent. When confronted with his own emails about the change to 9 percent, Bewley said he agreed to the update after meeting with Charter. At the time of the change from 5 percent to 9 percent, Charter was discussing its opposition to the merger with the Justice Department.

Bewley noted that the change occurred because he saw that some of the figures his team had gathered about Turner networks were outliers, with a range of subscriber losses of 5 percent on the low end and 14 percent on the high end. He indicated his team came up with a “weighted average” of 9 percent.
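
Bewley’s actual weighting scheme was not disclosed, so the sketch below is purely hypothetical. It shows only that a “weighted average” of the 5 percent and 14 percent endpoints lands at 9 percent, rather than the 9.5 percent midpoint, if the low-end estimate gets a modestly heavier weight.

```python
# Hypothetical weighting -- Bewley's actual methodology was not disclosed.
low_estimate, high_estimate = 0.05, 0.14
weight_on_low = 5 / 9            # ~0.56, chosen here to hit 9 percent exactly
weighted_avg = weight_on_low * low_estimate + (1 - weight_on_low) * high_estimate
print(f"{weighted_avg:.1%}")     # 9.0%
```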

This 5/9/14 percent distinction seems critical to the government’s claim that the merger would raise consumer prices. Referring to Shapiro’s analysis, AT&T-Time Warner’s lead counsel, Daniel Petrocelli, asked Bewley: “Are you aware that if he’d used 5 percent there would have been a price increase of zero?” Bewley said he was not aware.

At trial, AT&T and Turner executives testified that they couldn’t credibly threaten to withhold Turner programming from rivals because the networks’ profitability depends on wide distribution. In addition, one of AT&T’s expert witnesses, University of California, Berkeley business and economics professor Michael Katz, testified about what he said were the benefits of AT&T’s offer to use “baseball style” arbitration with rival pay TV distributors if the two sides couldn’t agree on what fees to pay for Time Warner’s Turner networks. With baseball style arbitration, both sides submit their final offer to an arbitrator, who determines which of the two offers is most appropriate.
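
The mechanics of final-offer arbitration are simple enough to sketch. The snippet below is a minimal illustration of the general mechanism Katz described, not the actual terms of AT&T’s offer; the function and the per-subscriber fees are hypothetical.

```python
# Minimal sketch of final-offer ("baseball style") arbitration: the arbitrator
# must pick one side's final offer outright -- here, whichever offer is closer
# to the arbitrator's own independent valuation. All figures are hypothetical.
def final_offer_arbitration(distributor_offer: float,
                            programmer_offer: float,
                            arbitrator_valuation: float) -> float:
    """Return the final offer closest to the arbitrator's valuation."""
    if abs(distributor_offer - arbitrator_valuation) <= abs(programmer_offer - arbitrator_valuation):
        return distributor_offer
    return programmer_offer

# Hypothetical per-subscriber monthly fees for the Turner networks:
print(final_offer_arbitration(1.80, 2.60, 2.30))  # -> 2.6
```

Because the arbitrator cannot split the difference, an extreme offer is likely to lose outright, which tends to pull both sides’ final offers toward the middle.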

Under the terms of the arbitration offer, AT&T has agreed not to black out its networks for the duration of negotiations with distributors. Dennis Carlton, an economics professor at the University of Chicago, said Shapiro’s model was unreliable because it did not account for that commitment. Shapiro conceded he did not factor it into his study, saying that he would need an entirely different model to study how the arbitration agreement would affect the merger.

Coordination with Comcast/NBCUniversal

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

At trial, the Justice Department seemed to abandon any claim that the merged company would unilaterally restrict access to online “virtual MVPDs.” The government’s case, made by its expert Shapiro, ended up being that there would be a “risk” and “danger” that AT&T and Comcast would “coordinate” to withhold programming in a way that would harm emerging online multichannel distributors. Under cross-examination, however, he conceded that his opinions were not based on a “quantifiable model.” Shapiro testified that he had no opinion whether the odds of such coordination would be greater than 1 percent.

Doing no favors to its case, the government turned to a seemingly contradictory argument: that AT&T and Comcast would coordinate to force virtual providers to take too much content. Emerging online multichannel distributors pitch their offerings as “skinny bundles” with a limited selection of the more popular channels. By forcing these providers to take more channels, the government argued, the skinny-bundle business model would be undermined, in a version of raising rivals’ costs. This theory did not get much play at trial, but it suggests the government was trying to have its cake and eat it, too.

Except in this case, as with much of the government’s case in this matter, the cake was not completely baked.


A few weeks ago I posted a preliminary assessment of the relative antitrust risk of a Comcast vs. Disney purchase of 21st Century Fox assets. (Also available in pdf as an ICLE Issue brief, here). On the eve of Judge Leon’s decision in the AT&T/Time Warner merger case, it seems worthwhile to supplement that assessment by calling attention to Assistant Attorney General Makan Delrahim’s remarks at The Deal’s Corporate Governance Conference last week. Somehow these remarks seem to have passed with little notice, but, given their timing, they deserve quite a bit more attention.

In brief, Delrahim spent virtually the entirety of his short remarks making and remaking the fundamental point at the center of my own assessment of the antitrust risk of a possible Comcast/Fox deal: The DOJ’s challenge of the AT&T/Time Warner merger tells you nothing about the likelihood that the agency would challenge a Comcast/Fox merger.

To begin, in my earlier assessment I pointed out that most vertical mergers are approved by antitrust enforcers, and I quoted Bruce Hoffman, Director of the FTC’s Bureau of Competition, who noted that:

[V]ertical merger enforcement is still a small part of our merger workload….

* * *

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

I may not have made it very clear in that post, but, of course, most horizontal mergers are approved by enforcers, as well.

Well, now we have the head of the DOJ Antitrust Division making the same point:

I’d say 95 or 96 percent of mergers — horizontal or vertical — are cleared — routinely…. Most mergers — horizontal or vertical — are procompetitive, or have no adverse effect.

Delrahim reinforced the point in an interview with The Street in advance of his remarks. Asked by a reporter, “what are your concerns with vertical mergers?,” Delrahim quickly corrected the questioner: “Well, I don’t have any concerns with most vertical mergers….”

But Delrahim went even further, noting that nothing about the Division’s approach to vertical mergers has changed since the AT&T/Time Warner case was brought — despite the efforts of some reporters to push a different narrative:

I understand that some journalists and observers have recently expressed concern that the Antitrust Division no longer believes that vertical mergers can be efficient and beneficial to competition and consumers. Some point to our recent decision to challenge some aspects of the AT&T/Time Warner merger as a supposed bellwether for a new vertical approach. Rest assured: These concerns are misplaced…. We have long recognized that vertical integration can and does generate efficiencies that benefit consumers. Indeed, most vertical mergers are procompetitive or competitively neutral. The same is of course true in horizontal transactions. To the extent that any recent action points to a closer review of vertical mergers, it’s not new…. [But,] to reiterate, our approach to vertical mergers has not changed, and our recent enforcement efforts are consistent with the Division’s long-standing, bipartisan approach to analyzing such mergers. We’ll continue to recognize that vertical mergers, in general, can yield significant economic efficiencies and benefit to competition.

Delrahim concluded his remarks by criticizing those who assume that the agency’s future enforcement decisions can be inferred from past cases with different facts, stressing that the agency employs an evidence-based, case-by-case approach to merger review:

Lumping all vertical transactions under the same umbrella, by comparison, obscures the reality that we conduct a vigorous investigation, aided by over 50 PhD economists in these markets, to make sure that we as lawyers don’t steer too far without the benefits of their views in each of these instances.

Arguably this was a rebuke directed at those, like Disney and Fox’s board, who are quick to ascribe increased regulatory risk to a Comcast/Fox tie-up because the DOJ challenged the AT&T/Time Warner merger. Recall that, in its proxy statement, the Fox board explained that it rejected Comcast’s earlier bid in favor of Disney’s in part because of “the regulatory risks presented by the DOJ’s unanticipated opposition to the proposed vertical integration of the AT&T / Time Warner transaction.”

I’ll likely have more to add once the AT&T/Time Warner decision is out. But in the meantime (and with apologies to Mark Twain), the takeaway is clear: Reports of the death of vertical mergers have been greatly exaggerated.