This is the first in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision. It draws on research from a soon-to-be published ICLE white paper.

The European Commission’s recent Google Android decision will surely go down as one of the most important competition proceedings of the past decade. And yet, an in-depth reading of the 328-page decision should leave attentive readers with a bitter taste.

One of the Commission’s most significant findings is that the Android operating system and Apple’s iOS are not in the same relevant market, along with the related conclusion that Apple’s App Store and Google Play are also in separate markets.

This blog post points to a series of flaws that undermine the Commission’s reasoning on this point. As a result, the Commission’s claim that Google and Apple operate in separate markets is mostly unsupported.

1. Everyone but the European Commission thinks that iOS competes with Android

Surely the assertion that the two predominant smartphone ecosystems in Europe don’t compete with each other will come as a surprise to… anyone paying attention: 

Apple 10-K:

The Company believes the availability of third-party software applications and services for its products depends in part on the developers’ perception and analysis of the relative benefits of developing, maintaining and upgrading such software and services for the Company’s products compared to competitors’ platforms, such as Android for smartphones and tablets and Windows for personal computers.

Google 10-K:

We face competition from: Companies that design, manufacture, and market consumer electronics products, including businesses that have developed proprietary platforms.

This leads to a critical question: Why did the Commission choose to depart from the instinctive conclusion that Google and Apple compete vigorously against each other in the smartphone and mobile operating system market? 

As explained below, its justifications for doing so were deeply flawed.

2. It does not matter that OEMs cannot license iOS (or the App Store)

One of the main reasons why the Commission chose to exclude Apple from the relevant market is that OEMs cannot license Apple’s iOS or its App Store.

But is it really possible to infer that Google and Apple do not compete against each other because their products are not substitutes from OEMs’ point of view? 

The answer to this question is likely no.

Relevant markets, and market shares, are merely a proxy for market power (which is the appropriate baseline upon which to build a competition investigation). As Louis Kaplow puts it:

[T]he entire rationale for the market definition process is to enable an inference about market power.

If there is a competitive market for Android and Apple smartphones, then it is somewhat immaterial that Google is the only firm to successfully offer a licensable mobile operating system (as opposed to Apple and Blackberry’s “closed” alternatives).

By exercising its “power” against OEMs by, for instance, degrading the quality of Android, Google would, by the same token, weaken its competitive position against Apple. Google’s competition with Apple in the smartphone market thus constrains Google’s behavior and limits its market power in Android-specific aftermarkets (on this topic, see Borenstein et al., and Klein).

This is not to say that Apple’s iOS (and App Store) is, or is not, in the same relevant market as Google Android (and Google Play). But the fact that OEMs cannot license iOS or the App Store is mostly immaterial for market definition purposes.

3. Google would find itself in a more “competitive” market if it decided to stop licensing the Android OS

The Commission’s reasoning also leads to illogical outcomes from a policy standpoint. 

Google could suddenly find itself in a more “competitive” market if it decided to stop licensing the Android OS and operated a closed platform (like Apple does). The direct purchasers of its products – consumers – would then be free to switch between Apple and Google’s products.

As a result, an act that has no obvious effect on actual market power — and that could have a distinctly negative effect on consumers — could nevertheless significantly alter the outcome of competition proceedings on the Commission’s theory. 

One potential consequence is that firms might decide to close their platforms (or refuse to open them in the first place) in order to avoid competition scrutiny (because maintaining a closed platform might effectively lead competition authorities to place them within a wider relevant market). This might ultimately reduce product differentiation among mobile platforms (due to the disappearance of open ecosystems) – the exact opposite of what the Commission sought to achieve with its decision.

This is, among other things, what Antonin Scalia objected to in his Eastman Kodak dissent: 

It is quite simply anomalous that a manufacturer functioning in a competitive equipment market should be exempt from the per se rule when it bundles equipment with parts and service, but not when it bundles parts with service [when the manufacturer has a high share of the “market” for its machines’ spare parts]. This vast difference in the treatment of what will ordinarily be economically similar phenomena is alone enough to call today’s decision into question.

4. Market shares are a poor proxy for market power, especially in narrowly defined markets

Finally, the problem with the Commission’s decision is not so much that it chose to exclude Apple from the relevant markets, but that it then cited the resulting market shares as evidence of Google’s alleged dominance:

(440) Google holds a dominant position in the worldwide market (excluding China) for the licensing of smart mobile OSs since 2011. This conclusion is based on: 

(1) the market shares of Google and competing developers of licensable smart mobile OSs […]

In doing so, the Commission ignored one of the critical findings of the law & economics literature on market definition and market power: Although defining a narrow relevant market may not itself be problematic, the market shares thus adduced provide little information about a firm’s actual market power. 

For instance, Richard Posner and William Landes have argued that:

If instead the market were defined narrowly, the firm’s market share would be larger but the effect on market power would be offset by the higher market elasticity of demand; when fewer substitutes are included in the market, substitution of products outside of the market is easier. […]

If all the submarket approach signifies is willingness in appropriate cases to call a narrowly defined market a relevant market for antitrust purposes, it is unobjectionable – so long as appropriately less weight is given to market shares computed in such a market.
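
To make the Landes-Posner point concrete, here is a minimal numerical sketch in Python. It assumes their standard dominant-firm formula for the Lerner index, L = S / (Em + Es(1 − S)); the shares and elasticities are purely illustrative assumptions, not figures from the Android decision.

```python
# Illustrative only: the shares and elasticities below are assumed numbers,
# not data from the Google Android case.

def lerner_index(share, market_demand_elasticity, rival_supply_elasticity):
    """Landes-Posner dominant-firm Lerner index:
    L = S / (E_demand + E_rival_supply * (1 - S))."""
    return share / (market_demand_elasticity + rival_supply_elasticity * (1 - share))

# Broad market definition: smaller share, but relatively inelastic demand,
# because most substitutes are already inside the market.
broad = lerner_index(share=0.40, market_demand_elasticity=2.0, rival_supply_elasticity=1.0)

# Narrow market definition: larger share, but far more elastic demand,
# because close substitutes (e.g., a rival ecosystem) sit outside the market.
narrow = lerner_index(share=0.90, market_demand_elasticity=8.0, rival_supply_elasticity=1.0)

print(f"Implied price-cost margin, broad market:  {broad:.2f}")   # about 0.15
print(f"Implied price-cost margin, narrow market: {narrow:.2f}")  # about 0.11

# The much larger share in the narrow market implies, if anything, less market
# power once the higher market elasticity of demand is accounted for.
```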

Likewise, Louis Kaplow observes that:

In choosing between a narrower and a broader market (where, as mentioned, we are supposing that the truth lies somewhere in between), one would ask whether the inference from the larger market share in the narrower market overstates market power by more than the inference from the smaller market share in the broader market understates market power. If the lesser error lies with the former choice, then the narrower market is the relevant market; if the latter minimizes error, then the broader market is best.

The Commission failed to heed these important findings.

5. Conclusion

The upshot is that Apple should not have been automatically excluded from the relevant market. 

To be clear, the Commission did discuss this competition from Apple later in the decision. And it also asserted that its findings would hold even if Apple were included in the OS and App Store markets, because Android’s share of devices sold would have ranged from 45% to 79%, depending on the year (although this ignores other potential metrics such as the value of devices sold or Google’s share of advertising revenue).

However, by gerrymandering the market definition (which European case law likely permitted it to do), the Commission ensured that Google would face an uphill battle, starting from a very high market share and thus a strong presumption of dominance. 

Moreover, that it might reach the same result by adopting a more accurate market definition is no excuse for adopting a faulty one and resting its case (and undertaking its entire analysis) on it. In fact, the Commission’s choice of a faulty market definition underpins its entire analysis, and is far from a “harmless error.” 

I shall discuss the consequences of this error in an upcoming blog post. Stay tuned.

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society,  New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning. 

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though its retail business has been profitable for some time), on the theory that someday down the line it can raise prices after it has driven all retail competition out of the market.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary if antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can either be used to show that antitrust frameworks in Europe are superior to those in the United States.

In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data. 

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, titled “Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers,” takes it a step further. Data from legacy U.S. airline mergers appears to show they have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While prices are lower on average in Europe for broadband, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, “U.S. vs. European Broadband Deployment: What Do the Data Say?”, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th, compared with 28 other (mostly European) countries, once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th of the 29 countries studied, if data usage is included (Model 3), and to 7th if content quality (i.e., websites available in the local language) is taken into consideration (Model 4).

Country   Model 1 Price   Rank   Model 2 Price   Rank   Model 3 Price   Rank   Model 4 Price   Rank
Australia $78.30 28 $82.81 27 $102.63 26 $84.45 23
Austria $48.04 17 $60.59 15 $73.17 11 $74.02 17
Belgium $46.82 16 $66.62 21 $75.29 13 $81.09 22
Canada $69.66 27 $74.99 25 $92.73 24 $76.57 19
Chile $33.42 8 $73.60 23 $83.81 20 $88.97 25
Czech Republic $26.83 3 $49.18 6 $69.91 9 $60.49 6
Denmark $43.46 14 $52.27 8 $69.37 8 $63.85 8
Estonia $30.65 6 $56.91 12 $81.68 19 $69.06 12
Finland $35.00 9 $37.95 1 $57.49 2 $51.61 1
France $30.12 5 $44.04 4 $61.96 4 $54.25 3
Germany $36.00 12 $53.62 10 $75.09 12 $66.06 11
Greece $35.38 10 $64.51 19 $80.72 17 $78.66 21
Iceland $65.78 25 $73.96 24 $94.85 25 $90.39 26
Ireland $56.79 22 $62.37 16 $76.46 14 $64.83 9
Italy $29.62 4 $48.00 5 $68.80 7 $59.00 5
Japan $40.12 13 $53.58 9 $81.47 18 $72.12 15
Latvia $20.29 1 $42.78 3 $63.05 5 $52.20 2
Luxembourg $56.32 21 $54.32 11 $76.83 15 $72.51 16
Mexico $35.58 11 $91.29 29 $120.40 29 $109.64 29
Netherlands $44.39 15 $63.89 18 $89.51 21 $77.88 20
New Zealand $59.51 24 $81.42 26 $90.55 22 $76.25 18
Norway $88.41 29 $71.77 22 $103.98 27 $96.95 27
Portugal $30.82 7 $58.27 13 $72.83 10 $71.15 14
South Korea $25.45 2 $42.07 2 $52.01 1 $56.28 4
Spain $54.95 20 $87.69 28 $115.51 28 $106.53 28
Sweden $52.48 19 $52.16 7 $61.08 3 $70.41 13
Switzerland $66.88 26 $65.01 20 $91.15 23 $84.46 24
United Kingdom $50.77 18 $63.75 17 $79.88 16 $65.44 10
United States $58.00 23 $59.84 14 $64.75 6 $62.94 7
Average $46.55 $61.70 $80.24 $73.73

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There’s no escaping mental models to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

Today, Reuters reports that Germany-based ThyssenKrupp has received bids from three bidding groups for a majority stake in the firm’s elevator business. Finland’s Kone teamed with private equity firm CVC to bid on the company. Private equity firms Blackstone and Carlyle joined with the Canada Pension Plan Investment Board to submit a bid. A third bid came from Advent, Cinven, and the Abu Dhabi Investment Authority.

Also today — in anticipation of the long-rumored and much-discussed sale of ThyssenKrupp’s elevator business — the International Center for Law & Economics released The Antitrust Risks of Four To Three Mergers: Heightened Scrutiny of a Potential ThyssenKrupp/Kone Merger, by Eric Fruits and Geoffrey A. Manne. This study examines the heightened scrutiny of four to three mergers by competition authorities in the current regulatory environment, using a potential ThyssenKrupp/Kone merger as a case study. 

In recent years, regulators have become more aggressive in merger enforcement in response to populist criticisms that lax merger enforcement has led to the rise of anticompetitive “big business.” In this environment, it is easy to imagine regulators intensely scrutinizing and challenging or conditioning nearly any merger that substantially increases concentration. 

This potential deal provides an opportunity to highlight the likely challenges, complexity, and cost that regulatory scrutiny of such mergers actually entails — and it is likely to be a far cry from the lax review and permissive decisionmaking of antitrust critics’ imagining.

In the case of a potential ThyssenKrupp/Kone merger, the combined entity would face lengthy, costly, and duplicative review in multiple jurisdictions, any one of which could effectively block the merger or impose onerous conditions. It would face the automatic assumption of excessive concentration in several of these, including the US, EU, and Canada. In the US, the deal would also face heightened scrutiny based on political considerations, including the perception that the deal would strengthen a foreign firm at the expense of a domestic supplier. It would also face the risk of politicized litigation from state attorneys general, and potentially the threat of extractive litigation by competitors and customers.

Whether the merger would actually entail anticompetitive risk may, unfortunately, be of only secondary importance in determining the likelihood and extent of a merger challenge or the imposition of onerous conditions.

A “highly concentrated” market

In many jurisdictions, the four to three merger would likely trigger a “highly concentrated” market designation. With the merging firms having a dominant share of the market for elevators, the deal would be viewed as problematic in several areas (an illustrative HHI calculation follows the list below):

  • The US (share > 35%, HHI > 3,000, HHI increase > 700), 
  • Canada (share of approximately 50%, HHI > 2,900, HHI increase of 1,000), 
  • Australia (share > 40%, HHI > 3,100, HHI increase > 500), 
  • Europe (shares of 33–65%, HHIs in excess of 2,700, and HHI increases of 270 or higher in Sweden, Finland, Netherlands, Austria, France, and Luxembourg).
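
For readers unfamiliar with the metric, the following short Python sketch shows how HHI levels and deltas of the kind listed above are computed. The market shares used are made-up placeholders, not the shares from the ICLE study.

```python
# Hypothetical shares for a four-to-three merger; all numbers are placeholders.

def hhi(shares_in_percent):
    """Herfindahl-Hirschman Index: the sum of squared market shares."""
    return sum(s ** 2 for s in shares_in_percent)

pre_merger_shares = [30, 25, 25, 20]       # four firms
post_merger_shares = [30 + 25, 25, 20]     # the first two firms combine

pre = hhi(pre_merger_shares)               # 2,550
post = hhi(post_merger_shares)             # 4,050
delta = post - pre                         # 1,500

print(f"Pre-merger HHI:  {pre}")
print(f"Post-merger HHI: {post}")
print(f"HHI increase:    {delta}")

# Under the 2010 US Horizontal Merger Guidelines, a post-merger HHI above
# 2,500 combined with an increase of more than 200 points is presumed likely
# to enhance market power; the thresholds listed above would be cleared easily.
```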

As with most mergers, a potential ThyssenKrupp/Kone merger would likely generate “hot docs” that would be used to support the assumption of anticompetitive harm from the increase in concentration, especially in light of past allegations of price fixing in the industry and a decision by the European Commission in 2007 to fine certain companies in the industry for alleged anticompetitive conduct.

Political risks

The merger would also surely face substantial political risks in the US and elsewhere from the perception the deal would strengthen a foreign firm at the expense of a domestic supplier. President Trump’s administration has demonstrated a keen interest in protecting what it sees as US interests vis-à-vis foreign competition. As a high-rise and hotel developer who has shown a willingness to intervene in antitrust enforcement to protect his interests, President Trump may have a heightened personal interest in a ThyssenKrupp/Kone merger. 

To the extent that US federal, state, and local governments purchase products from the merging parties, the deal would likely be subjected to increased attention from federal antitrust regulators as well as states’ attorneys general. Indeed, the US Department of Justice (DOJ) has created a “Procurement Collusion Strike Force” focused on “deterring, detecting, investigating and prosecuting antitrust crimes . . . which undermine competition in government procurement. . . .”

The deal may also face scrutiny from EC, UK, Canadian, and Australian competition authorities, each of which has exhibited increased willingness to thwart such mergers. For example, the EU recently blocked a proposed merger between the transport (rail) services of EU firms, Siemens and Alstom. The UK recently blocked a series of major deals that had only limited competitive effects on the UK. In one of these, Thermo Fisher Scientific’s proposed acquisition of Roper Technologies’ Gatan subsidiary was not challenged in the US, but the deal was abandoned after the UK CMA decided to block the deal despite its limited connections to the UK.

Economic risks

In addition to the structural and political factors that may lead to blocking a four to three merger, several economic factors may further exacerbate the problem. While these, too, may be wrongly deemed problematic in particular cases by reviewing authorities, they are — relatively at least — better-supported by economic theory in the abstract. Moreover, even where wrongly applied, they are often impossible to refute successfully given the relevant standards. And such alleged economic concerns can act as an effective smokescreen for blocking a merger based on the sorts of political and structural considerations discussed above. Some of these economic factors include:

  • Barriers to entry. IBISWorld identifies barriers to entry to include economies of scale, long-standing relationships with existing buyers, as well as long records of safety and reliability. Strictly speaking, these are not costs borne only by a new entrant, and thus should not be deemed competitively-relevant entry barriers. Yet merger review authorities the world over fail to recognize this distinction, and routinely scuttle mergers based simply on the costs faced by additional competitors entering the market.
  • Potential unilateral effects. The extent of direct competition between the products and services sold by the merging parties is a key part of the evaluation of unilateral price effects. Competition authorities would likely consider a significant range of information to evaluate the extent of direct competition between the products and services sold by ThyssenKrupp and its merger partner. In addition to “hot docs,” this information could include won/lost bid reports as well as evidence from discount approval processes and customer switching patterns. Because the purchase of elevator and escalator products and services involves negotiation by sophisticated and experienced buyers, it is likely that this type of bid information would be readily available for review.
  • A history of coordinated conduct involving ThyssenKrupp and Kone. Competition authorities will also consider the risk that a four to three merger will increase the ability and likelihood for the remaining, smaller number of firms to collude. In 2007 the European Commission imposed a €992 million cartel fine on five elevator firms: ThyssenKrupp, Kone, Schindler, United Technologies, and Mitsubishi. At the time, it was the largest-ever cartel fine. Several companies, including Kone and UTC, admitted wrongdoing.

Conclusion

As “populist” antitrust gains more traction among enforcers aiming to stave off criticisms of lax enforcement, superficial and non-economic concerns have increased salience. The simple benefit of a resounding headline — “The US DOJ challenges increased concentration that would stifle the global construction boom” — signaling enforcers’ efforts to thwart further increases in concentration and save blue collar jobs is likely to be viewed by regulators as substantial. 

Coupled with the arguably more robust, potential economic arguments involving unilateral and coordinated effects arising from such a merger, a four to three merger like a potential ThyssenKrupp/Kone transaction would be sure to attract significant scrutiny and delay. Any arguments that such a deal might actually decrease prices and increase efficiency are — even if valid — less likely to gain as much traction in today’s regulatory environment.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law, University of Southern California Gould School of Law.

It has become virtually received wisdom that antitrust law has been subdued by economic analysis into a state of chronic underenforcement. Following this line of thinking, many commentators applauded the Antitrust Division’s unsuccessful campaign to oppose the acquisition of Time-Warner by AT&T and some (unsuccessfully) urged the Division to take stronger action against the acquisition of most of Fox by Disney. The arguments in both cases followed a similar “big is bad” logic. Consolidating control of a large portfolio of creative properties (Fox plus Disney) or integrating content production and distribution capacities (Time-Warner plus AT&T) would exacerbate market concentration, leading to reduced competition and some combination of higher prices and reduced output for consumers.

Less than 18 months after the closing of both transactions, those concerns seem to have been largely unwarranted. 

Far from precipitating any decline in product output or variety, both transactions have been followed by a vigorous burst of competition in the digital streaming market. In place of the Amazon plus Netflix bottleneck (with Hulu trailing behind), consumers now, or in 2020 will, have a choice of at least four new streaming services with original content, Disney+, AT&T’s “HBO Max”, Apple’s “Apple TV+” and Comcast’s NBCUniversal “Peacock” services. Critically, each service relies on a formidable combination of creative, financing and technological capacities that can only be delivered by a firm of sufficiently large size and scale.  As modern antitrust law has long recognized, it turns out that “big” is sometimes not bad.

Where’s the Harm?

At present, it is hard to see any net consumer harm arising from the concurrence of increased size and increased competition. 

On the supply side, this is just the next episode in the ongoing “Golden Age of Television” in which content producers have enjoyed access to exceptional funding to support high-value productions.  It has been reported that Apple TV+’s new “Morning Show” series will cost $15 million per episode while similar estimates are reported for hit shows such as HBO’s “Game of Thrones” and Netflix’s “The Crown.”  Each of those services is locked in a fierce competition to gain and retain sufficient subscribers to earn a return on those investments, which leads directly to the next happy development.

On the demand side, consumers enjoy a proliferating array of streaming services, ranging from free ad-supported services to subscription ad-free services. Consumers can now easily “cut the cord” and assemble a customized bundle of preferred content from multiple services, each of which is less costly than a traditional cable package and can generally be cancelled at any time.  Current market performance does not plausibly conform to the declining output, limited variety or increasing prices that are the telltale symptoms of a less than competitive market.

Real-World v. Theoretical Markets

The market’s favorable trajectory following these two controversial transactions should not be surprising. When scrutinized against the actual characteristics of real-world digital content markets, rather than stylized theoretical models or antiquated pre-digital content markets, the arguments leveled against these transactions never made much sense. There were two fundamental and related errors. 

Error #1: Content is Scarce

Advocates for antitrust intervention assumed that entry barriers into the content market were high, in which case it followed that the owner of an especially valuable creative portfolio could exert pricing power to consumers’ detriment. Yet, in reality, funding for content production is plentiful and even a service that has an especially popular show is unlikely to have sustained pricing power in the face of a continuous flow of high-value productions being released by formidable competitors. The amounts being spent on content in 2019 by leading streaming services are unprecedented, ranging from a reported $15 billion for Netflix to an estimated $6 billion for Amazon and Apple TV+ to an estimated $3.9 billion for AT&T’s HBO Max. It is also important to note that a hit show is often a mobile asset that a streaming or other video distribution service has licensed from independent production companies and other rights holders. Once the existing deal expires, those rights are available for purchase by the highest bidder. For example, in 2019, Netflix purchased the streaming rights to “Seinfeld”, Viacom purchased the cable rights to “Seinfeld”, and HBO Max purchased the streaming rights to “South Park.” Similarly, the producers behind a hit show are always free to take their talents to competitors once any existing agreement terminates.

Error #2: Home Pay-TV is a “Monopoly”

Advocates of antitrust action were looking at the wrong market—or more precisely, the market as it existed about a decade ago. The theory that AT&T’s acquisition of Time-Warner’s creative portfolio would translate into pricing power in the home pay-TV market might have been plausible when consumers had no reasonable alternative to the local cable provider. But this argument makes little sense today when consumers are fleeing bulky home pay-TV bundles for cheaper cord-cutting options that deliver more targeted content packages to a mobile device. In 2019, a “home” pay-TV market is fast becoming an anachronism and hence a home pay-TV “monopoly” largely reduces to a formalism that, with the possible exception of certain live programming, is unlikely to translate into meaningful pricing power.

Wait a Second! What About the HBO Blackout?

A skeptical reader might reasonably object that this mostly rosy account of the post-merger home video market is unpersuasive since it does not address the ongoing blackout of HBO (now an AT&T property) on the Dish satellite TV service. Post-merger commentary that remains skeptical of the AT&T/Time-Warner merger has focused on this dispute, arguing that it “proves” that the government was right since AT&T is purportedly leveraging its new ownership of HBO to disadvantage one of its competitors in the pay-TV market. This interpretation tends to miss the forest for the trees (or more precisely, a tree).  

The AT&T/Dish dispute over HBO is only one of over 200 “carriage” disputes resulting in blackouts that have occurred this year, which continues an upward trend since approximately 2011. Some of those include Dish’s dispute with Univision (settled in March 2019 after a nine-month blackout) and AT&T’s dispute (as pay-TV provider) with Nexstar (settled in August 2019 after a nearly two-month blackout). These disputes reflect the fact that the flood of subscriber defections from traditional pay-TV to mobile streaming has made it difficult for pay-TV providers to pass on the fees sought by content owners. As a result, some pay-TV providers adopt the negotiating tactic of choosing to drop certain content until the terms improve, just as AT&T, in its capacity as a pay-TV provider, dropped CBS for three weeks in July and August 2019 pending renegotiation of licensing terms. It is the outward shift in the boundaries of the economically relevant market (from home to home-plus-mobile video delivery), rather than market power concerns, that best accounts for periodic breakdowns in licensing negotiations.  This might even be viewed positively from an antitrust perspective since it suggests that the “over the top” market is putting pressure on the fees that content owners can extract from providers in the traditional pay-TV market.

Concluding Thoughts

It is common to argue today that antitrust law has become excessively concerned about “false positives”– that is, the possibility of blocking a transaction or enjoining a practice that would have benefited consumers. Pending future developments, this early post-mortem on the regulatory and judicial treatment of these two landmark media transactions suggests that there are sometimes good reasons to stay the hand of the court or regulator. This is especially the case when a generational market shift is in progress and any regulator’s or judge’s foresight is likely to be guesswork. Antitrust law’s “failure” to stop these transactions may turn out to have been a ringing success.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.

“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”

Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.

They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”

An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”

New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us.

Facebook, The New Republic declares, “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”

Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”

And then there’s Amazon. It’s too efficient. Alex Salkever worries in Fortune that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”

Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.

Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that—in his judgment—“enrich lives” and “strengthen society.”

As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.

The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting her to forego using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.

Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten-thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”

One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games might broaden their attention spans and improve their abstract thinking.

Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.

Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.

We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like”, and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.

One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.

There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.

Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)

The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.

In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned and priggish.

In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”

There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.

Wall Street Journal commentator Greg Ip reviews Thomas Philippon’s forthcoming book, The Great Reversal: How America Gave Up on Free Markets. Ip describes a “growing mountain” of research on industry concentration in the U.S. and reports that Philippon concludes competition has declined over time, harming U.S. consumers.

In one example, Philippon points to air travel. He notes that concentration in the U.S. has increased rapidly—spiking since the Great Recession—while concentration in the EU has increased modestly. At the same time, Ip reports “U.S. airlines are now far more profitable than their European counterparts.” (Although it’s debatable whether a five percentage point difference in net profit margin is “far more profitable”). 

On first impression, the figures fit nicely with the populist antitrust narrative: As concentration in the U.S. grew, so did profit margins. Closer inspection raises some questions, however. 

For example, the U.S. airline industry had a negative net profit margin in each of the years prior to the spike in concentration. While negative profits may be good for consumers, it would be a stretch to argue that long-run losses are good for competition as a whole. At some point one or more of the money losing firms is going to pull the ripcord. Which raises the issue of causation.

Just looking at the figures from the WSJ article, one could argue that rather than concentration driving profit margins, instead profit margins are driving concentration. Indeed, textbook IO economics would indicate that in the face of losses, firms will exit until economic profit equals zero. Paraphrasing Alfred Marshall, “Which blade of the scissors is doing the cutting?”

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to Philippon’s conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.

Regressing the U.S. airfare price index against Philippon’s concentration figures in the figure above (and controlling for general inflation) finds that if U.S. concentration in 2015 had been the same as in 1995, U.S. airfares would be about 2.8% lower. That a roughly 1,250-point increase in HHI is associated with only a 2.8% increase in prices indicates that the increased concentration in U.S. airlines has led to no economically significant increase in consumer prices.
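
For the curious, below is a minimal sketch of that kind of specification. The data are fabricated placeholders so the snippet runs on its own; the real inputs would be the BLS airfare index, the overall CPI, and Philippon’s HHI series, so the output does not reproduce the 2.8% estimate.

```python
# Illustrative sketch only: regress log(airfare index) on airline HHI,
# controlling for general inflation, as described above. All numbers below
# are invented placeholders, not the actual series.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
years = np.arange(1995, 2016)
df = pd.DataFrame({
    "year": years,
    "hhi": np.linspace(1300, 2550, len(years)),    # placeholder ~1,250-point rise
    "cpi_all": 100 * 1.022 ** (years - 1995),      # placeholder inflation path
})
# Placeholder airfare index: tracks inflation plus a small HHI effect and noise.
df["airfare"] = 80 * (df["cpi_all"] / 100) * np.exp(
    0.00002 * df["hhi"] + rng.normal(0, 0.01, len(years)))

model = smf.ols("np.log(airfare) ~ hhi + np.log(cpi_all)", data=df).fit()
print(model.summary())

# Counterfactual: roll HHI back from its 2015 level to its 1995 level.
delta = df["hhi"].iloc[-1] - df["hhi"].iloc[0]
print(f"Implied price change from the HHI increase: "
      f"{100 * model.params['hhi'] * delta:.1f}%")
```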

Also, if consumers were truly worse off, one would expect to see a drop-off or slowdown in the use of air travel. Eyeballing passenger data does not support the populist narrative. Instead, we see airlines carrying more passengers and consumers paying lower prices on average.

While it’s true that low-cost airlines have shaken up air travel in the EU, the differences are not solely explained by differences in market concentration. For example, U.S. regulations prohibit foreign airlines from operating domestic flights while EU carriers compete against operators from other parts of Europe. While the WSJ’s figures tell an interesting story of concentration, prices, and profits, they do not provide a compelling case of anticompetitive conduct.

A spate of recent newspaper investigations and commentary has focused on Apple allegedly discriminating against rivals in the App Store. The underlying assumption is that Apple, as a vertically integrated entity that operates a platform for third-party apps and also makes its own apps, is acting nefariously whenever it “discriminates” against rival apps through prioritization, enters popular app markets, or charges a “tax” or “surcharge” on rival apps. 

For most people, the word discrimination has a pejorative connotation of animus based upon prejudice: racism, sexism, homophobia. One of the definitions you will find in the dictionary reflects this. But another definition is a lot less charged: the act of making or perceiving a difference. (This is what people mean when they say that a person has a discriminating palate, or a discriminating taste in music, for example.)

In economics, discrimination can be a positive attribute. For instance, effective price discrimination can result in wealthier consumers paying a higher price than less well-off consumers for the same product or service, and it can ensure that products and services are in fact available to less-wealthy consumers in the first place. That would seem to be a socially desirable outcome (although under some circumstances, perfect price discrimination can be socially undesirable). 
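
To make that concrete, here is a toy numerical illustration of my own (the numbers are invented, not drawn from any of the articles discussed): with a single price, the seller’s most profitable move can be to serve only high-willingness-to-pay buyers, while segmented pricing serves both groups.

```python
# Toy illustration of price discrimination with invented numbers.
# Two buyer segments with different willingness to pay; constant marginal cost.
wtp = {"high": 10.0, "low": 4.0}   # willingness to pay per buyer
size = {"high": 100, "low": 100}   # number of buyers in each segment
mc = 3.0                           # marginal cost per unit

def profit_uniform(price):
    """Profit when every buyer faces the same price."""
    buyers = sum(n for seg, n in size.items() if wtp[seg] >= price)
    return (price - mc) * buyers

# The best uniform price is one of the willingness-to-pay levels.
best_price = max(wtp.values(), key=profit_uniform)
print("Best uniform price:", best_price, "profit:", profit_uniform(best_price))
# -> the seller charges 10, serving only the high-WTP segment

# With segmented pricing, each group pays its own price and both are served.
profit_segmented = sum((wtp[seg] - mc) * size[seg] for seg in wtp)
print("Segmented-pricing profit:", profit_segmented, "(both segments served)")
```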

Antitrust law rightly condemns conduct only when it harms competition and not simply when it harms a competitor. This is because it is competition that enhances consumer welfare, not the presence or absence of a competitor — or, indeed, the profitability of competitors. The difficult task for antitrust enforcers is to determine when a vertically integrated firm with “market power” in an upstream market is able to effectively discriminate against rivals in a downstream market in a way that harms consumers.

Even assuming the claims of critics are true, alleged discrimination by Apple against competitor apps in the App Store may harm those competitors, but it doesn’t necessarily harm either competition or consumer welfare.

The three potential antitrust issues facing Apple can be summarized as:

  • prioritization of Apple’s own apps over third-party apps in App Store search results;
  • Apple’s entry into markets already served by popular third-party apps; and
  • the commission (the so-called “tax” or “surcharge”) Apple charges rival apps sold through the App Store.

There is nothing new here economically. All three issues are analogous to claims against other tech companies. But, as I detail below, the evidence to establish any of these claims at best represents harm to competitors, and fails to establish any harm to the competitive process or to consumer welfare.

Prioritization

Antitrust enforcers have rejected similar prioritization claims against Google. For instance, rivals like Microsoft and Yelp have funded attacks against Google, arguing that the search engine harms competition by prioritizing its own services over competitors’ in its search results. As ICLE and affiliated scholars have pointed out, though, there is nothing inherently harmful to consumers about such prioritization. There are also numerous benefits to platforms directly answering queries, even if doing so ends up directing users to platform-owned products or services.

As Geoffrey Manne has observed:

there is good reason to believe that Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to vigorously compete and to decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content to partially displace the original “ten blue links” design of its search results page and offer its own answers to users’ queries in its stead. 

Here, the antitrust case against Apple for prioritization is similarly flawed. For example, as noted in a recent article in the WSJ, users often use the App Store search in order to find apps they already have installed:

“Apple customers have a very strong connection to our products and many of them use search as a way to find and open their apps,” Apple said in a statement. “This customer usage is the reason Apple has strong rankings in search, and it’s the same reason Uber, Microsoft and so many others often have high rankings as well.” 

If a substantial portion of searches within the App Store are for apps already on the iPhone, then showing the Apple app near the top of the search results could easily be consumer welfare-enhancing. 

Apple is also, in theory, leaving money on the table by prioritizing its (already pre-loaded) apps over third-party apps. If its algorithm promotes Apple’s own apps over third-party apps that would earn it a 30% commission — forgoing that additional revenue — the prioritization can’t plausibly be characterized as a straightforward “benefit” to Apple. Apple is ultimately in the business of selling hardware. Losing iPhone or iPad customers by prioritizing apps consumers want less would not be a winning business strategy.

Further, it stands to reason that those who use an iPhone may have a preference for Apple apps. Such consumers would be naturally better served by seeing Apple’s apps prioritized over third-party developer apps. And if consumers do not prefer Apple’s apps, rival apps are merely seconds of scrolling away.

Moreover, all of the above assumes that Apple is engaging in sufficiently pervasive discrimination through prioritization to have a major impact on the app ecosystem. But substantial evidence exists that the universe of searches for which Apple’s algorithm prioritizes Apple apps is small. For instance, most searches are for branded apps already known by the searcher:

Keywords: how many are brands?

  • Top 500: 58.4%
  • Top 400: 60.75%
  • Top 300: 68.33%
  • Top 200: 80.5%
  • Top 100: 86%
  • Top 50: 90%
  • Top 25: 92%
  • Top 10: 100%

This is corroborated by data from the NYT’s own study, which suggests Apple prioritized its own apps first for only roughly 1% of the overall keywords queried. 

Whatever the precise extent of the prioritization, any claims of harm are undermined by the reality that almost 99% of App Store results don’t list Apple apps first. 

The fact is, very few keyword searches are even allegedly affected by prioritization. And the algorithm is often responding to searches for apps already pre-loaded on the device. Under these circumstances, it is very difficult to conclude that consumers are being harmed by prioritization in the App Store’s search results.

Entry

The issue of Apple building apps to compete with popular apps in its marketplace is similar to complaints about Amazon creating its own brands to compete with what is sold by third parties on its platform. For instance, as reported multiple times in the Washington Post:

Clue, a popular app that women use to track their periods, recently rocketed to the top of the App Store charts. But the app’s future is now in jeopardy as Apple incorporates period and fertility tracking features into its own free Health app, which comes preinstalled on every device. Clue makes money by selling subscriptions and services in its free app. 

However, there is nothing inherently anticompetitive about retailers selling their own brands. If anything, entry into the market is normally procompetitive. As Randy Picker recently noted with respect to similar claims against Amazon: 

The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws… I think that is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to build-or-bought first-party inventory. We have entire bodies of law— copyright, patent, trademark and more—that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

If anything, Apple is in an even better position than Amazon. Apple invests revenue in app development, not because the apps themselves generate revenue, but because it wants people to use the hardware, i.e. the iPhones, iPads, and Apple Watches. The reason Apple created an App Store in the first place is because this allows Apple to make more money from selling devices. In order to promote security on those devices, Apple institutes rules for the App Store, but it ultimately decides whether to create its own apps and provide access to other apps based upon its desire to maximize the value of the device. If Apple chooses to create free apps in order to improve iOS for users and sell more hardware, it is not a harm to competition.

Apple’s ability to enter into popular app markets should not be constrained unless it can be shown that by giving consumers another choice, consumers are harmed. As noted above, most searches in the App Store are for branded apps to begin with. If consumers already know what they want in an app, it hardly seems harmful for Apple to offer — and promote — its own, additional version as well. 

In the case of Clue, if Apple creates a free health app, it may hurt sales for Clue. But it doesn’t hurt consumers who want the functionality and would prefer to get it from Apple for free. This sort of product evolution is not harming competition, but enhancing it. And, it must be noted, Apple doesn’t exclude Clue from its devices. If, indeed, Clue offers a better product, or one that some users prefer, they remain able to find it and use it.

The so-called App Store “Tax”

The argument that Apple has an unfair competitive advantage over rival apps, which have to pay commissions to Apple to be in the App Store (a “tax” or “surcharge”), has similarly produced no evidence of harm to consumers. 

Apple invested a great deal in building the iPhone and the App Store. This infrastructure has created an incredibly lucrative marketplace for app developers to exploit. And, lest we forget a point fundamental to our legal system, Apple’s App Store is its property.

The WSJ and NYT stories give the impression that Apple uses its commissions on third-party apps to reduce competition for its own apps. However, this is inconsistent with how Apple charges its commission.

For instance, Apple doesn’t charge commissions on free apps, which make up 84% of the App Store. Apple also doesn’t charge commissions for apps that are free to download but supported by advertising — including hugely popular apps like Yelp, Buzzfeed, Instagram, Pinterest, Twitter, and Facebook. Even “reader” apps — where users purchase or subscribe to content outside the app but use the app to access that content — are not subject to commissions; examples include Spotify, Netflix, Amazon Kindle, and Audible. Apps for “physical goods and services” — like Amazon, Airbnb, Lyft, Target, and Uber — are also free to download and are not subject to commissions. The classes of apps that are subject to a 30% commission include:

  • paid apps (like many games),
  • free apps with in-app purchases (other games and services like Skype and TikTok),
  • free apps with digital subscriptions (Pandora and Hulu, which pay a 30% commission in the first year and 15% in subsequent years), and
  • cross-platform apps (Dropbox, Hulu, and Minecraft) that allow digital goods and services to be purchased in-app; Apple collects a commission on in-app sales, but not on sales from other platforms. 

Despite protestations to the contrary, these costs are hardly unreasonable: third-party apps receive the benefit not only of being in Apple’s App Store (without which they wouldn’t have any opportunity to earn revenue from sales on Apple’s platform), but also of the features and other investments Apple continues to pour into its platform — investments that make the ecosystem better for consumers and app developers alike. There is enormous value to the platform Apple has invested in, and a great deal of it is willingly shared with developers and consumers. Asking those who use the platform to pay for it is not anticompetitive. 

In fact, these benefits are probably even more important for smaller developers than for bigger ones that can invest in the necessary back end to reach consumers without the App Store, like Netflix, Spotify, and Amazon Kindle. For apps without brand reputation (and giant marketing budgets), consumers’ ability to trust that downloading the app will not lead to the installation of malware (as often occurs when downloading from the web) is surely essential to small developers’ ability to compete. The App Store offers this.

Despite the claims made in Spotify’s complaint against Apple, Apple doesn’t have a duty to deal with app developers. Indeed, Apple could theoretically fill the App Store with only apps that it developed itself, like Apple Music. Instead, Apple has opted for a platform business model, which entails the creation of a new outlet for others’ innovation and offerings. This is pro-consumer in that it created an entire marketplace that consumers probably didn’t even know they wanted — and certainly had no means to obtain — until it existed. Spotify, which out-competed iTunes to the point that Apple had to go back to the drawing board and create Apple Music, cannot realistically complain that Apple’s entry into music streaming is harmful to competition. Rather, it is precisely what vigorous competition looks like: the creation of more product innovation, lower prices, and arguably (at least for some) higher quality.

Interestingly, Spotify is not even subject to the App Store commission. Instead, Spotify offers iPhone users a workaround to obtain its ad-free premium version on iOS. What Spotify actually desires is the ability to sell premium subscriptions to Apple device users without paying anything above the de minimis up-front cost to Apple for the creation and maintenance of the App Store. It is unclear how many potential Spotify users are affected by the inability to buy the ad-free version directly, since Spotify discontinued offering it within the App Store. But, whatever the potential harm to Spotify itself, there’s little reason to think consumers or competition bear any of it. 

Conclusion

There is no evidence that Apple’s alleged “discrimination” against rival apps harms consumers. Indeed, the opposite would seem to be the case. The regulatory discrimination against successful tech platforms like Apple and the App Store is far more harmful to consumers.

Why Data Is Not the New Oil

Alec Stapp —  8 October 2019

“Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (hence why data is traded for free services and oil still requires cold, hard cash).

5. Oil is a search good; data is an experience good

Oil is a search good, meaning its value can be assessed prior to purchasing. By contrast, data tends to be an experience good because companies don’t know how much a new dataset is worth until it has been combined with pre-existing datasets and deployed using algorithms (from which value is derived). This is one reason why purpose limitation rules can have unintended consequences. If firms are unable to predict what data they will need in order to develop new products, then restricting what data they’re allowed to collect is per se anti-innovation.

6. Oil has constant returns to scale; data has rapidly diminishing returns

As an energy input into a mechanical process, oil has relatively constant returns to scale (e.g., when oil is used as the fuel source to power a machine). When data is used as an input for an algorithm, it shows rapidly diminishing returns, as the charts collected in a presentation by Google’s Hal Varian demonstrate. The initial training data is hugely valuable for increasing an algorithm’s accuracy. But as you increase the dataset by a fixed amount each time, the improvements steadily decline (because new data is only helpful insofar as it is differentiated from the existing dataset).
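
A back-of-the-envelope sketch, entirely my own rather than Varian’s data, shows the flavor of this: each additional, equally sized block of observations shrinks the expected error of a simple estimate, but by less and less.

```python
# Minimal illustration of diminishing returns to data (my own toy example):
# the expected error of a simple estimate falls roughly with 1/sqrt(n), so
# each additional block of observations buys a smaller improvement.
import numpy as np

p = 0.7            # quantity being estimated (e.g., a click-through rate)
block = 10_000     # fixed amount of new data added at each step

prev = None
for k in range(1, 9):
    n = k * block
    se = np.sqrt(p * (1 - p) / n)            # expected estimation error
    gain = "" if prev is None else f"   improvement: {prev - se:.5f}"
    print(f"n={n:>6}   std. error={se:.5f}{gain}")
    prev = se
```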

7. Oil is valuable; data is worthless

The features detailed above — rivalrousness, fungibility, marginal cost, returns to scale — all lead to perhaps the most important distinction between oil and data: The average barrel of oil is valuable (currently $56.49) and the average dataset is worthless (on the open market). As Will Rinehart showed, putting a price on data is a difficult task. But when data brokers and other intermediaries in the digital economy do try to value data, the prices are almost uniformly low. The Financial Times had the most detailed numbers on what personal data is sold for in the market:

  • “General information about a person, such as their age, gender and location is worth a mere $0.0005 per person, or $0.50 per 1,000 people.”
  • “A person who is shopping for a car, a financial product or a vacation is more valuable to companies eager to pitch those goods. Auto buyers, for instance, are worth about $0.0021 a pop, or $2.11 per 1,000 people.”
  • “Knowing that a woman is expecting a baby and is in her second trimester of pregnancy, for instance, sends the price tag for that information about her to $0.11.”
  • “For $0.26 per person, buyers can access lists of people with specific health conditions or taking certain prescriptions.”
  • “The company estimates that the value of a relatively high Klout score adds up to more than $3 in word-of-mouth marketing value.”
  • “[T]he sum total for most individuals often is less than a dollar.”

Data is a specific asset, meaning it has “a significantly higher value within a particular transacting relationship than outside the relationship.” We only think data is so valuable because tech companies are so valuable. In reality, it is the combination of high-skilled labor, large capital expenditures, and cutting-edge technologies (e.g., machine learning) that makes those companies so valuable. Yes, data is an important component of these production functions. But to claim that data is responsible for all the value created by these businesses, as Lanier does in his NYT op-ed, is farcical (and reminiscent of the labor theory of value). 

Conclusion

People who analogize data to oil or gold may merely be trying to convey that data is as valuable in the 21st century as those commodities were in the 20th century (though, as argued above, even that is a dubious proposition). If the comparison stopped there, it would be relatively harmless. But there is a real risk that policymakers might take the analogy literally and regulate data in the same way they regulate commodities. As this article shows, data has many unique properties that are simply incompatible with 20th-century modes of regulation.

A better — though imperfect — analogy, as author Bernard Marr suggests, would be renewable energy. The sources of renewable energy are all around us — solar, wind, hydroelectric — and there is more available than we could ever use. We just need the right incentives and technology to capture it. The same is true for data. We leave our digital fingerprints everywhere — we just need to dust for them.

Source: New York Magazine

When she rolled out her plan to break up Big Tech, Elizabeth Warren paid for ads (like the one shown above) claiming that “Facebook and Google account for 70% of all internet traffic.” This statistic has since been repeated in various forms by Rolling Stone, Vox, National Review, and Washingtonian. In my last post, I fact checked this claim and found it wanting.

Warren’s data

As supporting evidence, Warren cited a Newsweek article from 2017, which in turn cited a blog post from an open-source freelancer, who was aggregating data from a 2015 blog post published by Parse.ly, a web analytics company, which said: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.” At the time, Parse.ly had “around 400 publisher domains” in its network. To put it lightly, this is not what it means to “account for” or “control” or “directly influence” 70 percent of all internet traffic, as Warren and others have claimed.

Internet traffic measured in bytes

In an effort to contextualize how extreme Warren’s claim was, in my last post I used a common measure of internet traffic — total volume in bytes — to show that Google and Facebook account for less than 20 percent of global internet traffic. Some Warren defenders have correctly pointed out that measuring internet traffic in bytes will weight the results toward data-heavy services, such as video streaming. It’s not obvious a priori, however, whether this would bias the results in favor of Facebook and Google or against them, given that users stream lots of video using those companies’ sites and apps (hello, YouTube).

Internet traffic measured by time spent by users

As I said in my post, there are multiple ways to measure total internet traffic, and no one of them is likely to offer a perfect measure. So, to get a fuller picture, we could also look at how users are spending their time on the internet. While there is no single source for global internet time use statistics, we can combine a few to reach an estimate (NB: this analysis includes time spent in apps as well as on the web). 

According to the Global Digital report by Hootsuite and We Are Social, in 2018 there were 4.021 billion active internet users, and the worldwide average for time spent using the internet was 6 hours and 42 minutes per day. That means there were 1,616 billion internet user-minutes per day.

Data from Apptopia shows that, in the three months from May through July 2018, users spent 300 billion hours in Facebook-owned apps and 118 billion hours in Google-owned apps. In other words, all Facebook-owned apps consume, on average, 197 billion user-minutes per day and all Google-owned apps consume, on average, 78 billion user-minutes per day. And according to SimilarWeb data for the three months from June to August 2019, web users spent 11 billion user-minutes per day visiting Facebook domains (facebook.com, whatsapp.com, instagram.com, messenger.com) and 52 billion user-minutes per day visiting Google domains, including google.com (and all subdomains) and youtube.com.

If you add up all app and web user-minutes for Google and Facebook, the total is 338 billion user-minutes per day. A staggering number. But as a share of all internet traffic (in this case measured in terms of time spent)? Google- and Facebook-owned sites and apps account for about 21 percent of user-minutes.
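
For transparency, the arithmetic behind that estimate, using only the figures cited above (and an approximate 91-day window for the three-month app data), is reproduced below.

```python
# Rough reproduction of the time-spent arithmetic above, using the figures
# cited from Hootsuite/We Are Social, Apptopia, and SimilarWeb.
users = 4.021e9                      # active internet users, 2018
minutes_per_user = 6 * 60 + 42       # 6h42m of internet use per day
total_user_minutes = users * minutes_per_user        # ~1,616 billion per day

days = 91                            # approximate length of a three-month window
fb_apps = 300e9 * 60 / days          # Facebook-owned apps: ~197 billion/day
goog_apps = 118e9 * 60 / days        # Google-owned apps:   ~78 billion/day
fb_web = 11e9                        # Facebook domains (SimilarWeb), per day
goog_web = 52e9                      # Google domains (SimilarWeb), per day

combined = fb_apps + goog_apps + fb_web + goog_web    # ~338 billion per day
print(f"Total internet user-minutes per day: {total_user_minutes / 1e9:,.0f} billion")
print(f"Google + Facebook user-minutes per day: {combined / 1e9:,.0f} billion")
print(f"Share of time spent: {combined / total_user_minutes:.1%}")
```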

Internet traffic measured by “connections”

In my last post, I cited a Sandvine study that measured total internet traffic by volume of upstream and downstream bytes. The same report also includes numbers for what Sandvine calls “connections,” which is defined as “the number of conversations occurring for an application.” Sandvine notes that while “some applications use a single connection for all traffic, others use many connections to transfer data or video to the end user.” For example, a video stream on Netflix uses a single connection, while every item on a webpage, such as loading images, may require a distinct connection.

Cam Cullen, Sandvine’s VP of marketing, also implored readers to “never forget Google connections include YouTube, Search, and DoubleClick — all of which are very noisy applications and universally consumed,” which would bias this statistic toward inflating Google’s share. With these caveats in mind, Sandvine’s data shows that Google is responsible for 30 percent of these connections, while Facebook is responsible for under 8 percent of connections. Note that Netflix’s share is less than 1 percent, which implies this statistic is not biased toward data-heavy services. Again, the numbers for Google and Facebook are a far cry from what Warren and others are claiming.

Source: Sandvine

Internet traffic measured by sources

I’m not sure whether either of these measures is preferable to the one I offered in my original post, but each is at least a plausible measure of internet traffic — and all of them fall well short of Warren’s claimed 70 percent. What I do know is that the preferred metric offered by the people most critical of my post — external referrals to online publishers (content sites) — is decidedly not a plausible measure of internet traffic.

In defense of Warren, Jason Kint, the CEO of a trade association for digital content publishers, wrote, “I just checked actual benchmark data across our members (most publishers) and 67% of their external traffic comes through Google or Facebook.” Rand Fishkin cites his own analysis of data from Jumpshot showing that 66.0 percent of external referral visits were sent by Google and 5.1 percent were sent by Facebook.

In another response to my piece, former digital advertising executive Dina Srinivasan said, “[Percentage] of referrals is relevant because it is pointing out that two companies control a large [percentage] of business that comes through their door.” 

In my opinion, equating “external referrals to publishers” with “internet traffic” is unacceptable for at least two reasons.

First, the internet is much broader than traditional content publishers — it encompasses everything from email and Yelp to TikTok, Amazon, and Netflix. The relevant market is consumer attention and, in that sense, every internet supplier is bidding for scarce time. In a recent investor letter, Netflix said, “We compete with (and lose to) ‘Fortnite’ more than HBO,” adding: “There are thousands of competitors in this highly fragmented market vying to entertain consumers and low barriers to entry for those great experiences.” Previously, CEO Reed Hastings had only half-jokingly said, “We’re competing with sleep on the margin.” In this debate over internet traffic, the opposing side fails to grasp the scope of the internet market. It is unsurprising, then, that the metric that does best at capturing attention — time spent — yields roughly the same share as measuring by bytes.

Second, and perhaps more important, even if we limit our analysis to publisher traffic, the external referral statistic these critics cite completely (and conveniently?) omits direct and internal traffic — traffic that represents the majority of publisher traffic. In fact, according to Parse.ly’s most recent data, which now includes more than 3,000 “high-traffic sites,” only 35 percent of total traffic comes from search and social referrers (as the graph below shows). Of course, Google and Facebook drive the majority of search and social referrals. But given that most users visit webpages without being referred at all, Google and Facebook are responsible for less than a third of total traffic.

Source: Parse.ly

It is simply incorrect to say, as Srinivasan does, that external referrals offer a useful measurement of internet traffic because they capture a “large [percentage] of business that comes through [publishers’] door.” Well, “large” is relative, but the implication that these external referrals from Facebook and Google explain Warren’s 70%-of-internet-traffic claim is both factually incorrect and horribly misleading — especially in an antitrust context. 

It is factually incorrect because, at most, Google and Facebook are responsible for a third of the traffic on these sites; it is misleading because if our concern is ensuring that users can reach content sites without passing through Google or Facebook, the evidence is clear that they can and do — at least twice as often as they follow links from Google or Facebook to do so.

Conclusion

As my colleague Gus Hurwitz said, Warren is making a very specific and very alarming claim: 

There may be ‘softer’ versions of [Warren’s claim] that are reasonably correct (e.g., digital ad revenue, visibility into traffic). But for 99% of people hearing (and reporting on) these claims, they hear the hard version of the claim: Google and Facebook control 70% of what you do online. That claim is wrong, alarmist, misinformation, intended to foment fear, uncertainty, and doubt — to bootstrap the argument that ‘everything is terrible, worse, really!, and I’m here to save you.’ This is classic propaganda.

Google and Facebook do account for a 59 percent (and declining) share of US digital advertising. But that’s not what Warren said (nor would anyone try to claim with a straight face that “volume of advertising” was the same thing as “internet traffic”). And if our concern is with competition, it’s hard to look at the advertising market and conclude that it’s got a competition problem. Prices are falling like crazy (down 42 percent in the last decade), and volume is only increasing. If you add in offline advertising (which, whatever you think about market definition here, certainly competes with online advertising at the very least on some dimensions) Google and Facebook are responsible for only about 32 percent.

In her comments criticizing my article, Dina Srinivasan mentioned another of these “softer” versions:

Also, each time a publisher page loads, what [percentage] then queries Google or Facebook servers during the page loads? About 98+% of every page load. That stat is not even in Warren or your analysis. That is 1000% relevant.

It’s true that Google and Facebook have visibility into a great deal of all internet traffic (beyond their own) through a variety of products and services: browsers, content delivery networks (CDNs), web beacons, cloud computing, VPNs, data brokers, single sign-on (SSO), and web analytics services. But seeing internet traffic is not the same thing as “account[ing] for” — or controlling or even directly influencing — internet traffic. The former is a very different claim from the latter, and one with considerably more attenuated competitive relevance (if any). It certainly wouldn’t be a sufficient basis for advocating that Google and Facebook be broken up — which is probably why, although arguably accurate, it’s not the statistic upon which Warren based her proposal to do so.

In March of this year, Elizabeth Warren announced her proposal to break up Big Tech in a blog post on Medium. She tried to paint the tech giants as dominant players crushing their smaller competitors and strangling the open internet. This line in particular stood out: “More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook.”

This statistic immediately struck me as outlandish, but I knew I would need to do some digging to fact check it. After seeing the claim repeated in a recent profile of the Open Markets Institute — “Google and Facebook control websites that receive 70 percent of all internet traffic” — I decided to track down the original source for this surprising finding. 

Warren’s blog post links to a November 2017 Newsweek article — “Who Controls the Internet? Facebook and Google Dominance Could Cause the ‘Death of the Web’” — written by Anthony Cuthbertson. The piece is even more alarmist than Warren’s blog post: “Facebook and Google now have direct influence over nearly three quarters of all internet traffic, prompting warnings that the end of a free and open web is imminent.”

The Newsweek article, in turn, cites an October 2017 blog post by André Staltz, an open source freelancer, on his personal website titled “The Web began dying in 2014, here’s how”. His takeaway is equally dire: “It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.” Staltz claims the blog post took “months of research to write”, but the headline statistic is merely aggregated from a December 2015 blog post by Parse.ly, a web analytics and content optimization software company.

Source: André Staltz

The Parse.ly article — “Facebook Continues to Beat Google in Sending Traffic to Top Publishers” — is about external referrals (i.e., outside links) to publisher sites (not total internet traffic) and says the “data set used for this study included around 400 publisher domains.” This is not even a random sample, much less a comprehensive measure of total internet traffic. Here’s how they summarize their results: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.” 

Source: Parse.ly

So, using the sources provided by the respective authors, the claim from Elizabeth Warren that “more than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook” can be more accurately rewritten as “more than 70 percent of external links to 400 publishers come from sites owned or operated by Google and Facebook.” When framed that way, it’s much less conclusive (and much less scary).

But what’s the real statistic for total internet traffic? This is a surprisingly difficult question to answer, because there is no single way to measure it: Are we talking about share of users, or user-minutes, or bits, or total visits, or unique visits, or referrals? According to Wikipedia, “Common measurements of traffic are total volume, in units of multiples of the byte, or as transmission rates in bytes per certain time units.”

One of the more comprehensive efforts to answer this question is undertaken annually by Sandvine. The networking equipment company uses its vast installed footprint of equipment across the internet to generate statistics on connections, upstream traffic, downstream traffic, and total internet traffic (summarized in the table below). This dataset covers both browser-based and app-based internet traffic, which is crucial for capturing the full picture of internet user behavior.

Source: Sandvine

Looking at two categories of traffic analyzed by Sandvine — downstream traffic and overall traffic — gives the lie to the narrative pushed by Warren and others. As you can see in the chart below, HTTP media streaming — a category for smaller streaming services that Sandvine has not yet tracked individually — represented 12.8% of global downstream traffic, and Netflix accounted for 12.6%. According to Sandvine, “the aggregate volume of the long tail is actually greater than the largest of the short-tail providers.” So much for the open internet being smothered by the tech giants.

Source: Sandvine

As for Google and Facebook? The report found that Google-operated sites receive 12.00 percent of total internet traffic while Facebook-controlled sites receive 7.79 percent. In other words, less than 20 percent of all Internet traffic goes through sites owned or operated by Google or Facebook. While this statistic may be less eye-popping than the one trumpeted by Warren and other antitrust activists, it does have the virtue of being true.

Source: Sandvine

On March 19-20, 2020, the University of Nebraska College of Law will be hosting its third annual roundtable on closing the digital divide. UNL is expanding its program this year to include a one-day roundtable that focuses on the work of academics and researchers who are conducting empirical studies of the rural digital divide. 

Academics and researchers interested in having their work featured in this event are now invited to submit pieces for consideration. Submissions are due by November 18, 2019, and should be made using this form. The authors of papers and projects selected for inclusion will be notified by December 9, 2019. Research honoraria of up to $5,000 may be awarded for selected projects.

Example topics include cost studies of rural wireless deployments, comparative studies of the effects of ACAM funding, event studies of legislative interventions such as allowing customers unserved by carriers in their home exchange to request service from carriers in adjoining exchanges, comparative studies of the effectiveness of various federal and state funding mechanisms, and cost studies of different sorts of municipal deployments. This list is far from exhaustive.

Any questions about this event or the request for projects can be directed to Gus Hurwitz at ghurwitz@unl.edu or Elsbeth Magilton at elsbeth@unl.edu.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest-margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization, with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But the facts may not fit the theory.
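
For readers who want the textbook logic, here is a standard double-marginalization example with invented numbers (linear demand, one upstream and one downstream monopolist). It illustrates only the theory the folklore relies on, not PepsiCo’s actual economics.

```python
# Textbook double-marginalization example with linear demand P = a - b*Q.
# All numbers are hypothetical and only illustrate the theory cited above.
a, b = 10.0, 1.0     # demand intercept and slope
c = 2.0              # upstream marginal cost (e.g., the syrup)

# Integrated firm: choose Q to maximize (a - b*Q - c) * Q  =>  Q = (a - c) / (2b)
q_int = (a - c) / (2 * b)
p_int = a - b * q_int
profit_int = (p_int - c) * q_int

# Separate firms: downstream takes wholesale price w and picks Q = (a - w) / (2b);
# upstream anticipates this and sets w = (a + c) / 2.
w = (a + c) / 2
q_sep = (a - w) / (2 * b)
p_sep = a - b * q_sep
profit_total_sep = (w - c) * q_sep + (p_sep - w) * q_sep

print(f"Integrated: price={p_int:.2f}, quantity={q_int:.2f}, profit={profit_int:.2f}")
print(f"Separate:   price={p_sep:.2f}, quantity={q_sep:.2f}, "
      f"combined profit={profit_total_sep:.2f}")
# Integration yields a lower retail price, greater output, and higher total profit.
```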

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurant strategy were a failure, it seems odd that the company would continue making acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores already served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged-buyout era added financial pressure. Many restaurant groups filed for bankruptcy, and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.