

This is the first in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision. It draws on research from a soon-to-be published ICLE white paper.

The European Commission’s recent Google Android decision will surely go down as one of the most important competition proceedings of the past decade. And yet, an in-depth reading of the 328-page decision should leave attentive readers with a bitter taste.

One of the Commission’s most significant findings is that the Android operating system and Apple’s iOS are not in the same relevant market, along with the related conclusion that Apple’s App Store and Google Play are also in separate markets.

This blog post points to a series of flaws that undermine the Commission’s reasoning on this point. As a result, the Commission’s claim that Google and Apple operate in separate markets is mostly unsupported.

1. Everyone but the European Commission thinks that iOS competes with Android

Surely the assertion that the two predominant smartphone ecosystems in Europe don’t compete with each other will come as a surprise to… anyone paying attention: 


Apple 10-K:

The Company believes the availability of third-party software applications and services for its products depends in part on the developers’ perception and analysis of the relative benefits of developing, maintaining and upgrading such software and services for the Company’s products compared to competitors’ platforms, such as Android for smartphones and tablets and Windows for personal computers.

Google 10-K:

We face competition from: Companies that design, manufacture, and market consumer electronics products, including businesses that have developed proprietary platforms.

This leads to a critical question: Why did the Commission choose to depart from the instinctive conclusion that Google and Apple compete vigorously against each other in the smartphone and mobile operating system market? 

As explained below, its justifications for doing so were deeply flawed.

2. It does not matter that OEMs cannot license iOS (or the App Store)

One of the main reasons why the Commission chose to exclude Apple from the relevant market is that OEMs cannot license Apple’s iOS or its App Store.

But is it really possible to infer that Google and Apple do not compete against each other because their products are not substitutes from OEMs’ point of view? 

The answer to this question is likely no.

Relevant markets, and market shares, are merely a proxy for market power (which is the appropriate baseline upon which to build a competition investigation). As Louis Kaplow puts it:

[T]he entire rationale for the market definition process is to enable an inference about market power.

If there is a competitive market for Android and Apple smartphones, then it is somewhat immaterial that Google is the only firm to successfully offer a licensable mobile operating system (as opposed to Apple and Blackberry’s “closed” alternatives).

By exercising its “power” against OEMs by, for instance, degrading the quality of Android, Google would, by the same token, weaken its competitive position against Apple. Google’s competition with Apple in the smartphone market thus constrains Google’s behavior and limits its market power in Android-specific aftermarkets (on this topic, see Borenstein et al., and Klein).

This is not to say that Apple’s iOS (and App Store) is, or is not, in the same relevant market as Google Android (and Google Play). But the fact that OEMs cannot license iOS or the App Store is mostly immaterial for market definition purposes.

3. Google would find itself in a more “competitive” market if it decided to stop licensing the Android OS

The Commission’s reasoning also leads to illogical outcomes from a policy standpoint. 

Google could suddenly find itself in a more “competitive” market if it decided to stop licensing the Android OS and operated a closed platform (like Apple does). The direct purchasers of its products – consumers – would then be free to switch between Apple and Google’s products.

As a result, an act that has no obvious effect on actual market power — and that could have a distinctly negative effect on consumers — could nevertheless significantly alter the outcome of competition proceedings on the Commission’s theory. 

One potential consequence is that firms might decide to close their platforms (or refuse to open them in the first place) in order to avoid competition scrutiny (because maintaining a closed platform might effectively lead competition authorities to place them within a wider relevant market). This might ultimately reduce product differentiation among mobile platforms (due to the disappearance of open ecosystems) – the exact opposite of what the Commission sought to achieve with its decision.

This is, among other things, what Antonin Scalia objected to in his Eastman Kodak dissent: 

It is quite simply anomalous that a manufacturer functioning in a competitive equipment market should be exempt from the per se rule when it bundles equipment with parts and service, but not when it bundles parts with service [when the manufacturer has a high share of the “market” for its machines’ spare parts]. This vast difference in the treatment of what will ordinarily be economically similar phenomena is alone enough to call today’s decision into question.

4. Market shares are a poor proxy for market power, especially in narrowly defined markets

Finally, the problem with the Commission’s decision is not so much that it chose to exclude Apple from the relevant markets, but that it then cited the resulting market shares as evidence of Google’s alleged dominance:

(440) Google holds a dominant position in the worldwide market (excluding China) for the licensing of smart mobile OSs since 2011. This conclusion is based on: 

(1) the market shares of Google and competing developers of licensable smart mobile OSs […]

In doing so, the Commission ignored one of the critical findings of the law & economics literature on market definition and market power: Although defining a narrow relevant market may not itself be problematic, the market shares thus adduced provide little information about a firm’s actual market power. 

For instance, Richard Posner and William Landes have argued that:

If instead the market were defined narrowly, the firm’s market share would be larger but the effect on market power would be offset by the higher market elasticity of demand; when fewer substitutes are included in the market, substitution of products outside of the market is easier. […]

If all the submarket approach signifies is willingness in appropriate cases to call a narrowly defined market a relevant market for antitrust purposes, it is unobjectionable – so long as appropriately less weight is given to market shares computed in such a market.
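Landes and Posner’s point has a standard formalization in their Market Power in Antitrust Cases article. In their framework, a firm’s markup (its Lerner index) depends not only on its market share but also on the market elasticity of demand and the supply elasticity of rival firms:

$$ L_i = \frac{P - MC_i}{P} = \frac{S_i}{\varepsilon_d^m + \varepsilon_s^f\,(1 - S_i)} $$

Here $S_i$ is the firm’s market share, $\varepsilon_d^m$ is the market elasticity of demand, and $\varepsilon_s^f$ is the supply elasticity of competing firms. Drawing the market more narrowly raises $S_i$, but it also raises $\varepsilon_d^m$ (more substitutes now fall outside the market), so the inference of market power need not change; this is exactly the offset the quoted passage describes.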

Likewise, Louis Kaplow observes that:

In choosing between a narrower and a broader market (where, as mentioned, we are supposing that the truth lies somewhere in between), one would ask whether the inference from the larger market share in the narrower market overstates market power by more than the inference from the smaller market share in the broader market understates market power. If the lesser error lies with the former choice, then the narrower market is the relevant market; if the latter minimizes error, then the broader market is best.

The Commission failed to heed these important findings.

5. Conclusion

The upshot is that Apple should not have been automatically excluded from the relevant market. 

To be clear, the Commission did discuss this competition from Apple later in the decision. And it also asserted that its findings would hold even if Apple were included in the OS and App Store markets, because Android’s share of devices sold would have ranged from 45% to 79%, depending on the year (although this ignores other potential metrics, such as the value of devices sold or Google’s share of advertising revenue).

However, by gerrymandering the market definition (which European case law likely permitted it to do), the Commission ensured that Google would face an uphill battle, starting from a very high market share and thus a strong presumption of dominance. 

Moreover, that it might reach the same result by adopting a more accurate market definition is no excuse for adopting a faulty one and resting its case (and undertaking its entire analysis) on it. In fact, the Commission’s choice of a faulty market definition underpins its entire analysis, and is far from a “harmless error.” 

I shall discuss the consequences of this error in an upcoming blog post. Stay tuned.

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society,  New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning. 

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought from the New Deal through the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and its close with the financial crisis of the late-2000s. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia has harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though it has been profitable for some time), on the theory that someday down the line it can raise prices after driving all retail competition out of the market.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary where antitrust plaintiffs can show evidence that harm to consumers is likely to result from a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he points to the greater attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. But neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to that of the United States.

In the case of airline mergers, Appelbaum argues that the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement, and that prices have stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, “Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers,” takes it a step further. Data from recent legacy U.S. airline mergers appear to show that the mergers produced pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts Europe’s lower broadband prices as evidence of better competition policy in telecommunications than in the United States. While broadband prices are lower on average in Europe, this obscures how prices are distributed across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, “U.S. vs. European Broadband Deployment: What Do the Data Say?”, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer together people live, the easier it is to build out infrastructure like broadband Internet, and the United States is considerably more rural than most European countries. As a result, comparisons of price and speed need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th place among the 29 countries studied (most of them European) once population density and income are taken into account for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th of the 29 countries, if data usage is also included (Model 3), and ranks 7th if content quality (i.e., websites available in the local language) is taken into consideration instead (Model 4).

| Country | Model 1 Price (Rank) | Model 2 Price (Rank) | Model 3 Price (Rank) | Model 4 Price (Rank) |
|---|---|---|---|---|
| Australia | $78.30 (28) | $82.81 (27) | $102.63 (26) | $84.45 (23) |
| Austria | $48.04 (17) | $60.59 (15) | $73.17 (11) | $74.02 (17) |
| Belgium | $46.82 (16) | $66.62 (21) | $75.29 (13) | $81.09 (22) |
| Canada | $69.66 (27) | $74.99 (25) | $92.73 (24) | $76.57 (19) |
| Chile | $33.42 (8) | $73.60 (23) | $83.81 (20) | $88.97 (25) |
| Czech Republic | $26.83 (3) | $49.18 (6) | $69.91 (9) | $60.49 (6) |
| Denmark | $43.46 (14) | $52.27 (8) | $69.37 (8) | $63.85 (8) |
| Estonia | $30.65 (6) | $56.91 (12) | $81.68 (19) | $69.06 (12) |
| Finland | $35.00 (9) | $37.95 (1) | $57.49 (2) | $51.61 (1) |
| France | $30.12 (5) | $44.04 (4) | $61.96 (4) | $54.25 (3) |
| Germany | $36.00 (12) | $53.62 (10) | $75.09 (12) | $66.06 (11) |
| Greece | $35.38 (10) | $64.51 (19) | $80.72 (17) | $78.66 (21) |
| Iceland | $65.78 (25) | $73.96 (24) | $94.85 (25) | $90.39 (26) |
| Ireland | $56.79 (22) | $62.37 (16) | $76.46 (14) | $64.83 (9) |
| Italy | $29.62 (4) | $48.00 (5) | $68.80 (7) | $59.00 (5) |
| Japan | $40.12 (13) | $53.58 (9) | $81.47 (18) | $72.12 (15) |
| Latvia | $20.29 (1) | $42.78 (3) | $63.05 (5) | $52.20 (2) |
| Luxembourg | $56.32 (21) | $54.32 (11) | $76.83 (15) | $72.51 (16) |
| Mexico | $35.58 (11) | $91.29 (29) | $120.40 (29) | $109.64 (29) |
| Netherlands | $44.39 (15) | $63.89 (18) | $89.51 (21) | $77.88 (20) |
| New Zealand | $59.51 (24) | $81.42 (26) | $90.55 (22) | $76.25 (18) |
| Norway | $88.41 (29) | $71.77 (22) | $103.98 (27) | $96.95 (27) |
| Portugal | $30.82 (7) | $58.27 (13) | $72.83 (10) | $71.15 (14) |
| South Korea | $25.45 (2) | $42.07 (2) | $52.01 (1) | $56.28 (4) |
| Spain | $54.95 (20) | $87.69 (28) | $115.51 (28) | $106.53 (28) |
| Sweden | $52.48 (19) | $52.16 (7) | $61.08 (3) | $70.41 (13) |
| Switzerland | $66.88 (26) | $65.01 (20) | $91.15 (23) | $84.46 (24) |
| United Kingdom | $50.77 (18) | $63.75 (17) | $79.88 (16) | $65.44 (10) |
| United States | $58.00 (23) | $59.84 (14) | $64.75 (6) | $62.94 (7) |
| Average | $46.55 | $61.70 | $80.24 | $73.73 |

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There’s no escaping mental models to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law at the University of Southern California Gould School of Law.

It has become virtual received wisdom that antitrust law has been subdued by economic analysis into a state of chronic underenforcement. Following this line of thinking, many commentators applauded the Antitrust Division’s unsuccessful campaign to oppose the acquisition of Time-Warner by AT&T and some (unsuccessfully) urged the Division to take stronger action against the acquisition of most of Fox by Disney. The arguments in both cases followed a similar “big is bad” logic. Consolidating control of a large portfolio of creative properties (Fox plus Disney) or integrating content production and distribution capacities (Time-Warner plus AT&T) would exacerbate market concentration, leading to reduced competition and some combination of higher prices and reduced product for consumers. 

Less than 18 months after the closing of both transactions, those concerns seem to have been largely unwarranted. 

Far from precipitating any decline in product output or variety, both transactions have been followed by a vigorous burst of competition in the digital streaming market. In place of the Amazon-plus-Netflix bottleneck (with Hulu trailing behind), consumers now have, or will have in 2020, a choice of at least four new streaming services with original content: Disney+, AT&T’s “HBO Max,” Apple’s “Apple TV+,” and Comcast’s NBCUniversal “Peacock.” Critically, each service relies on a formidable combination of creative, financing, and technological capacities that can only be delivered by a firm of sufficiently large size and scale. As modern antitrust law has long recognized, it turns out that “big” is sometimes not bad.

Where’s the Harm?

At present, it is hard to see any net consumer harm arising from the concurrence of increased size and increased competition. 

On the supply side, this is just the next episode in the ongoing “Golden Age of Television” in which content producers have enjoyed access to exceptional funding to support high-value productions.  It has been reported that Apple TV+’s new “Morning Show” series will cost $15 million per episode while similar estimates are reported for hit shows such as HBO’s “Game of Thrones” and Netflix’s “The Crown.”  Each of those services is locked in a fierce competition to gain and retain sufficient subscribers to earn a return on those investments, which leads directly to the next happy development.

On the demand side, consumers enjoy a proliferating array of streaming services, ranging from free ad-supported services to subscription ad-free services. Consumers can now easily “cut the cord” and assemble a customized bundle of preferred content from multiple services, each of which is less costly than a traditional cable package and can generally be cancelled at any time.  Current market performance does not plausibly conform to the declining output, limited variety or increasing prices that are the telltale symptoms of a less than competitive market.

Real-World v. Theoretical Markets

The market’s favorable trajectory following these two controversial transactions should not be surprising. When scrutinized against the actual characteristics of real-world digital content markets, rather than stylized theoretical models or antiquated pre-digital content markets, the arguments leveled against these transactions never made much sense. There were two fundamental and related errors. 

Error #1: Content is Scarce

Advocates for antitrust intervention assumed that entry barriers into the content market were high, in which case it followed that the owner of an especially valuable creative portfolio could exert pricing power to consumers’ detriment. Yet, in reality, funding for content production is plentiful and even a service that has an especially popular show is unlikely to have sustained pricing power in the face of a continuous flow of high-value productions being released by formidable competitors. The amounts being spent on content in 2019 by leading streaming services are unprecedented, ranging from a reported $15 billion for Netflix to an estimated $6 billion for Amazon and Apple TV+ to an estimated $3.9 billion for AT&T’s HBO Max. It is also important to note that a hit show is often a mobile asset that a streaming or other video distribution service has licensed from independent production companies and other rights holders. Once the existing deal expires, those rights are available for purchase by the highest bidder. For example, in 2019, Netflix purchased the streaming rights to “Seinfeld”, Viacom purchased the cable rights to “Seinfeld”, and HBO Max purchased the streaming rights to “South Park.” Similarly, the producers behind a hit show are always free to take their talents to competitors once any existing agreement terminates.

Error #2: Home Pay-TV is a “Monopoly”

Advocates of antitrust action were looking at the wrong market—or more precisely, the market as it existed about a decade ago. The theory that AT&T’s acquisition of Time-Warner’s creative portfolio would translate into pricing power in the home pay-TV market might have been plausible when consumers had no reasonable alternative to the local cable provider. But this argument makes little sense today when consumers are fleeing bulky home pay-TV bundles for cheaper cord-cutting options that deliver more targeted content packages to a mobile device. In 2019, a “home” pay-TV market is fast becoming an anachronism and hence a home pay-TV “monopoly” largely reduces to a formalism that, with the possible exception of certain live programming, is unlikely to translate into meaningful pricing power.

Wait a Second! What About the HBO Blackout?

A skeptical reader might reasonably object that this mostly rosy account of the post-merger home video market is unpersuasive since it does not address the ongoing blackout of HBO (now an AT&T property) on the Dish satellite TV service. Post-merger commentary that remains skeptical of the AT&T/Time-Warner merger has focused on this dispute, arguing that it “proves” that the government was right since AT&T is purportedly leveraging its new ownership of HBO to disadvantage one of its competitors in the pay-TV market. This interpretation tends to miss the forest for the trees (or more precisely, a tree).  

The AT&T/Dish dispute over HBO is only one of over 200 “carriage” disputes resulting in blackouts that have occurred this year, which continues an upward trend since approximately 2011. Some of those include Dish’s dispute with Univision (settled in March 2019 after a nine-month blackout) and AT&T’s dispute (as pay-TV provider) with Nexstar (settled in August 2019 after a nearly two-month blackout). These disputes reflect the fact that the flood of subscriber defections from traditional pay-TV to mobile streaming has made it difficult for pay-TV providers to pass on the fees sought by content owners. As a result, some pay-TV providers adopt the negotiating tactic of choosing to drop certain content until the terms improve, just as AT&T, in its capacity as a pay-TV provider, dropped CBS for three weeks in July and August 2019 pending renegotiation of licensing terms. It is the outward shift in the boundaries of the economically relevant market (from home to home-plus-mobile video delivery), rather than market power concerns, that best accounts for periodic breakdowns in licensing negotiations.  This might even be viewed positively from an antitrust perspective since it suggests that the “over the top” market is putting pressure on the fees that content owners can extract from providers in the traditional pay-TV market.

Concluding Thoughts

It is common to argue today that antitrust law has become excessively concerned about “false positives”– that is, the possibility of blocking a transaction or enjoining a practice that would have benefited consumers. Pending future developments, this early post-mortem on the regulatory and judicial treatment of these two landmark media transactions suggests that there are sometimes good reasons to stay the hand of the court or regulator. This is especially the case when a generational market shift is in progress and any regulator’s or judge’s foresight is likely to be guesswork. Antitrust law’s “failure” to stop these transactions may turn out to have been a ringing success.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.

“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”

Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.

They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”

An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”

New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us.

Facebook, The New Republic declares, “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”

Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”

And then there’s Amazon. It’s too efficient. Alex Salkever worries in Fortune that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”

Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.

Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that—in his judgment—“enrich lives” and “strengthen society.”

As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.

The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting her to forego using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.

Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten-thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”

One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games might broaden their attention spans and improve their abstract thinking.

Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.

Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.

We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like”, and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.

One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.

There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.

Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)

The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.

In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned and priggish.

In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”

There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.

Wall Street Journal commentator Greg Ip reviews Thomas Philippon’s forthcoming book, The Great Reversal: How America Gave Up On Free Markets. Ip describes a “growing mountain” of research on industry concentration in the U.S. and reports that Philippon concludes competition has declined over time, harming U.S. consumers.

In one example, Philippon points to air travel. He notes that concentration in the U.S. has increased rapidly—spiking since the Great Recession—while concentration in the EU has increased modestly. At the same time, Ip reports “U.S. airlines are now far more profitable than their European counterparts.” (Although it’s debatable whether a five-percentage-point difference in net profit margin counts as “far more profitable.”)

On first impression, the figures fit nicely with the populist antitrust narrative: As concentration in the U.S. grew, so did profit margins. Closer inspection raises some questions, however. 

For example, the U.S. airline industry had a negative net profit margin in each of the years prior to the spike in concentration. While negative profits may be good for consumers, it would be a stretch to argue that long-run losses are good for competition as a whole. At some point one or more of the money losing firms is going to pull the ripcord. Which raises the issue of causation.

Just looking at the figures from the WSJ article, one could argue that rather than concentration driving profit margins, instead profit margins are driving concentration. Indeed, textbook IO economics would indicate that in the face of losses, firms will exit until economic profit equals zero. Paraphrasing Alfred Marshall, “Which blade of the scissors is doing the cutting?”

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to Philippon’s conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.

Regressing the U.S. airfare price index against Philippon’s concentration data (and controlling for general inflation) finds that if U.S. concentration in 2015 had been the same as in 1995, U.S. airfares would be about 2.8% lower. That a 1,250-point increase in HHI is associated with only a 2.8% increase in prices indicates that the increased concentration among U.S. airlines has led to no significant increase in consumer prices.
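To see the mechanics, here is a minimal sketch of that back-of-the-envelope exercise. The series below are hypothetical stand-ins (the post’s underlying airfare, HHI, and CPI data are not reproduced here), constructed so that a roughly 1,250-point HHI rise corresponds to a roughly 2.8% fare effect; only the structure of the regression follows the text above.

```python
# Minimal sketch of the regression described above, using hypothetical
# stand-in series -- not the post's actual airfare, HHI, or CPI data.
import numpy as np
import statsmodels.api as sm

years = np.arange(1995, 2016)
# Hypothetical HHI: flat, then a ~1,250-point spike after the Great Recession
hhi = np.concatenate([np.full(13, 1100.0), np.linspace(1100.0, 2350.0, 8)])
cpi = 100 * 1.022 ** (years - 1995)  # hypothetical general-inflation index
# Hypothetical fare index: inflation plus a ~2.8% effect of the HHI rise
fares = cpi * (1 + 0.000022 * (hhi - hhi[0]))

# Regress log fares on HHI, controlling for general inflation
X = sm.add_constant(np.column_stack([hhi, np.log(cpi)]))
fit = sm.OLS(np.log(fares), X).fit()

beta_hhi = fit.params[1]
# By construction, this prints roughly +2.8%
print(f"Implied fare change from a 1,250-point HHI rise: {np.exp(beta_hhi * 1250) - 1:.1%}")
```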

Also, if consumers are truly worse off, one would expect to see a drop off or slow down in the use of air travel. An eyeballing of passenger data does not fit the populist narrative. Instead, we see airlines are carrying more passengers and consumers are paying lower prices on average.

While it’s true that low-cost airlines have shaken up air travel in the EU, the differences are not solely explained by differences in market concentration. For example, U.S. regulations prohibit foreign airlines from operating domestic flights while EU carriers compete against operators from other parts of Europe. While the WSJ’s figures tell an interesting story of concentration, prices, and profits, they do not provide a compelling case of anticompetitive conduct.

A spate of recent newspaper investigations and commentary has focused on Apple allegedly discriminating against rivals in the App Store. The underlying assumption is that Apple, as a vertically integrated entity that both operates a platform for third-party apps and makes its own apps, is acting nefariously whenever it “discriminates” against rival apps through prioritization, enters popular app markets itself, or charges a “tax” or “surcharge” on rival apps.

For most people, the word discrimination has a pejorative connotation of animus based upon prejudice: racism, sexism, homophobia. One of the definitions you will find in the dictionary reflects this. But another definition is a lot less charged: the act of making or perceiving a difference. (This is what people mean when they say that a person has a discriminating palate, or a discriminating taste in music, for example.)

In economics, discrimination can be a positive attribute. For instance, effective price discrimination can result in wealthier consumers paying a higher price than less well off consumers for the same product or service, and it can ensure that products and services are in fact available for less-wealthy consumers in the first place. That would seem to be a socially desirable outcome (although under some circumstances, perfect price discrimination can be socially undesirable). 

Antitrust law rightly condemns conduct only when it harms competition, not simply when it harms a competitor. This is because it is competition that enhances consumer welfare, not the presence or absence of a competitor — or, indeed, the profitability of competitors. The difficult task for antitrust enforcers is to determine when a vertically integrated firm with “market power” in an upstream market is able to effectively discriminate against rivals in a downstream market in a way that harms consumers.

Even assuming the claims of critics are true, alleged discrimination by Apple against competitor apps in the App Store may harm those competitors, but it doesn’t necessarily harm either competition or consumer welfare.

The three potential antitrust issues facing Apple can be summarized as:

  • prioritization of Apple’s own apps in App Store search results;
  • Apple’s entry into popular app markets in competition with third-party apps; and
  • the commission (the so-called “tax” or “surcharge”) charged to rival apps sold through the App Store.

There is nothing new here economically. All three issues are analogous to claims against other tech companies. But, as I detail below, the evidence to establish any of these claims at best represents harm to competitors, and fails to establish any harm to the competitive process or to consumer welfare.

Prioritization

Antitrust enforcers have rejected similar prioritization claims against Google. For instance, rivals like Microsoft and Yelp have funded attacks against Google, arguing the search engine is harming competition by prioritizing its own services in its product search results over competitors. As ICLE and affiliated scholars have pointed out, though, there is nothing inherently harmful to consumers about such prioritization. There are also numerous benefits in platforms directly answering queries, even if it ends up directing users to platform-owned products or services.

As Geoffrey Manne has observed:

there is good reason to believe that Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to vigorously compete and to decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content to partially displace the original “ten blue links” design of its search results page and offer its own answers to users’ queries in its stead. 

Here, the antitrust case against Apple for prioritization is similarly flawed. For example, as noted in a recent article in the WSJ, users often use the App Store search in order to find apps they already have installed:

“Apple customers have a very strong connection to our products and many of them use search as a way to find and open their apps,” Apple said in a statement. “This customer usage is the reason Apple has strong rankings in search, and it’s the same reason Uber, Microsoft and so many others often have high rankings as well.” 

If a substantial portion of searches within the App Store are for apps already on the iPhone, then showing the Apple app near the top of the search results could easily be consumer welfare-enhancing. 

Apple is also theoretically leaving money on the table by prioritizing its (already pre-loaded) apps over third-party apps. If its algorithm promotes its own free apps over third-party apps that would earn it a 30% commission — additional revenue — the prioritization couldn’t plausibly be characterized as a “benefit” to Apple. Apple is ultimately in the business of selling hardware. Losing iPhone or iPad customers by prioritizing apps consumers want less would not be a winning business strategy.

Further, it stands to reason that those who use an iPhone may have a preference for Apple apps. Such consumers would be naturally better served by seeing Apple’s apps prioritized over third-party developer apps. And if consumers do not prefer Apple’s apps, rival apps are merely seconds of scrolling away.

Moreover, all of the above assumes that Apple is engaging in sufficiently pervasive discrimination through prioritization to have a major impact on the app ecosystem. But substantial evidence exists that the universe of searches for which Apple’s algorithm prioritizes Apple apps is small. For instance, most searches are for branded apps already known by the searcher:

Keywords: how many are brands?

  • Top 500: 58.4%
  • Top 400: 60.75%
  • Top 300: 68.33%
  • Top 200: 80.5%
  • Top 100: 86%
  • Top 50: 90%
  • Top 25: 92%
  • Top 10: 100%

This is corroborated by data from the NYT’s own study, which suggests Apple prioritized its own apps first in only roughly 1% of the overall keywords queried.

Whatever the precise extent of prioritization, any claims of harm are undermined by the reality that almost 99% of App Store search results don’t list Apple apps first.

The fact is, very few keyword searches are even allegedly affected by prioritization. And the algorithm is often adjusting to searches for apps already pre-loaded on the device. Under these circumstances, it is very difficult to conclude consumers are being harmed by prioritization in search results of the App Store.

Entry

The issue of Apple building apps to compete with popular apps in its marketplace is similar to complaints about Amazon creating its own brands to compete with what is sold by third parties on its platform. For instance, as reported multiple times in the Washington Post:

Clue, a popular app that women use to track their periods, recently rocketed to the top of the App Store charts. But the app’s future is now in jeopardy as Apple incorporates period and fertility tracking features into its own free Health app, which comes preinstalled on every device. Clue makes money by selling subscriptions and services in its free app. 

However, there is nothing inherently anticompetitive about retailers selling their own brands. If anything, entry into the market is normally procompetitive. As Randy Picker recently noted with respect to similar claims against Amazon: 

The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws… I think that is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to build-or-bought first-party inventory. We have entire bodies of law— copyright, patent, trademark and more—that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

If anything, Apple is in an even better position than Amazon. Apple invests in app development not because the apps themselves generate revenue, but because it wants people to use its hardware: the iPhones, iPads, and Apple Watches. The reason Apple created an App Store in the first place is that it allows Apple to make more money from selling devices. To promote security on those devices, Apple institutes rules for the App Store, but it ultimately decides whether to create its own apps and provide access to other apps based upon its desire to maximize the value of the device. If Apple chooses to create free apps in order to improve iOS for users and sell more hardware, that is not a harm to competition.

Apple’s ability to enter into popular app markets should not be constrained unless it can be shown that by giving consumers another choice, consumers are harmed. As noted above, most searches in the App Store are for branded apps to begin with. If consumers already know what they want in an app, it hardly seems harmful for Apple to offer — and promote — its own, additional version as well. 

In the case of Clue, if Apple creates a free health app, it may hurt sales for Clue. But it doesn’t hurt consumers who want the functionality and would prefer to get it from Apple for free. This sort of product evolution is not harming competition, but enhancing it. And, it must be noted, Apple doesn’t exclude Clue from its devices. If Clue indeed offers a better product, or one that some users prefer, users remain able to find it and use it.

The so-called App Store “Tax”

The argument that Apple has an unfair competitive advantage over rival apps, which must pay commissions to Apple to be on the App Store (a “tax” or “surcharge”), has similarly produced no evidence of harm to consumers.

Apple invested heavily in building the iPhone and the App Store. That infrastructure has created an incredibly lucrative marketplace for app developers to exploit. And, lest we forget a point fundamental to our legal system, Apple’s App Store is its property.

The WSJ and NYT stories give the impression that Apple uses its commissions on third-party apps to reduce competition for its own apps. However, this is inconsistent with how Apple charges its commission.

For instance, Apple doesn’t charge commissions on free apps, which make up 84% of the App Store. Apple also doesn’t charge commissions for apps that are free to download but are supported by advertising — including hugely popular apps like Yelp, Buzzfeed, Instagram, Pinterest, Twitter, and Facebook. Even “reader” apps, where users purchase or subscribe to content outside the app but use the app to access that content, are not subject to commissions; examples include Spotify, Netflix, Amazon Kindle, and Audible. Apps for “physical goods and services” — like Amazon, Airbnb, Lyft, Target, and Uber — are also free to download and are not subject to commissions. The class of apps subject to a 30% commission includes the following (a rough sketch of the fee schedule appears after the list):

  • paid apps (like many games),
  • free apps with in-app purchases (other games and services like Skype and TikTok),
  • free apps with digital subscriptions (like Pandora and Hulu), which pay a 30% commission in a subscriber’s first year and 15% in subsequent years, and
  • cross-platform apps (Dropbox, Hulu, and Minecraft) that allow digital goods and services to be purchased in-app; Apple collects its commission on in-app sales but not on sales from other platforms.
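
To make the subscription tiers concrete, here is a minimal sketch of the fee schedule described above. The 30%/15% rates come from the reporting cited in this post; the function name, the $9.99 price, and the subscription length are hypothetical illustrations, not Apple’s actual billing logic.

def apple_commission(monthly_price, months):
    """Split subscription revenue between Apple and the developer,
    assuming a 30% commission in a subscriber's first 12 months
    and 15% thereafter (the tiers described above)."""
    apple, developer = 0.0, 0.0
    for month in range(months):
        rate = 0.30 if month < 12 else 0.15
        apple += monthly_price * rate
        developer += monthly_price * (1 - rate)
    return round(apple, 2), round(developer, 2)

# A hypothetical $9.99/month subscription held for two years:
print(apple_commission(9.99, 24))  # roughly (53.95, 185.81)

On these assumptions, Apple’s effective take falls toward 15% the longer a subscriber stays, a nuance the flat “30% tax” label obscures.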

Despite protestations to the contrary, these costs are hardly unreasonable: third-party apps receive the benefit not only of being in Apple’s App Store (without which they wouldn’t have any opportunity to earn revenue from sales on Apple’s platform), but also of the features and other investments Apple continues to pour into its platform — investments that make the ecosystem better for consumers and app developers alike. There is enormous value to the platform Apple has invested in, and a great deal of it is willingly shared with developers and consumers. Asking those who use the platform to pay for it is not anticompetitive.

In fact, these benefits are probably even more important for smaller developers than for bigger ones that can invest in the back end necessary to reach consumers without the App Store, like Netflix, Spotify, and Amazon Kindle. For apps without brand reputation (and giant marketing budgets), consumers’ ability to trust that downloading the app will not lead to the installation of malware (as often occurs when downloading from the web) is surely essential to small developers’ ability to compete. The App Store offers this.

Despite the claims made in Spotify’s complaint against Apple, Apple doesn’t have a duty to deal with app developers. Indeed, Apple could theoretically fill the App Store with only apps that it developed itself, like Apple Music. Instead, Apple has opted for a platform business model, which entails the creation of a new outlet for others’ innovation and offerings. This is pro-consumer in that it created an entire marketplace that consumers probably didn’t even know they wanted — and certainly had no means to obtain — until it existed. Spotify, which out-competed iTunes to the point that Apple had to go back to the drawing board and create Apple Music, cannot realistically complain that Apple’s entry into music streaming is harmful to competition. Rather, it is precisely what vigorous competition looks like: the creation of more product innovation, lower prices, and arguably (at least for some) higher quality.

Interestingly, Spotify is not even subject to the App Store commission. Instead, Spotify offers iPhone users a workaround for obtaining its ad-free premium version on iOS. What Spotify actually desires is the ability to sell premium subscriptions to Apple device users without paying anything above the de minimis up-front cost to Apple for the creation and maintenance of the App Store. It is unclear how many potential Spotify users are affected by the inability to buy the ad-free version directly, since Spotify discontinued offering it within the App Store. But, whatever the potential harm to Spotify itself, there’s little reason to think consumers or competition bear any of it.

Conclusion

There is no evidence that Apple’s alleged “discrimination” against rival apps harms consumers. Indeed, the opposite would seem to be the case. Regulatory discrimination against successful tech platforms like Apple and its App Store is far more likely to harm consumers.

Why Data Is Not the New Oil

Alec Stapp —  8 October 2019

“Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing data can be a large fixed cost). Under perfect competition, the market-clearing price equals the marginal cost of production, which is why data is traded for free services while oil still requires cold, hard cash.

5. Oil is a search good; data is an experience good

Oil is a search good, meaning its value can be assessed prior to purchase. By contrast, data tends to be an experience good because companies don’t know how much a new dataset is worth until it has been combined with pre-existing datasets and deployed using algorithms (from which value is derived). This is one reason why purpose-limitation rules can have unintended consequences. If firms are unable to predict what data they will need in order to develop new products, then restricting what data they’re allowed to collect is per se anti-innovation.

6. Oil has constant returns to scale; data has rapidly diminishing returns

As an energy input into a mechanical process, oil has relatively constant returns to scale (e.g., when oil is used as the fuel source to power a machine). When data is used as an input for an algorithm, it shows rapidly diminishing returns, as the charts collected in a presentation by Google’s Hal Varian demonstrate. The initial training data is hugely valuable for increasing an algorithm’s accuracy. But as you increase the dataset by a fixed amount each time, the improvements steadily decline (because new data is only helpful insofar as it’s differentiated from the existing dataset).
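
This learning-curve pattern is easy to reproduce. Below is a toy sketch — my illustration, not Varian’s data — that trains a simple classifier on progressively larger slices of a synthetic dataset; the dataset parameters and sample sizes are arbitrary assumptions chosen only to show the shape of the curve.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification task; all parameters are arbitrary.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Quadruple the training set at each step and watch the gains shrink.
for n in (100, 400, 1600, 6400):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:>5} training examples -> test accuracy {acc:.3f}")

# Typical output: accuracy jumps between the first steps, then barely
# moves, because each new batch of data mostly duplicates information
# the model already has.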

7. Oil is valuable; data is worthless

The features detailed above — rivalrousness, fungibility, marginal cost, returns to scale — all lead to perhaps the most important distinction between oil and data: The average barrel of oil is valuable (currently $56.49) and the average dataset is worthless (on the open market). As Will Rinehart showed, putting a price on data is a difficult task. But when data brokers and other intermediaries in the digital economy do try to value data, the prices are almost uniformly low. The Financial Times had the most detailed numbers on what personal data is sold for in the market:

  • “General information about a person, such as their age, gender and location is worth a mere $0.0005 per person, or $0.50 per 1,000 people.”
  • “A person who is shopping for a car, a financial product or a vacation is more valuable to companies eager to pitch those goods. Auto buyers, for instance, are worth about $0.0021 a pop, or $2.11 per 1,000 people.”
  • “Knowing that a woman is expecting a baby and is in her second trimester of pregnancy, for instance, sends the price tag for that information about her to $0.11.”
  • “For $0.26 per person, buyers can access lists of people with specific health conditions or taking certain prescriptions.”
  • “The company estimates that the value of a relatively high Klout score adds up to more than $3 in word-of-mouth marketing value.”
  • “[T]he sum total for most individuals often is less than a dollar.”

Data is a specific asset, meaning it has “a significantly higher value within a particular transacting relationship than outside the relationship.” We only think data is so valuable because tech companies are so valuable. In reality, it is the combination of high-skilled labor, large capital expenditures, and cutting-edge technologies (e.g., machine learning) that makes those companies so valuable. Yes, data is an important component of these production functions. But to claim that data is responsible for all the value created by these businesses, as Lanier does in his NYT op-ed, is farcical (and reminiscent of the labor theory of value). 

Conclusion

People who analogize data to oil or gold may merely be trying to convey that data is as valuable in the 21st century as those commodities were in the 20th century (though, as argued above, that is a dubious proposition). If the comparison stopped there, it would be relatively harmless. But there is a real risk that policymakers might take the analogy literally and regulate data in the same way they regulate commodities. As this article shows, data has many unique properties that are simply incompatible with 20th-century modes of regulation.

A better — though imperfect — analogy, as author Bernard Marr suggests, would be renewable energy. The sources of renewable energy are all around us — solar, wind, hydroelectric — and there is more available than we could ever use. We just need the right incentives and technology to capture it. The same is true for data. We leave our digital fingerprints everywhere — we just need to dust for them.

In March of this year, Elizabeth Warren announced her proposal to break up Big Tech in a blog post on Medium. She tried to paint the tech giants as dominant players crushing their smaller competitors and strangling the open internet. This line in particular stood out: “More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook.”

This statistic immediately struck me as outlandish, but I knew I would need to do some digging to fact check it. After seeing the claim repeated in a recent profile of the Open Markets Institute — “Google and Facebook control websites that receive 70 percent of all internet traffic” — I decided to track down the original source for this surprising finding. 

Warren’s blog post links to a November 2017 Newsweek article — “Who Controls the Internet? Facebook and Google Dominance Could Cause the ‘Death of the Web’” — written by Anthony Cuthbertson. The piece is even more alarmist than Warren’s blog post: “Facebook and Google now have direct influence over nearly three quarters of all internet traffic, prompting warnings that the end of a free and open web is imminent.”

The Newsweek article, in turn, cites an October 2017 blog post by André Staltz, an open source freelancer, on his personal website titled “The Web began dying in 2014, here’s how”. His takeaway is equally dire: “It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.” Staltz claims the blog post took “months of research to write”, but the headline statistic is merely aggregated from a December 2015 blog post by Parse.ly, a web analytics and content optimization software company.

Source: André Staltz

The Parse.ly article — “Facebook Continues to Beat Google in Sending Traffic to Top Publishers” — is about external referrals (i.e., outside links) to publisher sites (not total internet traffic) and says the “data set used for this study included around 400 publisher domains.” This is not even a random sample, much less a comprehensive measure of total internet traffic. Here’s how they summarize their results: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.”

Source: Parse.ly

So, using the sources provided by the respective authors, the claim from Elizabeth Warren that “more than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook” can be more accurately rewritten as “more than 70 percent of external links to 400 publishers come from sites owned or operated by Google and Facebook.” When framed that way, it’s much less conclusive (and much less scary).

But what’s the real statistic for total internet traffic? This is a surprisingly difficult question to answer, because there is no single way to measure it: Are we talking about share of users, or user-minutes, or bits, or total visits, or unique visits, or referrals? According to Wikipedia, “Common measurements of traffic are total volume, in units of multiples of the byte, or as transmission rates in bytes per certain time units.”

One of the more comprehensive efforts to answer this question is undertaken annually by Sandvine. The networking equipment company uses its vast installed footprint of equipment across the internet to generate statistics on connections, upstream traffic, downstream traffic, and total internet traffic (summarized in the table below). This dataset covers both browser-based and app-based internet traffic, which is crucial for capturing the full picture of internet user behavior.

Source: Sandvine

Looking at two categories of traffic analyzed by Sandvine — downstream traffic and overall traffic — gives the lie to the narrative pushed by Warren and others. As you can see in the chart below, HTTP media streaming — a category for smaller streaming services that Sandvine has not yet tracked individually — represented 12.8% of global downstream traffic, while Netflix accounted for 12.6%. According to Sandvine, “the aggregate volume of the long tail is actually greater than the largest of the short-tail providers.” So much for the open internet being smothered by the tech giants.

Source: Sandvine

As for Google and Facebook? The report found that Google-operated sites receive 12.00 percent of total internet traffic while Facebook-controlled sites receive 7.79 percent. In other words, less than 20 percent of all Internet traffic goes through sites owned or operated by Google or Facebook. While this statistic may be less eye-popping than the one trumpeted by Warren and other antitrust activists, it does have the virtue of being true.

Source: Sandvine

On March 19-20, 2020, the University of Nebraska College of Law will be hosting its third annual roundtable on closing the digital divide. UNL is expanding its program this year to include a one-day roundtable that focuses on the work of academics and researchers who are conducting empirical studies of the rural digital divide. 

Academics and researchers interested in having their work featured in this event are now invited to submit pieces for consideration. Submissions are due by November 18, 2019, via this form. The authors of papers and projects selected for inclusion will be notified by December 9, 2019. Research honoraria of up to $5,000 may be awarded for selected projects.

Example topics include cost studies of rural wireless deployments, comparative studies of the effects of ACAM funding, event studies of legislative interventions such as allowing customers unserved by carriers in their home exchange to request service from carriers in adjoining exchanges, comparative studies of the effectiveness of various federal and state funding mechanisms, and cost studies of different sorts of municipal deployments. This list is far from exhaustive.

Any questions about this event or the request for projects can be directed to Gus Hurwitz at ghurwitz@unl.edu or Elsbeth Magilton at elsbeth@unl.edu.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest-margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization, with the vertically integrated firm reaping most of the gains (a toy numerical sketch below illustrates the idea). The folklore fits nicely with economic theory. But the facts may not fit the theory.
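
For readers who want the intuition behind the double-marginalization claim, here is a toy numerical sketch — mine, not Professor Salop’s — using linear demand P = a − b·Q and an upstream marginal cost c; all parameter values are arbitrary.

# Double marginalization with linear demand P = a - b*Q and upstream cost c.
a, b, c = 10.0, 1.0, 2.0  # arbitrary demand and cost parameters

# Vertically integrated monopolist: one markup over cost.
q_int = (a - c) / (2 * b)
p_int = a - b * q_int
profit_int = (p_int - c) * q_int

# Separate upstream and downstream monopolists: two successive markups.
w = (a + c) / 2             # upstream's optimal wholesale price
q_sep = (a - w) / (2 * b)   # downstream's optimal quantity given w
p_sep = a - b * q_sep
profit_sep = (w - c) * q_sep + (p_sep - w) * q_sep

print(f"integrated: price {p_int}, joint profit {profit_int}")  # 6.0, 16.0
print(f"separate:   price {p_sep}, joint profit {profit_sep}")  # 8.0, 12.0
# Consumers pay more and the two firms jointly earn less than the
# integrated firm would, which is why eliminating the double markup is
# a classic efficiency rationale for vertical mergers.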

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevys were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevys and Papa Gino’s have filed for bankruptcy, and Chevys has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurant strategy was a failure, it seems odd that the company would continue making acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged-buyout era added financial pressure. Many restaurant groups were filing for bankruptcy, and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

These days, lacking a coherent legal theory presents no challenge to the would-be antitrust crusader. In a previous post, we noted how Shaoul Sussman’s predatory pricing claims against Amazon lacked a serious legal foundation. Sussman has returned with a new post that tries to build out his fledgling theory, but it fares little better under even casual scrutiny.

According to Sussman, Amazon’s allegedly anticompetitive 

conduct not only cemented its role as the primary destination for consumers that shop online but also helped it solidify its power over brands.

Further, the company 

was willing to go to great lengths to ensure brand availability and inventory, including turning to the grey market, recruiting unauthorized sellers, and even selling diverted goods and counterfeits to its customers.

Sussman is trying to make out a fairly convoluted predatory pricing case, but once again without ever truly connecting the dots in a way that develops a cognizable antitrust claim. According to Sussman: 

Amazon sold products as a first-party to consumers on its platform at below average variable cost and [] Amazon recently began to recoup its losses by shifting the bulk of the transactions that occur on the website to its marketplace, where millions of third-party sellers pay hefty fees that enable Amazon to take a deep cut of every transaction.

Sussman now bases this claim on an allegation that Amazon relied on “grey market” sellers on its platform, the presence of which forces legitimate brands onto the Amazon Marketplace. Moreover, Sussman claims that — somehow — these brands coming on board on Amazon’s terms forces them to raise prices elsewhere, and that the net effect of this process at scale is that prices across the economy have risen.

As we detail below, Sussman’s chimerical argument depends on conflating unrelated concepts and relies on non-public anecdotal accounts to piece together an argument that, even if you squint at it, doesn’t make out a viable theory of harm.

Conflating legal reselling and illegal counterfeit selling as the “grey market”

The biggest problem with Sussman’s new theory is that he conflates pro-consumer unauthorized reselling and anti-consumer illegal counterfeiting, erroneously labeling both the “grey market”: 

Amazon had an ace up its sleeve. My sources indicate that the company deliberately turned to and empowered the “grey market” — where both genuine, authentic goods and knockoffs are purchased and resold outside of brands’ intended distribution pipes — to dominate certain brands.

By definition, grey market goods are — as the link provided by Sussman states — “goods sold outside the authorized distribution channels by entities which may have no relationship with the producer of the goods.” Yet Sussman suggests this also encompasses counterfeit goods. This conflation is no minor problem for his argument. In general, the grey market is legal and beneficial for consumers. Brands such as Nike may try to limit the distribution of their products to channels the company controls, but they cannot legally prevent third parties from purchasing Nike products and reselling them on Amazon (or anywhere else).

This legal activity can increase consumer choice and can lead to lower prices, even though Sussman’s framing omits these key possibilities:

In the course of my conversations with former Amazon employees, some reported that Amazon actively sought out and recruited unauthorized sellers as both third-party sellers and first-party suppliers. Being unauthorized, these sellers were not bound by the brands’ policies and therefore outside the scope of their supervision.

In other words, Amazon actively courted third-party sellers who could bring legitimate goods, priced competitively, onto its platform. Perhaps this gives Amazon “leverage” over brands that would otherwise like to control the activities of legal resellers, but it’s exceedingly strange to try to frame this as nefarious or anticompetitive behavior.

Of course, we shouldn’t ignore the fact that there are also potential consumer gains when Amazon tries to restrict grey market activity by partnering with brands. But it is up to Amazon and the brands to determine through a contracting process when it makes the most sense to partner and control the grey market, or when consumers are better served by allowing unauthorized resellers. The point is: there is simply no reason to assume that either of these approaches is inherently problematic. 

Yet, even when Amazon tries to restrict its platform to authorized resellers, it exposes itself to a whole different set of complaints. In 2018, the company made a deal with Apple to bring the iPhone maker onto its marketplace platform. In exchange for Apple selling its products directly on Amazon, the latter agreed to remove unauthorized Apple resellers from the platform. Sussman portrays this as a welcome development in line with the policy changes he recommends. 

But news reports last month indicate the FTC is reviewing this deal for potential antitrust violations. One is reminded of Ronald Coase’s famous lament that he “had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down they said it was predatory pricing, and when they stayed the same they said it was tacit collusion.” It seems the same is true for Amazon and its relationship with the grey market.

Amazon’s incentive to remove counterfeits

What is illegal — and explicitly against Amazon’s marketplace rules — is selling counterfeit goods. Counterfeit goods destroy consumer trust in the Amazon ecosystem, which is why the company actively polices its listings for abuses. And as Sussman himself notes, when there is an illegal counterfeit listing, “Brands can then file a trademark infringement lawsuit against the unauthorized seller in order to force Amazon to suspend it.”

Sussman’s attempt to hang counterfeiting problems around Amazon’s neck obscures the actual truth about counterfeiting: probably the most cost-effective way to stop counterfeiting would be simply to prohibit all third-party sellers. Yet a serious cost-benefit analysis of Amazon’s platforms could hardly support such an action (and it would harm the small sellers that antitrust activists seem most concerned about).

But, more to the point, if Amazon’s strategy is to encourage piracy, it’s doing a terrible job. It engages in litigation against known pirates, and earlier this year it rolled out a suite of tools (called Project Zero) meant to help brand owners report and remove known counterfeits. As part of this program, according to Amazon, “brands provide key data points about themselves (e.g., trademarks, logos, etc.) and we scan over 5 billion daily listing update attempts, looking for suspected counterfeits.” And when a brand identifies a counterfeit listing, they can remove it using a self-service tool (without needing approval from Amazon). 

Any large platform that tries to make it easy for independent retailers to reach customers is going to run into a counterfeit problem eventually. In his rush to discover some theory of predatory pricing to stick on Amazon, Sussman ignores the tradeoffs implicit in running a large platform that essentially democratizes retail:

Indeed, the democratizing effect of online platforms (and of technology writ large) should not be underestimated. While many are quick to disparage Amazon’s effect on local communities, these arguments fail to recognize that by reducing the costs associated with physical distance between sellers and consumers, e-commerce enables even the smallest merchant on Main Street, and the entrepreneur in her garage, to compete in the global marketplace.

In short, Amazon Marketplace is designed to make it as easy as possible for anyone to sell their products to Amazon customers. As the WSJ reported:

Counterfeiters, though, have been able to exploit Amazon’s drive to increase the site’s selection and offer lower prices. The company has made the process to list products on its website simple—sellers can register with little more than a business name, email and address, phone number, credit card, ID and bank account—but that also has allowed impostors to create ersatz versions of hot-selling items, according to small brands and seller consultants.

The existence of counterfeits is a direct result of policies designed to lower prices and increase consumer choice. Thus, we would expect some number of counterfeits to exist as a result of running a relatively open platform. The question is not whether counterfeits exist, but — at least in terms of Sussman’s attempt to use antitrust law — whether there is any reason to think that Amazon’s conduct with respect to counterfeits is actually anticompetitive. But, even if we assume for the moment that there is some plausible way to draw a competition claim out of the existence of counterfeit goods on the platform, his theory still falls apart. 

There is both theoretical and empirical evidence for why Amazon is likely not engaged in the conduct Sussman describes. As a platform owner involved in a repeated game with customers, sellers, and developers, Amazon has an incentive to increase trust within the ecosystem. Counterfeit goods directly destroy that trust and likely decrease sales in the long run. If individuals can’t depend on the quality of goods on Amazon, they can easily defect to Walmart, eBay, or any number of smaller independent sellers. That’s why Amazon enters into agreements with companies like Apple to ensure there are only legitimate products offered. That’s also why Amazon actively sues counterfeiters in partnership with its sellers and brands, and also why Project Zero is a priority for the company.

Sussman relies on private, anecdotal claims while engaging in speculation that is entirely unsupported by public data 

Much of Sussman’s evidence is “[b]ased on conversations [he] held with former employees, sellers, and brands following the publication of [his] paper,” which — to put it mildly — makes it difficult for anyone to take seriously, let alone address head-on. Here’s one example:

One third-party seller, who asked to remain anonymous, was willing to turn over his books for inspection in order to illustrate the magnitude of the increase in consumer prices. Together, we analyzed a single product, of which tens of thousands of units have been sold since 2015. The minimum advertised price for this single product, at any and all outlets, has increased more than 30 percent in the past four years. Despite this fact, this seller’s margins on this product are tighter than ever due to Amazon’s fee increases.

Needless to say, sales data showing the minimum advertised price for a single product “has increased more than 30 percent in the past four years” is not sufficient to prove, well, anything. At minimum, showing an increase in prices above costs would require data from a large and representative sample of sellers. All we have to go on from the article is a vague anecdote representing — maybe — one data point.

Not only is Sussman’s own data impossible to evaluate, but he bases his allegations on speculation that is demonstrably false. For instance, he asserts that Amazon used its leverage over brands in a way that caused retail prices to rise throughout the economy. But his starting point assumption is flatly contradicted by reality: 

To remedy this, Amazon once again exploited brands’ MAP policies. As mentioned, MAP policies effectively dictate the minimum advertised price of a given product across the entire retail industry. Traditionally, this meant that the price of a typical product in a brick and mortar store would be lower than the price online, where consumers are charged an additional shipping fee at checkout.

Sussman presents no evidence for the claim that “the price of a typical product in a brick and mortar store would be lower than the price online.” The widespread phenomenon of showrooming — when a customer examines a product at a brick-and-mortar store but then buys it for a lower price online — belies the notion that prices are higher online. One recent study by Nielsen found that “nearly 75% of grocery shoppers have used a physical store to ‘showroom’ before purchasing online.”

In fact, the company’s downward pressure on prices is so large that researchers now speculate that Amazon and other internet retailers are partially responsible for the low and stagnant inflation in the US over the last decade (dubbing this the “Amazon effect”). It is also curious that Sussman cites shipping fees as the reason prices are higher online while ignoring all the overhead costs of running a brick-and-mortar store, which online retailers don’t incur. The assumption that prices are lower in brick-and-mortar stores doesn’t pass the laugh test.

Conclusion

Sussman can keep trying to tell a predatory pricing story about Amazon, but the more convoluted his theories get — and the less based in empirical reality they are — the less convincing they become. There is a predatory pricing law on the books, but it’s hard to bring a case because, as it turns out, it’s actually really hard to profitably operate as a predatory pricer. Speculating over complicated new theories might be entertaining, but it would be dangerous and irresponsible if these sorts of poorly supported theories were incorporated into public policy.

The FTC’s recent YouTube settlement, with its $170 million fine over charges that YouTube violated the Children’s Online Privacy Protection Act (COPPA), has put the issue of targeted advertising back in the news. With an upcoming FTC workshop and COPPA Rule Review looming, it’s worth looking at this case in more detail and reconsidering COPPA’s 2013 amendment to the definition of personal information.

According to the complaint issued by the FTC and the New York Attorney General, YouTube violated COPPA by collecting personal information of children on its platform without obtaining parental consent. While the headlines scream that this is an egregious violation of privacy and parental rights, a closer look suggests that there is actually very little about the case that normal people would find to be all that troubling. Instead, it appears to be another in the current spate of elitist technopanics.

COPPA defines personal information to include persistent identifiers, like cookies, used for targeted advertising. These cookies allow site operators to have some idea of what kinds of websites a user may have visited previously. Having knowledge of users’ browsing history allows companies to advertise more effectively than is possible with contextual advertisements, which guess at users’ interests based upon the type of content being viewed at the time. The age-old problem for advertisers is that “half the money spent on advertising is wasted; the trouble is they don’t know which half.” While targeted advertising based on web browsing and search history doesn’t completely solve this problem, the fact that such advertising is more lucrative than contextual advertising suggests that it works better for companies.
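
For readers unfamiliar with the mechanics, here is a minimal sketch of what makes an identifier “persistent.” It is an illustration only; the cookie names and the two-year lifetime are my assumptions, not anything YouTube or the FTC has described.

import uuid
from http import cookies

jar = cookies.SimpleCookie()

# A session cookie: no Max-Age or Expires, so the browser discards it
# when it closes, and the ID cannot link visits across sessions.
jar["session_id"] = uuid.uuid4().hex

# A persistent identifier: Max-Age keeps the cookie on the device
# (here, two years), letting an ad network recognize the same browser
# across visits and build a browsing history without knowing who the
# user is.
jar["ad_id"] = uuid.uuid4().hex
jar["ad_id"]["max-age"] = 60 * 60 * 24 * 365 * 2

print(jar.output())  # the Set-Cookie headers a server would send

Note that nothing in the persistent identifier itself names the user; the concern is the linkage of visits over time, which is precisely the feature the 2013 COPPA amendment swept into the definition of “personal information.”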

COPPA, since the 2013 update, states that persistent identifiers are personal information by themselves, even if not linked to any other information that could be used to actually identify children (i.e., anyone under 13 years old). 

As a consequence of this rule, YouTube doesn’t allow children under 13 to create an account. Instead, YouTube created a separate mobile application called YouTube Kids with curated content targeted at younger users. That application serves only contextual advertisements that do not rely on cookies or other persistent identifiers, but the content available on YouTube Kids also remains available on YouTube. 

YouTube’s error, in the eyes of the FTC, was that it left it to channel owners on its general-audience site to determine whether to monetize their content through targeted advertising or to opt out and use only contextual advertisements. As it turns out, many of those channels — including channels identified by the FTC as “directed to children” — made the more lucrative choice by opting for targeted advertisements on their channels.

Whether YouTube’s practices violate the letter of COPPA or not, a more fundamental question remains unanswered: What is the harm, exactly?

COPPA takes for granted that it is harmful for kids to receive targeted advertisements, even where, as here, the targeting is based not on any knowledge about the users as individuals, but upon the browsing and search history of the device they happen to be on. But children under 13 are extremely unlikely to have purchased the devices they use, to pay for the Internet access those devices require, or to have any disposable income or means of paying for goods and services online. Which makes one wonder: at whom are the advertisements served to children actually aimed? The answer is obvious to everyone but the FTC and those who support the COPPA Rule: the children’s parents.

Television programs aimed at children have long been supported by contextual advertisements for cereal and toys. Tony the Tiger and Lucky the Leprechaun were staples of Saturday morning cartoons when I was growing up, along with all kinds of Hot Wheels commercials. As I soon discovered as a kid, I had the ability to ask my parents to buy these things, but ultimately no ability to buy them on my own. In other words: parental oversight is essentially built into any type of advertisement children see, in the sense that few children can realistically make their own purchases or even view those advertisements without their parents giving them a device and internet access to do so.

When broken down like this, it is much harder to see the harm. It’s one thing to create regulatory schemes to prevent stalkers, creepers, and perverts from using online information to interact with children. It’s quite another to greatly reduce the ability of children’s content to generate revenue by use of relatively anonymous persistent identifiers like cookies — and thus, almost certainly, to greatly reduce the amount of content actually made for and offered to children.

On the one hand, COPPA thus disregards the possibility that controls that take advantage of parental oversight may be the most cost-effective form of protection in such circumstances. As Geoffrey Manne noted regarding the FTC’s analogous complaint against Amazon under the FTC Act, which ignored the possibility that Amazon’s in-app purchasing scheme was tailored to take advantage of parental oversight in order to avoid imposing excessive and needless costs:

[For the FTC], the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible….

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges….

At the same time, enforcement of COPPA against targeted advertising on kids’ content will have perverse and self-defeating consequences. As Berin Szoka notes:

This settlement will cut advertising revenue for creators of child-directed content by more than half. This will give content creators a perverse incentive to mislabel their content. COPPA was supposed to empower parents, but the FTC’s new approach actually makes life harder for parents and cripples functionality even when they want it. In short, artists, content creators, and parents will all lose, and it is not at all clear that this will do anything to meaningfully protect children.

This war against targeted advertising aimed at children has a cost. While many cheer the fine levied against YouTube (or think it wasn’t high enough) and the promised changes to its platform (though the dissenting Commissioners didn’t think those went far enough, either), the actual result will be less content — and especially less free content — available to children. 

Far from being a win for parents and children, the shift in oversight responsibility from parents to the FTC will likely lead to less-effective oversight, more difficult user interfaces, less children’s programming, and higher costs for everyone — all without obviously mitigating any harm in the first place.