A spate of recent newspaper investigations and commentary has focused on Apple allegedly discriminating against rivals in the App Store. The underlying assumption is that Apple, as a vertically integrated entity that both operates a platform for third-party apps and makes its own apps, is acting nefariously whenever it “discriminates” against rival apps through prioritization, enters popular app markets, or charges a “tax” or “surcharge” on rival apps.

For most people, the word discrimination has a pejorative connotation of animus based upon prejudice: racism, sexism, homophobia. One of the definitions you will find in the dictionary reflects this. But another definition is a lot less charged: the act of making or perceiving a difference. (This is what people mean when they say that a person has a discriminating palate, or a discriminating taste in music, for example.)

In economics, discrimination can be a positive attribute. For instance, effective price discrimination can result in wealthier consumers paying a higher price than less well-off consumers for the same product or service, and it can ensure that products and services are in fact available for less-wealthy consumers in the first place. That would seem to be a socially desirable outcome (although under some circumstances, perfect price discrimination can be socially undesirable).
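To make the point concrete, here is a toy numerical model (all numbers are invented for illustration): with a single uniform price, the seller serves only high-valuation buyers, while price discrimination brings lower-valuation buyers into the market as well.

```python
# Toy example of price discrimination (all numbers invented).
# One seller with marginal cost $3; 100 "wealthy" buyers who each value
# the product at $10, and 100 "less-wealthy" buyers who value it at $4.

COST = 3
GROUPS = {"wealthy": (100, 10), "less_wealthy": (100, 4)}  # (count, valuation)

def profit_uniform(price: int) -> int:
    """Profit when one price is charged to everyone: only buyers whose
    valuation meets the price actually buy."""
    buyers = sum(n for n, wtp in GROUPS.values() if wtp >= price)
    return buyers * (price - COST)

# Best uniform price: charge $10 and serve only the wealthy group.
best_uniform = max(profit_uniform(p) for p in (4, 10))

# With discrimination, each group is charged (up to) its own valuation,
# so the less-wealthy group is served as well.
discrim = sum(n * (wtp - COST) for n, wtp in GROUPS.values())

print(best_uniform, discrim)  # 700 800
```

Under the uniform price, the less-wealthy consumers are priced out entirely; under discrimination they are served, output doubles, and the seller earns more. That is the sense in which discrimination can be welfare-enhancing.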

Antitrust law rightly condemns conduct only when it harms competition and not simply when it harms a competitor. This is because it is competition that enhances consumer welfare, not the presence or absence of a competitor — or, indeed, the profitability of competitors. The difficult task for antitrust enforcers is to determine when a vertically integrated firm with “market power” in an upstream market is able to effectively discriminate against rivals in a downstream market in a way that harms consumers.

Even assuming the claims of critics are true, alleged discrimination by Apple against competitor apps in the App Store may harm those competitors, but it doesn’t necessarily harm either competition or consumer welfare.

The three potential antitrust issues facing Apple can be summarized as:

  • prioritizing its own apps over rival apps in App Store search results;
  • entering popular app markets with its own apps; and
  • charging a commission (the alleged “tax” or “surcharge”) on rival apps in the App Store.

There is nothing new here economically. All three issues are analogous to claims against other tech companies. But, as I detail below, the evidence offered to establish any of these claims at best shows harm to competitors; it fails to establish any harm to the competitive process or to consumer welfare.


Antitrust enforcers have rejected similar prioritization claims against Google. For instance, rivals like Microsoft and Yelp have funded attacks against Google, arguing the search engine harms competition by prioritizing its own services over those of competitors in its product search results. As ICLE and affiliated scholars have pointed out, though, there is nothing inherently harmful to consumers about such prioritization. There are also numerous benefits to platforms directly answering queries, even if doing so ends up directing users to platform-owned products or services.

As Geoffrey Manne has observed:

there is good reason to believe that Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to vigorously compete and to decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content to partially displace the original “ten blue links” design of its search results page and offer its own answers to users’ queries in its stead. 

Here, the antitrust case against Apple for prioritization is similarly flawed. For example, as noted in a recent article in the WSJ, users often rely on App Store search to find apps they have already installed:

“Apple customers have a very strong connection to our products and many of them use search as a way to find and open their apps,” Apple said in a statement. “This customer usage is the reason Apple has strong rankings in search, and it’s the same reason Uber, Microsoft and so many others often have high rankings as well.” 

If a substantial portion of searches within the App Store are for apps already on the iPhone, then showing the Apple app near the top of the search results could easily be consumer welfare-enhancing. 

Apple is also theoretically leaving money on the table by prioritizing its (already pre-loaded) apps over third-party apps. If its algorithm promotes Apple’s own free apps over rival apps that would earn Apple a 30% commission, the prioritization is a cost to Apple rather than a “benefit.” Apple is ultimately in the business of selling hardware; losing iPhone or iPad customers by prioritizing apps consumers want less would not be a winning business strategy.

Further, it stands to reason that those who use an iPhone may have a preference for Apple apps. Such consumers would be naturally better served by seeing Apple’s apps prioritized over third-party developer apps. And if consumers do not prefer Apple’s apps, rival apps are merely seconds of scrolling away.

Moreover, all of the above assumes that Apple is engaging in sufficiently pervasive discrimination through prioritization to have a major impact on the app ecosystem. But substantial evidence exists that the universe of searches for which Apple’s algorithm prioritizes Apple apps is small. For instance, most searches are for branded apps already known by the searcher:

Keywords: how many are brands?

  • Top 500: 58.4%
  • Top 400: 60.75%
  • Top 300: 68.33%
  • Top 200: 80.5%
  • Top 100: 86%
  • Top 50: 90%
  • Top 25: 92%
  • Top 10: 100%

This is corroborated by data from the NYT’s own study, which suggests Apple listed its own apps first in only roughly 1% of the overall keywords queried.

Whatever the precise extent of prioritization, any claims of harm are undermined by the reality that almost 99% of App Store search results don’t list Apple apps first.

The fact is, very few keyword searches are even allegedly affected by prioritization. And the algorithm is often adjusting to searches for apps already pre-loaded on the device. Under these circumstances, it is very difficult to conclude consumers are being harmed by prioritization in search results of the App Store.


The issue of Apple building apps to compete with popular apps in its marketplace is similar to complaints about Amazon creating its own brands to compete with what is sold by third parties on its platform. For instance, as reported multiple times in the Washington Post:

Clue, a popular app that women use to track their periods, recently rocketed to the top of the App Store charts. But the app’s future is now in jeopardy as Apple incorporates period and fertility tracking features into its own free Health app, which comes preinstalled on every device. Clue makes money by selling subscriptions and services in its free app. 

However, there is nothing inherently anticompetitive about retailers selling their own brands. If anything, entry into the market is normally procompetitive. As Randy Picker recently noted with respect to similar claims against Amazon: 

The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws… I think that it is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to build-or-bought first-party inventory. We have entire bodies of law — copyright, patent, trademark and more — that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

If anything, Apple is in an even better position than Amazon. Apple invests in app development, not because the apps themselves generate revenue, but because it wants people to use its hardware: iPhones, iPads, and Apple Watches. The reason Apple created an App Store in the first place is that it allows Apple to make more money from selling devices. In order to promote security on those devices, Apple institutes rules for the App Store, but it ultimately decides whether to create its own apps and provide access to other apps based upon its desire to maximize the value of the device. If Apple chooses to create free apps in order to improve iOS for users and sell more hardware, that is not a harm to competition.

Apple’s ability to enter into popular app markets should not be constrained unless it can be shown that by giving consumers another choice, consumers are harmed. As noted above, most searches in the App Store are for branded apps to begin with. If consumers already know what they want in an app, it hardly seems harmful for Apple to offer — and promote — its own, additional version as well. 

In the case of Clue, if Apple creates a free health app, it may hurt sales for Clue. But it doesn’t hurt consumers who want the functionality and would prefer to get it from Apple for free. This sort of product evolution is not harming competition, but enhancing it. And, it must be noted, Apple doesn’t exclude Clue from its devices. If, indeed, Clue offers a better product, or one that some users prefer, they remain able to find it and use it.

The so-called App Store “Tax”

The argument that Apple has an unfair competitive advantage over rival apps which have to pay commissions to Apple to be on the App Store (a “tax” or “surcharge”) has similarly produced no evidence of harm to consumers. 

Apple invested a lot into building the iPhone and the App Store. This infrastructure has created an incredibly lucrative marketplace for app developers to exploit. And, lest we forget a point fundamental to our legal system, Apple’s App Store is its property.

The WSJ and NYT stories give the impression that Apple uses its commissions on third-party apps to reduce competition for its own apps. However, this is inconsistent with how Apple charges its commission.

For instance, Apple doesn’t charge commissions on free apps, which make up 84% of the App Store. Apple also doesn’t charge commissions on apps that are free to download but supported by advertising, including hugely popular apps like Yelp, Buzzfeed, Instagram, Pinterest, Twitter, and Facebook. Even “reader” apps, where users purchase or subscribe to content outside the app but use the app to access that content, are not subject to commissions; examples include Spotify, Netflix, Amazon Kindle, and Audible. Apps for “physical goods and services,” like Amazon, Airbnb, Lyft, Target, and Uber, are also free to download and not subject to commissions. The classes of apps that are subject to a 30% commission are:

  • paid apps (like many games);
  • free apps with in-app purchases (other games, and services like Skype and TikTok);
  • free apps with digital subscriptions (Pandora, Hulu), which pay a 30% commission in the first year and 15% in subsequent years; and
  • cross-platform apps (Dropbox, Hulu, and Minecraft) that sell digital goods and services in-app, with Apple collecting a commission on in-app sales but not on sales made through other platforms.

Despite protestations to the contrary, these costs are hardly unreasonable: third party apps receive the benefit not only of being in Apple’s App Store (without which they wouldn’t have any opportunity to earn revenue from sales on Apple’s platform), but also of the features and other investments Apple continues to pour into its platform — investments that make the ecosystem better for consumers and app developers alike. There is enormous value to the platform Apple has invested in, and a great deal of it is willingly shared with developers and consumers.  It does not make it anticompetitive to ask those who use the platform to pay for it. 

In fact, these benefits are probably even more important for smaller developers than for bigger ones that can invest in the necessary back end to reach consumers without the App Store, like Netflix, Spotify, and Amazon Kindle. For apps without brand reputation (and giant marketing budgets), the ability for consumers to trust that downloading the app will not lead to the installation of malware (as often occurs when downloading from the web) is surely essential to small developers’ ability to compete. The App Store offers this.

Despite the claims made in Spotify’s complaint against Apple, Apple doesn’t have a duty to deal with app developers. Indeed, Apple could theoretically fill the App Store with only apps that it developed itself, like Apple Music. Instead, Apple has opted for a platform business model, which entails the creation of a new outlet for others’ innovation and offerings. This is pro-consumer in that it created an entire marketplace that consumers probably didn’t even know they wanted — and certainly had no means to obtain — until it existed. Spotify, which out-competed iTunes to the point that Apple had to go back to the drawing board and create Apple Music, cannot realistically complain that Apple’s entry into music streaming is harmful to competition. Rather, it is precisely what vigorous competition looks like: the creation of more product innovation, lower prices, and arguably (at least for some) higher quality.

Interestingly, Spotify is not even subject to the App Store commission. Instead, Spotify offers a work-around to iPhone users to obtain its premium version without ads on iOS. What Spotify actually desires is the ability to sell premium subscriptions to Apple device users without paying anything above the de minimis up-front cost to Apple for the creation and maintenance of the App Store. It is unclear how many potential Spotify users are affected by the inability to directly buy the ad-free version since Spotify discontinued offering it within the App Store. But, whatever the potential harm to Spotify itself, there’s little reason to think consumers or competition bear any of it. 


There is no evidence that Apple’s alleged “discrimination” against rival apps harms consumers. Indeed, the opposite would seem to be the case. The regulatory discrimination against successful tech platforms like Apple and the App Store is far more harmful to consumers.

“Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true, there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing data can be a large fixed cost). Under perfect competition, the market-clearing price equals the marginal cost of production, which is why data is traded for free services while oil still requires cold, hard cash.

5. Oil is a search good; data is an experience good

Oil is a search good, meaning its value can be assessed prior to purchasing. By contrast, data tends to be an experience good because companies don’t know how much a new dataset is worth until it has been combined with pre-existing datasets and deployed using algorithms (from which value is derived). This is one reason why purpose limitation rules can have unintended consequences. If firms are unable to predict what data they will need in order to develop new products, then restricting what data they’re allowed to collect is per se anti-innovation.

6. Oil has constant returns to scale; data has rapidly diminishing returns

As an energy input into a mechanical process, oil has relatively constant returns to scale (e.g., when oil is used as the fuel source to power a machine). When data is used as an input for an algorithm, it shows rapidly diminishing returns, as the charts collected in a presentation by Google’s Hal Varian demonstrate. The initial training data is hugely valuable for increasing an algorithm’s accuracy. But as you increase the dataset by a fixed amount each time, the improvements steadily decline (because new data is only helpful in so far as it’s differentiated from the existing dataset).
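To illustrate the diminishing-returns point with a sketch (the power-law learning curve and its constants below are invented for illustration, not taken from Varian’s data), each additional fixed-size batch of training data buys a smaller accuracy gain than the last:

```python
# Hypothetical learning curve: accuracy rises with training-set size n
# as 1 - 0.5 * n**(-0.3). The functional form and constants are invented
# purely to illustrate diminishing returns to data.

def accuracy(n_examples: int) -> float:
    return 1.0 - 0.5 * n_examples ** -0.3

sizes = [10_000, 20_000, 30_000, 40_000, 50_000]
gains = [accuracy(b) - accuracy(a) for a, b in zip(sizes, sizes[1:])]

for (a, b), g in zip(zip(sizes, sizes[1:]), gains):
    print(f"{a:>6} -> {b:>6}: +{g:.4f} accuracy")
# Each fixed 10,000-example increment yields a smaller gain than the last.
```

Any concave learning curve would tell the same story: the first batch of data is worth far more than the tenth, which is exactly the opposite of oil’s roughly constant returns as a fuel input.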

7. Oil is valuable; data is worthless

The features detailed above — rivalrousness, fungibility, marginal cost, returns to scale — all lead to perhaps the most important distinction between oil and data: The average barrel of oil is valuable (currently $56.49) and the average dataset is worthless (on the open market). As Will Rinehart showed, putting a price on data is a difficult task. But when data brokers and other intermediaries in the digital economy do try to value data, the prices are almost uniformly low. The Financial Times had the most detailed numbers on what personal data is sold for in the market:

  • “General information about a person, such as their age, gender and location is worth a mere $0.0005 per person, or $0.50 per 1,000 people.”
  • “A person who is shopping for a car, a financial product or a vacation is more valuable to companies eager to pitch those goods. Auto buyers, for instance, are worth about $0.0021 a pop, or $2.11 per 1,000 people.”
  • “Knowing that a woman is expecting a baby and is in her second trimester of pregnancy, for instance, sends the price tag for that information about her to $0.11.”
  • “For $0.26 per person, buyers can access lists of people with specific health conditions or taking certain prescriptions.”
  • “The company estimates that the value of a relatively high Klout score adds up to more than $3 in word-of-mouth marketing value.”
  • “[T]he sum total for most individuals often is less than a dollar.”
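Summing the FT’s per-person price points quoted above (excluding the Klout figure, which is a marketing-value estimate rather than a sale price) shows how the “less than a dollar” total arises:

```python
# Per-person prices for personal data, as quoted from the FT above.
prices_usd = {
    "basic demographics (age, gender, location)": 0.0005,
    "auto buyer": 0.0021,
    "second-trimester pregnancy": 0.11,
    "specific health condition or prescription": 0.26,
}

total = sum(prices_usd.values())
print(f"${total:.4f} per person")  # well under a dollar
```

Even stacking several of the priciest attributes on one person leaves the total far below the value of a single barrel of oil.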

Data is a specific asset, meaning it has “a significantly higher value within a particular transacting relationship than outside the relationship.” We only think data is so valuable because tech companies are so valuable. In reality, it is the combination of high-skilled labor, large capital expenditures, and cutting-edge technologies (e.g., machine learning) that makes those companies so valuable. Yes, data is an important component of these production functions. But to claim that data is responsible for all the value created by these businesses, as Lanier does in his NYT op-ed, is farcical (and reminiscent of the labor theory of value). 


People who analogize data to oil or gold may merely be trying to convey that data is as valuable in the 21st century as those commodities were in the 20th century (though, as argued above, even that is a dubious proposition). If the comparison stopped there, it would be relatively harmless. But there is a real risk that policymakers might take the analogy literally and regulate data in the same way they regulate commodities. As this article shows, data has many unique properties that are simply incompatible with 20th-century modes of regulation.

A better — though imperfect — analogy, as author Bernard Marr suggests, would be renewable energy. The sources of renewable energy are all around us — solar, wind, hydroelectric — and there is more available than we could ever use. We just need the right incentives and technology to capture it. The same is true for data. We leave our digital fingerprints everywhere — we just need to dust for them.

Source: New York Magazine

When she rolled out her plan to break up Big Tech, Elizabeth Warren paid for ads (like the one shown above) claiming that “Facebook and Google account for 70% of all internet traffic.” This statistic has since been repeated in various forms by Rolling Stone, Vox, National Review, and Washingtonian. In my last post, I fact checked this claim and found it wanting.

Warren’s data

As supporting evidence, Warren cited a Newsweek article from 2017, which in turn cited a blog post from an open-source freelancer, who was aggregating data from a 2015 blog post published by Parse.ly, a web analytics company, which said: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.” At the time, Parse.ly had “around 400 publisher domains” in its network. To put it mildly, this is not what it means to “account for” or “control” or “directly influence” 70 percent of all internet traffic, as Warren and others have claimed.

Internet traffic measured in bytes

In an effort to contextualize how extreme Warren’s claim was, in my last post I used a common measure of internet traffic — total volume in bytes — to show that Google and Facebook account for less than 20 percent of global internet traffic. Some Warren defenders have correctly pointed out that measuring internet traffic in bytes will weight the results toward data-heavy services, such as video streaming. It’s not obvious a priori, however, whether this would bias the results in favor of Facebook and Google or against them, given that users stream lots of video using those companies’ sites and apps (hello, YouTube).

Internet traffic measured by time spent by users

As I said in my post, there are multiple ways to measure total internet traffic, and no one of them is likely to offer a perfect measure. So, to get a fuller picture, we could also look at how users are spending their time on the internet. While there is no single source for global internet time use statistics, we can combine a few to reach an estimate (NB: this analysis includes time spent in apps as well as on the web). 

According to the Global Digital report by Hootsuite and We Are Social, in 2018 there were 4.021 billion active internet users, and the worldwide average for time spent using the internet was 6 hours and 42 minutes per day. That means there were 1,616 billion internet user-minutes per day.

Data from Apptopia shows that, in the three months from May through July 2018, users spent 300 billion hours in Facebook-owned apps and 118 billion hours in Google-owned apps. In other words, all Facebook-owned apps consume, on average, 197 billion user-minutes per day and all Google-owned apps consume, on average, 78 billion user-minutes per day. And according to SimilarWeb data for the three months from June to August 2019, web users spent 11 billion user-minutes per day visiting Facebook domains (facebook.com, whatsapp.com, instagram.com, messenger.com) and 52 billion user-minutes per day visiting Google domains, including google.com (and all subdomains) and youtube.com.

If you add up all app and web user-minutes for Google and Facebook, the total is 338 billion user minutes per day. A staggering number. But as a share of all internet traffic (in this case measured in terms of time spent)? Google- and Facebook-owned sites and apps account for about 21 percent of user-minutes.
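The arithmetic above can be checked directly. Here is a sketch (treating each quoted “three months” as an average of 91.25 days, an assumption on our part):

```python
# Recompute the user-minute shares from the figures quoted above.
# Assumption: each three-month window is approximated as 91.25 days.

DAYS = 91.25

users = 4.021e9                  # active internet users in 2018
minutes_per_user = 6 * 60 + 42   # 6h42m of daily internet use
total_minutes = users * minutes_per_user       # ~1,616 billion per day

fb_apps = 300e9 * 60 / DAYS      # Facebook-owned apps: ~197B min/day
gg_apps = 118e9 * 60 / DAYS      # Google-owned apps:   ~78B min/day
fb_web = 11e9                    # Facebook web domains, min/day
gg_web = 52e9                    # Google web domains, min/day

share = (fb_apps + gg_apps + fb_web + gg_web) / total_minutes
print(f"{share:.1%}")  # about 21% of all internet user-minutes
```

However the day counts are rounded, the share lands around one-fifth of internet time, nowhere near 70 percent.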

Internet traffic measured by “connections”

In my last post, I cited a Sandvine study that measured total internet traffic by volume of upstream and downstream bytes. The same report also includes numbers for what Sandvine calls “connections,” which is defined as “the number of conversations occurring for an application.” Sandvine notes that while “some applications use a single connection for all traffic, others use many connections to transfer data or video to the end user.” For example, a video stream on Netflix uses a single connection, while every item on a webpage, such as loading images, may require a distinct connection.

Cam Cullen, Sandvine’s VP of marketing, also implored readers to “never forget Google connections include YouTube, Search, and DoubleClick — all of which are very noisy applications and universally consumed,” which would bias this statistic toward inflating Google’s share. With these caveats in mind, Sandvine’s data shows that Google is responsible for 30 percent of these connections, while Facebook is responsible for under 8 percent of connections. Note that Netflix’s share is less than 1 percent, which implies this statistic is not biased toward data-heavy services. Again, the numbers for Google and Facebook are a far cry from what Warren and others are claiming.

Source: Sandvine

Internet traffic measured by sources

I’m not sure whether either of these measures is preferable to what I offered in my original post, but each is at least a plausible measure of internet traffic — and all of them fall well short of Warren’s claimed 70 percent. What I do know is that the preferred metric offered by the people most critical of my post — external referrals to online publishers (content sites) — is decidedly not a plausible measure of internet traffic.

In defense of Warren, Jason Kint, the CEO of a trade association for digital content publishers, wrote, “I just checked actual benchmark data across our members (most publishers) and 67% of their external traffic comes through Google or Facebook.” Rand Fishkin cites his own analysis of data from Jumpshot showing that 66.0 percent of external referral visits were sent by Google and 5.1 percent were sent by Facebook.

In another response to my piece, former digital advertising executive Dina Srinivasan said, “[Percentage] of referrals is relevant because it is pointing out that two companies control a large [percentage] of business that comes through their door.”

In my opinion, equating “external referrals to publishers” with “internet traffic” is unacceptable for at least two reasons.

First, the internet is much broader than traditional content publishers — it encompasses everything from email and Yelp to TikTok, Amazon, and Netflix. The relevant market is consumer attention and, in that sense, every internet supplier is bidding for scarce time. In a recent investor letter, Netflix said, “We compete with (and lose to) ‘Fortnite’ more than HBO,” adding: “There are thousands of competitors in this highly fragmented market vying to entertain consumers and low barriers to entry for those great experiences.” Previously, CEO Reed Hastings had only half-jokingly said, “We’re competing with sleep on the margin.” In this debate over internet traffic, the opposing side fails to grasp the scope of the internet market. It is unsurprising, then, that the one metric that does best at capturing attention — time spent — yields roughly the same share as measuring traffic in bytes.

Second, and perhaps more important, even if we limit our analysis to publisher traffic, the external referral statistic these critics cite completely (and conveniently?) omits direct and internal traffic — traffic that represents the majority of publisher traffic. In fact, according to Parse.ly’s most recent data, which now includes more than 3,000 “high-traffic sites,” only 35 percent of total traffic comes from search and social referrers (as the graph below shows). Of course, Google and Facebook drive the majority of search and social referrals. But given that most users visit webpages without being referred at all, Google and Facebook are responsible for less than a third of total traffic.

Source: Parse.ly

It is simply incorrect to say, as Srinivasan does, that external referrals offers a useful measurement of internet traffic because it captures a “large [percentage] of business that comes through [publishers’] door.” Well, “large” is relative, but the implication that these external referrals from Facebook and Google explain Warren’s 70%-of-internet-traffic claim is both factually incorrect and horribly misleading — especially in an antitrust context. 

It is factually incorrect because, at most, Google and Facebook are responsible for a third of the traffic on these sites; it is misleading because if our concern is ensuring that users can reach content sites without passing through Google or Facebook, the evidence is clear that they can and do — at least twice as often as they follow links from Google or Facebook to do so.


As my colleague Gus Hurwitz said, Warren is making a very specific and very alarming claim: 

There may be ‘softer’ versions of [Warren’s claim] that are reasonably correct (e.g., digital ad revenue, visibility into traffic). But for 99% of people hearing (and reporting on) these claims, they hear the hard version of the claim: Google and Facebook control 70% of what you do online. That claim is wrong, alarmist, misinformation, intended to foment fear, uncertainty, and doubt — to bootstrap the argument that ‘everything is terrible, worse, really!, and I’m here to save you.’ This is classic propaganda.

Google and Facebook do account for a 59 percent (and declining) share of US digital advertising. But that’s not what Warren said (nor would anyone try to claim with a straight face that “volume of advertising” was the same thing as “internet traffic”). And if our concern is with competition, it’s hard to look at the advertising market and conclude that it’s got a competition problem. Prices are falling like crazy (down 42 percent in the last decade), and volume is only increasing. If you add in offline advertising (which, whatever you think about market definition here, certainly competes with online advertising at the very least on some dimensions), Google and Facebook are responsible for only about 32 percent.

In her comments criticizing my article, Dina Srinivasan mentioned another of these “softer” versions:

Also, each time a publisher page loads, what [percentage] then queries Google or Facebook servers during the page loads? About 98+% of every page load. That stat is not even in Warren or your analysis. That is 1000% relevant.

It’s true that Google and Facebook have visibility into a great deal of all internet traffic (beyond their own) through a variety of products and services: browsers, content delivery networks (CDNs), web beacons, cloud computing, VPNs, data brokers, single sign-on (SSO), and web analytics services. But seeing internet traffic is not the same thing as “account[ing] for” — or controlling or even directly influencing — internet traffic. The former is a very different claim from the latter, and one with considerably more attenuated competitive relevance (if any). It certainly wouldn’t be a sufficient basis for advocating that Google and Facebook be broken up — which is probably why, although arguably accurate, it’s not the statistic upon which Warren based her proposal to do so.

In March of this year, Elizabeth Warren announced her proposal to break up Big Tech in a blog post on Medium. She tried to paint the tech giants as dominant players crushing their smaller competitors and strangling the open internet. This line in particular stood out: “More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook.”

This statistic immediately struck me as outlandish, but I knew I would need to do some digging to fact check it. After seeing the claim repeated in a recent profile of the Open Markets Institute — “Google and Facebook control websites that receive 70 percent of all internet traffic” — I decided to track down the original source for this surprising finding. 

Warren’s blog post links to a November 2017 Newsweek article — “Who Controls the Internet? Facebook and Google Dominance Could Cause the ‘Death of the Web’” — written by Anthony Cuthbertson. The piece is even more alarmist than Warren’s blog post: “Facebook and Google now have direct influence over nearly three quarters of all internet traffic, prompting warnings that the end of a free and open web is imminent.”

The Newsweek article, in turn, cites an October 2017 blog post by André Staltz, an open source freelancer, on his personal website titled “The Web began dying in 2014, here’s how”. His takeaway is equally dire: “It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.” Staltz claims the blog post took “months of research to write”, but the headline statistic is merely aggregated from a December 2015 blog post by Parse.ly, a web analytics and content optimization software company.

Source: André Staltz

The Parse.ly article — “Facebook Continues to Beat Google in Sending Traffic to Top Publishers” — is about external referrals (i.e., outside links) to publisher sites (not total internet traffic) and says the “data set used for this study included around 400 publisher domains.” This is not even a random sample, much less a comprehensive measure of total internet traffic. Here’s how they summarize their results: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.” Together, those two shares add up to 73 percent of referral traffic to those sites, which appears to be the ultimate source of the “70%+” figure.

Source: Parse.ly

So, using the sources provided by the respective authors, the claim from Elizabeth Warren that “more than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook” can be more accurately rewritten as “more than 70 percent of external links to 400 publishers come from sites owned or operated by Google and Facebook.” When framed that way, it’s much less conclusive (and much less scary).

But what’s the real statistic for total internet traffic? This is a surprisingly difficult question to answer, because there is no single way to measure it: Are we talking about share of users, or user-minutes, or bits, or total visits, or unique visits, or referrals? According to Wikipedia, “Common measurements of traffic are total volume, in units of multiples of the byte, or as transmission rates in bytes per certain time units.”

One of the more comprehensive efforts to answer this question is undertaken annually by Sandvine. The networking equipment company uses its vast installed footprint of equipment across the internet to generate statistics on connections, upstream traffic, downstream traffic, and total internet traffic (summarized in the table below). This dataset covers both browser-based and app-based internet traffic, which is crucial for capturing the full picture of internet user behavior.

Source: Sandvine

Looking at two categories of traffic analyzed by Sandvine — downstream traffic and overall traffic — gives the lie to the narrative pushed by Warren and others. As you can see in the chart below, HTTP media streaming — a category for smaller streaming services that Sandvine has not yet tracked individually — represented 12.8% of global downstream traffic and Netflix accounted for 12.6%. According to Sandvine, “the aggregate volume of the long tail is actually greater than the largest of the short-tail providers.” So much for the open internet being smothered by the tech giants.

Source: Sandvine

As for Google and Facebook? The report found that Google-operated sites receive 12.00 percent of total internet traffic while Facebook-controlled sites receive 7.79 percent. In other words, less than 20 percent of all Internet traffic goes through sites owned or operated by Google or Facebook. While this statistic may be less eye-popping than the one trumpeted by Warren and other antitrust activists, it does have the virtue of being true.

Source: Sandvine
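
The gap between the two statistics is just arithmetic. A quick sketch (using only the Parse.ly and Sandvine figures quoted above; the variable names are my own) makes the contrast explicit:

```python
# Parse.ly: shares of *external referrals* to ~400 publisher sites
facebook_referral_share = 0.39
google_referral_share = 0.34
combined_referral_share = facebook_referral_share + google_referral_share
print(f"Share of referrals to ~400 publishers: {combined_referral_share:.0%}")

# Sandvine: shares of *total* internet traffic
google_traffic_share = 0.1200
facebook_traffic_share = 0.0779
combined_traffic_share = google_traffic_share + facebook_traffic_share
print(f"Share of total internet traffic: {combined_traffic_share:.2%}")
```

The first number (73 percent of referrals to a small publisher sample) is what got laundered into the “70% of all Internet traffic” claim; the second (under 20 percent of total traffic) is the measure the claim actually purports to describe.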

On March 19-20, 2020, the University of Nebraska College of Law will host its third annual roundtable on closing the digital divide. UNL is expanding its program this year to include a one-day roundtable that focuses on the work of academics and researchers who are conducting empirical studies of the rural digital divide.

Academics and researchers interested in having their work featured in this event are now invited to submit pieces for consideration. Submissions are due by November 18, 2019, via this form. The authors of papers and projects selected for inclusion will be notified by December 9, 2019. Research honoraria of up to $5,000 may be awarded for selected projects.

Example topics include cost studies of rural wireless deployments, comparative studies of the effects of ACAM funding, event studies of legislative interventions such as allowing customers unserved by carriers in their home exchange to request service from carriers in adjoining exchanges, comparative studies of the effectiveness of various federal and state funding mechanisms, and cost studies of different sorts of municipal deployments. This list is far from exhaustive.

Any questions about this event or the request for projects can be directed to Gus Hurwitz at ghurwitz@unl.edu or Elsbeth Magilton at elsbeth@unl.edu.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest-margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization (the stacking of two successive markups when an upstream firm and a downstream firm each price above cost), with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But the facts may not fit the theory.
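
A stylized numerical example (the standard linear-demand textbook setup, with numbers of my own choosing, not drawn from the PepsiCo record) shows why eliminating double marginalization is attractive in theory:

```python
# Illustrative double-marginalization example: linear demand P = a - Q,
# constant marginal cost c. All values are hypothetical.
a, c = 10.0, 2.0  # demand intercept and unit cost

# Vertically integrated firm: choose Q to maximize (a - Q - c) * Q
q_int = (a - c) / 2
p_int = a - q_int
profit_int = (p_int - c) * q_int

# Separate firms: upstream sets wholesale price w; downstream marks up again
w = (a + c) / 2           # upstream's profit-maximizing wholesale price
q_sep = (a - w) / 2       # downstream's response: a second markup
p_sep = a - q_sep
profit_sep = (w - c) * q_sep + (p_sep - w) * q_sep

print(p_int, p_sep)            # integration yields the lower retail price
print(profit_int, profit_sep)  # and the higher combined profit
```

With these numbers, the integrated firm charges 6 and earns 16, while the two separate firms stack markups, charge 8, and jointly earn only 12 — which is why the folklore is theoretically plausible, even if, as argued below, it doesn’t explain PepsiCo’s actual motives.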

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurant strategy was a failure, it seems odd that the company would continue making acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged-buyout era added financial pressure. Many restaurant groups were filing for bankruptcy, and competition intensified among fast-food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

These days, lacking a coherent legal theory presents no challenge to the would-be antitrust crusader. In a previous post, we noted how Shaoul Sussman’s predatory pricing claims against Amazon lacked a serious legal foundation. Sussman has returned with a new post, trying to build out his fledgling theory, but it fares little better under even casual scrutiny.

According to Sussman, Amazon’s allegedly anticompetitive 

conduct not only cemented its role as the primary destination for consumers that shop online but also helped it solidify its power over brands.

Further, the company 

was willing to go to great lengths to ensure brand availability and inventory, including turning to the grey market, recruiting unauthorized sellers, and even selling diverted goods and counterfeits to its customers.

Sussman is trying to make out a fairly convoluted predatory pricing case, but once again without ever truly connecting the dots in a way that develops a cognizable antitrust claim. According to Sussman: 

Amazon sold products as a first-party to consumers on its platform at below average variable cost and [] Amazon recently began to recoup its losses by shifting the bulk of the transactions that occur on the website to its marketplace, where millions of third-party sellers pay hefty fees that enable Amazon to take a deep cut of every transaction.

Sussman now bases this claim on an allegation that Amazon relied on “grey market” sellers on its platform, the presence of which forces legitimate brands onto the Amazon Marketplace. Moreover, Sussman claims that — somehow — these brands coming on board on Amazon’s terms forces them to raise prices elsewhere, and that the net effect of this process at scale is that prices across the economy have risen.

As we detail below, Sussman’s chimerical argument depends on conflating unrelated concepts and relies on non-public anecdotal accounts to piece together an argument that, even if you squint at it, doesn’t make out a viable theory of harm.

Conflating legal reselling and illegal counterfeit selling as the “grey market”

The biggest problem with Sussman’s new theory is that he conflates pro-consumer unauthorized reselling and anti-consumer illegal counterfeiting, erroneously labeling both the “grey market”: 

Amazon had an ace up its sleeve. My sources indicate that the company deliberately turned to and empowered the “grey market” — where both genuine, authentic goods and knockoffs are purchased and resold outside of brands’ intended distribution pipes — to dominate certain brands.

By definition, grey market goods are — as the link provided by Sussman states — “goods sold outside the authorized distribution channels by entities which may have no relationship with the producer of the goods.” Yet Sussman suggests this also encompasses counterfeit goods. This conflation is no minor problem for his argument. In general, the grey market is legal and beneficial for consumers. Brands such as Nike may try to limit the distribution of their products to channels the company controls, but they cannot legally prevent third parties from purchasing Nike products and reselling them on Amazon (or anywhere else).

This legal activity can increase consumer choice and can lead to lower prices, even though Sussman’s framing omits these key possibilities:

In the course of my conversations with former Amazon employees, some reported that Amazon actively sought out and recruited unauthorized sellers as both third-party sellers and first-party suppliers. Being unauthorized, these sellers were not bound by the brands’ policies and therefore outside the scope of their supervision.

In other words, Amazon actively courted third-party sellers who could bring legitimate goods, priced competitively, onto its platform. Perhaps this gives Amazon “leverage” over brands that would otherwise like to control the activities of legal resellers, but it’s exceedingly strange to try to frame this as nefarious or anticompetitive behavior.

Of course, we shouldn’t ignore the fact that there are also potential consumer gains when Amazon tries to restrict grey market activity by partnering with brands. But it is up to Amazon and the brands to determine through a contracting process when it makes the most sense to partner and control the grey market, or when consumers are better served by allowing unauthorized resellers. The point is: there is simply no reason to assume that either of these approaches is inherently problematic. 

Yet, even when Amazon tries to restrict its platform to authorized resellers, it exposes itself to a whole different set of complaints. In 2018, the company made a deal with Apple to bring the iPhone maker onto its marketplace platform. In exchange for Apple selling its products directly on Amazon, the latter agreed to remove unauthorized Apple resellers from the platform. Sussman portrays this as a welcome development in line with the policy changes he recommends. 

But news reports last month indicate the FTC is reviewing this deal for potential antitrust violations. One is reminded of Ronald Coase’s famous lament that he “had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down they said it was predatory pricing, and when they stayed the same they said it was tacit collusion.” It seems the same is true for Amazon and its relationship with the grey market.

Amazon’s incentive to remove counterfeits

What is illegal — and explicitly against Amazon’s marketplace rules  — is selling counterfeit goods. Counterfeit goods destroy consumer trust in the Amazon ecosystem, which is why the company actively polices its listings for abuses. And as Sussman himself notes, when there is an illegal counterfeit listing, “Brands can then file a trademark infringement lawsuit against the unauthorized seller in order to force Amazon to suspend it.”

Sussman’s attempt to hang counterfeiting problems around Amazon’s neck belies the actual truth about counterfeiting: probably the most cost-effective way to stop counterfeiting is simply to prohibit all third-party sellers. Yet, a serious cost-benefit analysis of Amazon’s platforms could hardly support such an action (and would harm the small sellers that antitrust activists seem most concerned about).

But, more to the point, if Amazon’s strategy is to encourage piracy, it’s doing a terrible job. It engages in litigation against known pirates, and earlier this year it rolled out a suite of tools (called Project Zero) meant to help brand owners report and remove known counterfeits. As part of this program, according to Amazon, “brands provide key data points about themselves (e.g., trademarks, logos, etc.) and we scan over 5 billion daily listing update attempts, looking for suspected counterfeits.” And when a brand identifies a counterfeit listing, they can remove it using a self-service tool (without needing approval from Amazon). 

Any large platform that tries to make it easy for independent retailers to reach customers is going to run into a counterfeit problem eventually. In his rush to discover some theory of predatory pricing to stick on Amazon, Sussman ignores the tradeoffs implicit in running a large platform that essentially democratizes retail:

Indeed, the democratizing effect of online platforms (and of technology writ large) should not be underestimated. While many are quick to disparage Amazon’s effect on local communities, these arguments fail to recognize that by reducing the costs associated with physical distance between sellers and consumers, e-commerce enables even the smallest merchant on Main Street, and the entrepreneur in her garage, to compete in the global marketplace.

In short, Amazon Marketplace is designed to make it as easy as possible for anyone to sell their products to Amazon customers. As the WSJ reported:

Counterfeiters, though, have been able to exploit Amazon’s drive to increase the site’s selection and offer lower prices. The company has made the process to list products on its website simple—sellers can register with little more than a business name, email and address, phone number, credit card, ID and bank account—but that also has allowed impostors to create ersatz versions of hot-selling items, according to small brands and seller consultants.

The existence of counterfeits is a direct result of policies designed to lower prices and increase consumer choice. Thus, we would expect some number of counterfeits to exist as a result of running a relatively open platform. The question is not whether counterfeits exist, but — at least in terms of Sussman’s attempt to use antitrust law — whether there is any reason to think that Amazon’s conduct with respect to counterfeits is actually anticompetitive. But, even if we assume for the moment that there is some plausible way to draw a competition claim out of the existence of counterfeit goods on the platform, his theory still falls apart. 

There is both theoretical and empirical evidence for why Amazon is likely not engaged in the conduct Sussman describes. As a platform owner involved in a repeated game with customers, sellers, and developers, Amazon has an incentive to increase trust within the ecosystem. Counterfeit goods directly destroy that trust and likely decrease sales in the long run. If individuals can’t depend on the quality of goods on Amazon, they can easily defect to Walmart, eBay, or any number of smaller independent sellers. That’s why Amazon enters into agreements with companies like Apple to ensure there are only legitimate products offered. That’s also why Amazon actively sues counterfeiters in partnership with its sellers and brands, and also why Project Zero is a priority for the company.

Sussman relies on private, anecdotal claims while engaging in speculation that is entirely unsupported by public data 

Much of Sussman’s evidence is “[b]ased on conversations [he] held with former employees, sellers, and brands following the publication of [his] paper”, which — to put it mildly — makes it difficult for anyone to take seriously, let alone address head on. Here’s one example:

One third-party seller, who asked to remain anonymous, was willing to turn over his books for inspection in order to illustrate the magnitude of the increase in consumer prices. Together, we analyzed a single product, of which tens of thousands of units have been sold since 2015. The minimum advertised price for this single product, at any and all outlets, has increased more than 30 percent in the past four years. Despite this fact, this seller’s margins on this product are tighter than ever due to Amazon’s fee increases.

Needless to say, sales data showing the minimum advertised price for a single product “has increased more than 30 percent in the past four years” is not sufficient to prove, well, anything. At minimum, showing an increase in prices above costs would require data from a large and representative sample of sellers. All we have to go on from the article is a vague anecdote representing — maybe — one data point.

Not only is Sussman’s own data impossible to evaluate, but he bases his allegations on speculation that is demonstrably false. For instance, he asserts that Amazon used its leverage over brands in a way that caused retail prices to rise throughout the economy. But his starting point assumption is flatly contradicted by reality: 

To remedy this, Amazon once again exploited brands’ MAP policies. As mentioned, MAP policies effectively dictate the minimum advertised price of a given product across the entire retail industry. Traditionally, this meant that the price of a typical product in a brick and mortar store would be lower than the price online, where consumers are charged an additional shipping fee at checkout.

Sussman presents no evidence for the claim that “the price of a typical product in a brick and mortar store would be lower than the price online.” The widespread phenomenon of showrooming — when a customer examines a product at a brick-and-mortar store but then buys it for a lower price online — belies the notion that prices are higher online. One recent study by Nielsen found that “nearly 75% of grocery shoppers have used a physical store to ‘showroom’ before purchasing online.”

In fact, the company’s downward pressure on prices is so large that researchers now speculate that Amazon and other internet retailers are partially responsible for the low and stagnant inflation in the US over the last decade (dubbing this the “Amazon effect”). It is also curious that Sussman cites shipping fees as the reason prices are higher online while ignoring all the overhead costs of running a brick-and-mortar store which online retailers don’t incur. The assumption that prices are lower in brick-and-mortar stores doesn’t pass the laugh test.


Sussman can keep trying to tell a predatory pricing story about Amazon, but the more convoluted his theories get — and the less based in empirical reality they are — the less convincing they become. There is a predatory pricing law on the books, but it’s hard to bring a case because, as it turns out, it’s actually really hard to profitably operate as a predatory pricer. Speculating over complicated new theories might be entertaining, but it would be dangerous and irresponsible if these sorts of poorly supported theories were incorporated into public policy.

The FTC’s recent YouTube settlement and $170 million fine, related to charges that YouTube violated the Children’s Online Privacy Protection Act (COPPA), have put the issue of targeted advertising back in the news. With an upcoming FTC workshop and COPPA Rule Review looming, it’s worth looking at this case in more detail and reconsidering COPPA’s 2013 amendment to the definition of personal information.

According to the complaint issued by the FTC and the New York Attorney General, YouTube violated COPPA by collecting personal information of children on its platform without obtaining parental consent. While the headlines scream that this is an egregious violation of privacy and parental rights, a closer look suggests that there is actually very little about the case that normal people would find to be all that troubling. Instead, it appears to be another in the current spate of elitist technopanics.

COPPA defines personal information to include persistent identifiers, like cookies, used for targeted advertising. These cookies allow site operators to have some idea of what kinds of websites a user may have visited previously. Having knowledge of users’ browsing history allows companies to advertise more effectively than is possible with contextual advertisements, which guess at users’ interests based upon the type of content being viewed at the time. The age-old problem for advertisers is that “half the money spent on advertising is wasted; the trouble is they don’t know which half.” While this isn’t completely solved by the use of targeted advertising based on web browsing and search history, the fact that such advertising is more lucrative than contextual advertisements suggests that it works better for companies.

COPPA, since the 2013 update, states that persistent identifiers are personal information by themselves, even if not linked to any other information that could be used to actually identify children (i.e., anyone under 13 years old). 

As a consequence of this rule, YouTube doesn’t allow children under 13 to create an account. Instead, YouTube created a separate mobile application called YouTube Kids with curated content targeted at younger users. That application serves only contextual advertisements that do not rely on cookies or other persistent identifiers, but the content available on YouTube Kids also remains available on YouTube. 

YouTube’s error, in the eyes of the FTC, was that the site left it to channel owners on YouTube’s general audience site to determine whether to monetize their content through targeted advertising or to opt out and use only contextual advertisements. Turns out, many of those channels — including channels identified by the FTC as “directed to children” — made the more lucrative choice by choosing to have targeted advertisements on their channels. 

Whether YouTube’s practices violate the letter of COPPA or not, a more fundamental question remains unanswered: What is the harm, exactly?

COPPA takes for granted that it is harmful for kids to receive targeted advertisements, even where, as here, the targeting is based not on any knowledge about the users as individuals, but upon the browsing and search history of the device they happen to be on. But children under 13 are extremely unlikely to have purchased the devices they use, to pay for the Internet access those devices rely on, or to have any disposable income or means of paying for goods and services online. Which makes one wonder: at whom are these advertisements, though served to children, actually targeted? The answer is obvious to everyone but the FTC and those who support the COPPA Rule: the children’s parents.

Television programs aimed at children have long been supported by contextual advertisements for cereal and toys. Tony the Tiger and Lucky the Leprechaun were staples of Saturday morning cartoons when I was growing up, along with all kinds of Hot Wheels commercials. As I soon discovered as a kid, I had the ability to ask my parents to buy these things, but ultimately no ability to buy them on my own. In other words: Parental oversight is essentially built-in to any type of advertisement children see, in the sense that few children can realistically make their own purchases or even view those advertisements without their parents giving them a device and internet access to do so.

When broken down like this, it is much harder to see the harm. It’s one thing to create regulatory schemes to prevent stalkers, creepers, and perverts from using online information to interact with children. It’s quite another to greatly reduce the ability of children’s content to generate revenue by use of relatively anonymous persistent identifiers like cookies — and thus, almost certainly, to greatly reduce the amount of content actually made for and offered to children.

On the one hand, COPPA thus disregards the possibility that controls that take advantage of parental oversight may be the most cost-effective form of protection in such circumstances. As Geoffrey Manne noted regarding the FTC’s analogous complaint against Amazon under the FTC Act, which ignored the possibility that Amazon’s in-app purchasing scheme was tailored to take advantage of parental oversight in order to avoid imposing excessive and needless costs:

[For the FTC], the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible….

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges….

On the other hand, enforcement of COPPA against targeted advertising on kids' content will have perverse and self-defeating consequences. As Berin Szoka notes:

This settlement will cut advertising revenue for creators of child-directed content by more than half. This will give content creators a perverse incentive to mislabel their content. COPPA was supposed to empower parents, but the FTC’s new approach actually makes life harder for parents and cripples functionality even when they want it. In short, artists, content creators, and parents will all lose, and it is not at all clear that this will do anything to meaningfully protect children.

This war against targeted advertising aimed at children has a cost. While many cheer the fine levied against YouTube (or think it wasn’t high enough) and the promised changes to its platform (though the dissenting Commissioners didn’t think those went far enough, either), the actual result will be less content — and especially less free content — available to children. 

Far from being a win for parents and children, the shift in oversight responsibility from parents to the FTC will likely lead to less-effective oversight, more difficult user interfaces, less children’s programming, and higher costs for everyone — all without obviously mitigating any harm in the first place.

Ursula von der Leyen has just announced the composition of the next European Commission. For tech firms, the headline is that Margrethe Vestager will not only retain her job as the head of DG Competition, she will also oversee the EU’s entire digital markets policy in her new role as Vice-President in charge of digital policy. Her promotion within the Commission as well as her track record at DG Competition both suggest that the digital economy will continue to be the fulcrum of European competition and regulatory intervention for the next five years.

The regulation (or not) of digital markets is an extremely important topic. Not only do we spend vast swaths of both our professional and personal lives online, but firms operating in digital markets will likely employ an ever-increasing share of the labor force in the near future.

Likely recognizing the growing importance of the digital economy, the previous EU Commission intervened heavily in the digital sphere over the past five years. This resulted in a series of high-profile regulations (including the GDPR, the platform-to-business regulation, and the reform of EU copyright) and competition law decisions (most notably the Google cases). 

Lauded by supporters of the administrative state, these interventions have drawn flak from numerous corners. This includes foreign politicians (especially Americans) who see in these measures an attempt to protect the EU's tech industry from its foreign rivals, as well as free-market enthusiasts who argue that the old continent has moved further in the direction of digital paternalism. 

Vestager’s increased role within the new Commission, the EU’s heavy regulation of digital markets over the past five years, and early pronouncements from Ursula von der Leyen all suggest that the EU is in for five more years of significant government intervention in the digital sphere.

Vestager the slayer of Big Tech

During her five years as Commissioner for Competition, Margrethe Vestager has repeatedly been called the most powerful woman in Brussels (see here and here), and it is easy to see why. Wielding the heavy hammer of European competition and state aid enforcement, she has relentlessly attacked the world's largest firms, especially America's so-called "Tech Giants". 

The record-breaking fines imposed on Google were probably her most high-profile victory. When Vestager entered office, in 2014, the EU’s case against Google had all but stalled. The Commission and Google had spent the best part of four years haggling over a potential remedy that was ultimately thrown out. Grabbing the bull by the horns, Margrethe Vestager made the case her own. 

Five years, three infringement decisions, and 8.25 billion euros later, Google probably wishes it had managed to keep the 2014 settlement alive. While Vestager’s supporters claim that justice was served, Barack Obama and Donald Trump, among others, branded her a protectionist (although, as Geoffrey Manne and I have noted, the evidence for this is decidedly mixed). Critics also argued that her decisions would harm innovation and penalize consumers (see here and here). Regardless, the case propelled Vestager into the public eye. It turned her into one of the most important political forces in Brussels. Cynics might even suggest that this was her plan all along.

But Google is not the only tech firm to have squared off with Vestager. Under her watch, Qualcomm was slapped with a total of €1.239 billion in fines. The Commission also opened an investigation into Amazon's operation of its online marketplace. If previous cases are anything to go by, the probe will most probably end with a headline-grabbing fine. The Commission has even begun probing Facebook's planned Libra cryptocurrency, even though the currency has yet to launch, and recent talk suggests it may never. Finally, in the area of state aid enforcement, the Commission ordered Ireland to recover €13 billion in allegedly undue tax benefits from Apple.

Margrethe Vestager also initiated a large-scale consultation on competition in the digital economy. The ensuing report concluded that the answer was more competition enforcement. Its findings will likely be cited by the Commission as further justification to ramp up its already significant competition investigations in the digital sphere.

Outside of the tech sector, Vestager has shown that she is not afraid to adopt controversial decisions. Blocking the proposed merger between Siemens and Alstom notably drew the ire of Angela Merkel and Emmanuel Macron, as the deal would have created a European champion in the rail industry (a key political demand in Germany and France). 

These numerous interventions all but guarantee that Vestager will not be pushing for light-touch regulation in her new role as Vice-President in charge of digital policy. Vestager is also unlikely to put a halt to some of the "Big Tech" investigations that she herself launched during her previous spell at DG Competition. Finally, given her evident political capital in Brussels, it's a safe bet that she will be given significant leeway to push forward landmark initiatives of her choosing. 

Vestager the prophet

Beneath these attempts to rein in "Big Tech" lies a deeper agenda that is symptomatic of the EU's current zeitgeist. Over the past couple of years, the EU has been steadily blazing a trail in digital market regulation (although much less so in digital market entrepreneurship and innovation). Underlying this push is a worldview that sees consumers and small startups as the uninformed victims of gigantic tech firms. True to form, the EU's solution to this problem is more regulation and government intervention. This is unlikely to change given the Commission's new (old) leadership.

If digital paternalism is the dogma, then Margrethe Vestager is its prophet. As Thibault Schrepel has shown, her speeches routinely call for digital firms to act “fairly”, and for policymakers to curb their “power”. According to her, it is our democracy that is at stake. In her own words, “you can’t sensibly talk about democracy today, without appreciating the enormous power of digital technology”. And yet, if history tells us one thing, it is that heavy-handed government intervention is anathema to liberal democracy. 

The Commission’s Google decisions neatly illustrate this worldview. For instance, in Google Shopping, the Commission concluded that Google was coercing consumers into using its own services, to the detriment of competition. But the Google Shopping decision focused entirely on competitors, and offered no evidence showing actual harm to consumers (see here). Could it be that users choose Google’s products because they actually prefer them? Rightly or wrongly, the Commission went to great lengths to dismiss evidence that arguably pointed in this direction (see here, §506-538).

Other European forays into the digital space are similarly paternalistic. The General Data Protection Regulation (GDPR) assumes that consumers are ill-equipped to decide what personal information they share with online platforms. Cue a deluge of time-consuming consent forms and cookie-related pop-ups. The jury is still out on whether the GDPR has improved users' privacy. But it has been extremely costly for businesses — American S&P 500 companies and UK FTSE 350 companies alone spent an estimated total of $9 billion to comply with the GDPR — and has at least temporarily slowed venture capital investment in Europe. 

Likewise, the recently adopted Regulation on platform-to-business relations operates under the assumption that small firms routinely fall prey to powerful digital platforms: 

Given that increasing dependence, the providers of those services [i.e. digital platforms] often have superior bargaining power, which enables them to, in effect, behave unilaterally in a way that can be unfair and that can be harmful to the legitimate interests of their businesses users and, indirectly, also of consumers in the Union. For instance, they might unilaterally impose on business users practices which grossly deviate from good commercial conduct, or are contrary to good faith and fair dealing. 

But the platform-to-business Regulation conveniently overlooks the fact that economic opportunism is a two-way street. Small startups are equally capable of behaving in ways that greatly harm the reputation and profitability of much larger platforms. The Cambridge Analytica leak springs to mind. And what's "unfair" to one small business may offer massive benefits to other businesses and consumers.

Whatever you make of the underlying merits of these individual policies, we should at least recognize that they are part of a greater whole, in which Brussels is regulating ever-greater aspects of our online lives — and not clearly for the benefit of consumers. 

With Margrethe Vestager now overseeing even more of these regulatory initiatives, readers should expect more of the same. The Mission Letter she received from Ursula von der Leyen is particularly enlightening in that respect: 

I want you to coordinate the work on upgrading our liability and safety rules for digital platforms, services and products as part of a new Digital Services Act…. 

I want you to focus on strengthening competition enforcement in all sectors. 

A hard rain's a-gonna fall… on Big Tech

Today’s announcements all but confirm that the EU will stay its current course in digital markets. This is unfortunate.

Digital firms currently provide consumers with tremendous benefits at no direct charge. A recent study shows that median users would need to be paid €15,875 to give up search engines for a year. They would also require €536 in order to forgo WhatsApp for a month, €97 for Facebook, and €59 to drop digital maps for the same duration. 

By continuing to heap ever more regulations on successful firms, the EU risks killing the goose that laid the golden egg. This is not just a theoretical possibility. The EU's policies have already put technology firms under huge stress, and it is not clear that this has always been outweighed by benefits to consumers. The GDPR has notably caused numerous foreign firms to stop offering their services in Europe. And the EU's Google decisions have forced Google to start charging device manufacturers for some of its apps. Are these really victories for European consumers?

It is also worth asking why there are so few European leaders in the digital economy. Not so long ago, European firms such as Nokia and Ericsson were at the forefront of the digital revolution. Today, with the possible exception of Spotify, the EU has fallen further down the global pecking order in the digital economy. 

The EU knows this, and plans to invest €100 billion in order to boost European tech startups. But these sums will be all but wasted if excessive regulation threatens the long-term competitiveness of European startups. 

So if more of the same government intervention isn’t the answer, then what is? Recognizing that consumers have agency and are responsible for their own decisions might be a start. If you don’t like Facebook, close your account. Want a search engine that protects your privacy? Try DuckDuckGo. If YouTube and Spotify’s suggestions don’t appeal to you, create your own playlists and turn off the autoplay functions. The digital world has given us more choice than we could ever have dreamt of; but this comes with responsibility. Both Margrethe Vestager and the European institutions have often seemed oblivious to this reality. 

If the EU wants to turn itself into a digital economy powerhouse, it will have to switch towards light-touch regulation that allows firms to experiment with disruptive services, flexible employment options, and novel monetization strategies. But getting there requires a fundamental rethink — one that the EU’s previous leadership refused to contemplate. Margrethe Vestager’s dual role within the next Commission suggests that change isn’t coming any time soon.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are supported by sympathetic emotional appeals through personal anecdotes. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of "Big Data" gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued Midwest farmers from shortages that would have decimated their businesses, but also invested heavily to ensure that production would continue to increase to meet future demand. 

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market, which is to discover slack or scarce resources in the system and manage them in a way that they will be available for utilization when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy. 

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to Midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem Hayek identified: how best to make use of knowledge dispersed throughout society so as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to "planning" per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history”  — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but of a firm that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and nimbly respond.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.   

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).  

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law
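The idea can be made concrete with a toy model. In the sketch below, the payoffs are hypothetical numbers of my own devising (not drawn from the brief or from Baird, Gertner & Picker), chosen so that each firm type has a different payoff-maximizing strategy. When that is true, an uninformed observer (a court, say) can infer a firm's type from the action it observes; when both types would choose the same action, no such inference is possible:

```python
# Toy signaling model of a "separating equilibrium".
# Payoffs are illustrative assumptions only: the procompetitive type profits
# most from competing on price, while the anticompetitive type profits most
# from excluding rivals. Because their best responses differ, the observed
# action reveals the firm's type.

PAYOFFS = {
    # (firm_type, action) -> payoff to the firm
    ("procompetitive", "cut_prices"): 10,      # efficiency gains
    ("procompetitive", "exclude_rivals"): 2,
    ("anticompetitive", "cut_prices"): 3,
    ("anticompetitive", "exclude_rivals"): 8,  # monopoly rents
}

ACTIONS = ["cut_prices", "exclude_rivals"]


def best_action(firm_type):
    """Each type plays its payoff-maximizing strategy."""
    return max(ACTIONS, key=lambda a: PAYOFFS[(firm_type, a)])


def is_separating(types):
    """The equilibrium separates if no two types pool on the same action."""
    chosen = [best_action(t) for t in types]
    return len(set(chosen)) == len(types)


types = ["procompetitive", "anticompetitive"]
print({t: best_action(t) for t in types})
print(is_separating(types))  # True: observing the action identifies the type
```

Under these assumed payoffs the equilibrium separates, so inferring type from conduct is safe; the brief's point is that much complained-of conduct (such as price cutting) is instead "pooling" — both procompetitive and anticompetitive firms would do it — which is why inference from harm to competitors is so error-prone.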

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and "mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect." (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986)). 

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant.

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its holding in Microsoft in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In short, what the Court requires is that the defendant exhibit behavior that, but for the expectation of future anticompetitive returns, is irrational.

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

In order to infer anticompetitive effect, it’s not enough that a firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty, because it can in no way be inferred from the evasion of that obligation that conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko and linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
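The first of these channels can be sketched with a toy pass-through calculation. Everything in the sketch is hypothetical—the prices, the surcharge, the 50% pass-through rate, and the linear percentage-change approximation—but it shows why the magnitude of the effect on chip demand turns on demand elasticity rather than following automatically from “basic economics”:

```python
# Toy illustration (hypothetical numbers): effect of a per-unit royalty
# "surcharge" on phone (and thus chip) sales under different consumer
# demand elasticities, assuming OEMs pass part of the cost through.

def sales_change(surcharge, phone_price, pass_through, elasticity):
    """Approximate % change in units sold: %dQ = -elasticity * %dP,
    where %dP is the passed-through surcharge as a share of price."""
    pct_price_change = pass_through * surcharge / phone_price
    return -elasticity * pct_price_change

phone_price = 500.0   # hypothetical handset price
surcharge = 10.0      # hypothetical per-unit surcharge
pass_through = 0.5    # OEMs absorb half, pass half to consumers

for elasticity in (0.2, 1.0, 3.0):
    dq = sales_change(surcharge, phone_price, pass_through, elasticity)
    print(f"elasticity={elasticity}: quantity change = {dq:.2%}")
```

With highly inelastic demand (elasticity 0.2), a $10 surcharge half-passed-through on a $500 phone moves unit sales by only two-tenths of a percent—illustrating why, absent an empirical estimate of elasticity and pass-through, the size of any foreclosure effect cannot simply be asserted.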

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. And demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings.

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 


Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri

[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]

Samsung SGH-F480V – controller board – Qualcomm MSM6280

In his latest book, Tyler Cowen calls big business an “American anti-hero.” Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less well known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two EU cases; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout the document, which runs to more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said this much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this was contrary to Federal Circuit law. Instead, it held that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the Court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is quite another to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   
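The arithmetic behind this point is simple, and a toy calculation makes it concrete. All the numbers below are hypothetical: the same chip at the same price goes into devices of very different values, and the comparison shows how a single rate applied to the device price yields value-contingent royalties, while the same rate applied to the component yields a flat fee regardless of market success:

```python
# Hypothetical illustration: royalty revenue per unit under a device-price
# base versus a component-price base, for the same chip and the same rate.

chip_price = 20.0  # the identical modem chip in every device (assumed)
device_prices = {
    "budget phone": 150.0,
    "flagship phone": 1000.0,
    "tablet": 400.0,
}
rate = 0.03  # hypothetical 3% royalty rate

for device, price in device_prices.items():
    device_base = rate * price        # tracks the end-device's value
    component_base = rate * chip_price  # flat across all segments
    print(f"{device}: device base = ${device_base:.2f}, "
          f"component base = ${component_base:.2f}")
```

Under the component base, every segment pays the same $0.60 per unit; under the device base, the royalty scales with the value of the product the technology helps create—which is exactly the success-contingent payment structure the Bousquet analysis describes, achieved without segment-by-segment rate negotiation.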

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Accordingly, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
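The perfect-complements logic above can be made concrete with a toy calculation (the device value and the two-component decomposition are entirely hypothetical): when a product delivers value only if every essential component is present, each component's incremental contribution equals the full value of the device, so contributions cannot simply be apportioned by component cost shares.

```python
# Toy model of perfect complements (hypothetical values): a handset is
# worth something only if it has BOTH its non-communication components
# (display, camera, storage, design) AND a working modem.

def handset_value(has_components, has_modem, full_value=100.0):
    """Value is all-or-nothing: missing either complement yields zero."""
    return full_value if (has_components and has_modem) else 0.0

whole = handset_value(True, True)
incremental_modem = whole - handset_value(True, False)
incremental_components = whole - handset_value(False, True)

print(whole, incremental_modem, incremental_components)
```

In this stylized case the incremental contributions sum to twice the device's value—each complement is "worth" the whole—which is why reasoning from the observation that non-communication components "contribute 60-70% of the phone value" to a ceiling on the modem technology's contribution is a fallacy.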

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price of a technology makes that price almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.