Archives For antitrust

Will the merger between T-Mobile and Sprint make consumers better or worse off? A central question in the review of this merger—as it is in all merger reviews—is the likely effects that the transaction will have on consumers. In this post, we look at one study that opponents of the merger have been using to support their claim that the merger will harm consumers.

Along with my earlier posts on data problems and public policy (1, 2, 3, 4, 5), this provides an opportunity to explore why seemingly compelling studies can be used to muddy the discussion and fool observers into seeing something that isn’t there.

This merger—between the third and fourth largest mobile wireless providers in the United States—has been characterized as a “4-to-3” merger, on the grounds that it will reduce the number of large, ostensibly national carriers from four to three. This, in turn, has led to concerns that further concentration in the wireless telecommunications industry will harm consumers. Specifically, some opponents of the merger claim that “it’s going to be hard for someone to make a persuasive case that reducing four firms to three is actually going to improve competition for the benefit of American consumers.”

A number of previous wireless telecommunications mergers around the world have also been characterized as 4-to-3 mergers. Several econometric studies have attempted to evaluate the welfare effects of 4-to-3 mergers in other countries, as well as the effects of market concentration in the wireless industry more generally. These studies have been used by both proponents and opponents of the proposed T-Mobile/Sprint merger to support their respective contentions that the merger will benefit or harm consumer welfare.

One particular study has risen to prominence among opponents of 4-to-3 mergers in telecom in general and the T-Mobile/Sprint merger in particular. This is worrying because the study has several fundamental flaws.

This study, by Finnish consultancy Rewheel, has been cited by, among others, Phillip Berenbroick of Public Knowledge, who, in Senate testimony, asserted that “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.”

The Rewheel report upon which Mr. Berenbroick relied is, however, marred by a number of significant flaws that undermine its usefulness.

The Rewheel report

Rewheel’s report purports to analyze the state of 4G pricing across 41 countries that are either members of the EU or the OECD or both. The report’s conclusions are based mainly on two measures:

  1. Estimates of the maximum number of gigabytes available under each plan for a specific hypothetical monthly price, ranging from €5 to €80 a month. In other words, for each plan, Rewheel asks, “How many 4G gigabytes would X euros buy?” Rewheel then ranks countries by the median number of gigabytes available at each hypothetical price across all the plans surveyed in each country.
  2. Estimates of what Rewheel describes as “fully allocated gigabyte prices.” This is the monthly retail price (including VAT) divided by the number of gigabytes included in each plan. Rewheel then ranks countries by the median price per gigabyte across all the plans surveyed in each country.

Rewheel’s convoluted calculations

Rewheel’s use of the country median across all plans is problematic. In particular, it gives all plans equal weight, regardless of how many consumers actually use each plan. For example, a plan targeted at a consumer with a “high” level of usage is counted alongside a plan targeted at a consumer with a “low” level of usage. Even though a “high” user would not purchase a “low” plan (which would be relatively expensive for a “high” user), all plans are included, thereby skewing the median estimates upward.
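A small numerical sketch shows how an unweighted median across heterogeneous plans can diverge from what typical consumers actually pay. The plan prices, data allotments, and subscriber counts below are all invented for illustration:

```python
import statistics

# Hypothetical plans in one country: (monthly price in EUR, included GB).
plans = [
    (10, 1),    # low-usage plan: EUR 10.00 per GB
    (15, 3),    # EUR 5.00 per GB
    (30, 20),   # EUR 1.50 per GB
    (40, 50),   # high-usage plan: EUR 0.80 per GB
]

# Rewheel-style "fully allocated gigabyte price": price / included GB,
# with every plan weighted equally regardless of who actually buys it.
per_gb = [price / gb for price, gb in plans]
unweighted_median = statistics.median(per_gb)  # 3.25

# If most subscribers sit on the two cheaper-per-GB plans, a
# usage-weighted median looks very different.
subscribers = [100, 200, 2000, 1500]  # hypothetical subscriber counts
pairs = sorted(zip(per_gb, subscribers))
total = sum(subscribers)
running = 0
for price, n in pairs:
    running += n
    if running >= total / 2:
        weighted_median = price  # per-GB price facing the median subscriber
        break

print(unweighted_median, weighted_median)
```

Here the unweighted median (€3.25/GB) more than doubles the price faced by the median subscriber (€1.50/GB), which is the kind of upward skew the equal-weighting approach invites.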

But even if that approach made sense as a way of measuring consumers’ willingness to pay, in execution Rewheel’s analysis contains the following key defects:

  • The Rewheel report is essentially limited to quantity effects alone (i.e., how many gigabytes are available under each plan for a given hypothetical price) or price effects alone (i.e., the price per included gigabyte for each plan). These measures can mislead the analysis by missing, among other things, innovation and quality effects.
  • Rewheel’s analysis is not based on an impartial assessment of relevant price data. Rather, it is based on hypothetical measures. Such comparisons say nothing about the plans actually chosen by consumers or the actual prices paid by consumers in those countries, rendering Rewheel’s comparisons virtually meaningless. As Affeldt & Nitsche (2014) note in their assessment of the effects of concentration in mobile telecom markets:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr (when tracking prices over time, see rtr (2014)). Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

  • The Rewheel report bases its comparison on dissimilar service levels by not taking into account, for instance, relevant features like comparable network capacity, service security, and, perhaps most important, overall quality of service.

Rewheel’s unsupported conclusions

Rewheel uses its analysis to come to some strong conclusions, such as the conclusion, on the first page of its report, declaring that the median gigabyte price in countries with three carriers is twice as high as in countries with four carriers.

The figure below is a revised version of the figure on the first page of Rewheel’s report. The yellow blocks (gray dots) show the range of prices in countries with three carriers; the blue blocks (pink dots) show the range of prices in countries with four carriers. The darker blocks show the overlap of the two. The figure makes clear that there is substantial overlap in pricing among three- and four-carrier countries. Thus, it is not obvious that three-carrier countries have significantly higher prices (as measured by Rewheel) than four-carrier countries.

Source: Rewheel

A simple “eyeballing” of the data can lead to incorrect conclusions; statistical analysis can provide more certainty (or, at least, some measure of the uncertainty). Yet Rewheel provides no statistical analysis of its calculations, such as measures of statistical significance. The information on page 5 of the Rewheel report can, however, be used to perform some rudimentary statistical analysis.

I took the information from the columns for hypothetical monthly prices of €30 and €50 a month and converted the data into a price per gigabyte to generate the dependent variable. Following Rewheel’s assumption, “unlimited” is converted to 250 gigabytes per month. Greece was dropped from the analysis because Rewheel indicates that no data are available at either hypothetical price level.

My rudimentary statistical analysis includes the following independent variables:

  • Number of carriers (or mobile network operators, MNOs) reported by Rewheel in each country, ranging from three to five. Israel is the only country with five MNOs.
  • A dummy variable for EU28 countries. Rewheel performs a separate analysis for EU28 countries, suggesting it considers this an important distinction.
  • GDP per capita for each country, adjusted for purchasing power parity. Several articles in the literature suggest higher GDP countries would be expected to have higher wireless prices.
  • Population density, measured by persons per square kilometer. Several articles in the literature argue that countries with lower population density would have higher costs of providing wireless service which would, in turn, be reflected in higher prices.
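The specification described above is straightforward to reproduce. Below is a minimal ordinary-least-squares sketch; the data are synthetic stand-ins for Rewheel’s page-5 figures (which are not reproduced here), so only the structure of the regression, not the numbers, should be taken from it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # roughly the number of countries in the sample

# Synthetic stand-ins for the covariates (NOT Rewheel's actual data):
mnos = rng.integers(3, 6, size=n).astype(float)  # carriers per country, 3-5
eu28 = rng.integers(0, 2, size=n).astype(float)  # EU28 dummy
gdp_pc = rng.uniform(15, 60, size=n)             # GDP per capita (PPP, thousands)
density = rng.uniform(3, 500, size=n)            # persons per square km

# Simulated price series with no true MNO effect, mirroring the finding.
price_per_gb = 2.0 + 0.02 * gdp_pc + rng.normal(0, 1.0, size=n)

# OLS of price per GB on a constant, MNO count, EU28 dummy, GDP, density.
X = np.column_stack([np.ones(n), mnos, eu28, gdp_pc, density])
beta, *_ = np.linalg.lstsq(X, price_per_gb, rcond=None)

# Conventional standard errors and the t-statistic on the MNO coefficient.
resid = price_per_gb - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_mno = beta[1] / se[1]  # |t| below roughly 2: not significant at 5%
```

With the real data, one would also report p-values (from the t-distribution with n − k degrees of freedom) and the R² discussed below.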

The tables below confirm what an eyeballing of the figure suggests: in Rewheel’s data, the number of MNOs in a country has no statistically significant relationship with price per gigabyte, at either the €30-a-month level or the €50-a-month level.

[Table: regression results]

While the signs on the MNO coefficient are negative (i.e., more carriers in a country is associated with lower prices), they are not statistically significantly different from zero at any of the traditional levels of statistical significance.

Also, the regressions suffer from relatively low measures of goodness-of-fit. The independent variables in the regression explain approximately five percent of the variation in the price per gigabyte. This is likely because of the cockamamie way Rewheel measures price, but is also due to the known problems with performing cross-sectional analysis of wireless pricing, as noted by Csorba & Pápai (2015):

Many regulatory policies are based on a comparison of prices between European countries, but these simple cross-sectional analyses can lead to misleading conclusions because of at least two reasons. First, the price difference between countries of n and (n + 1) active mobile operators can be due to other factors, and the analyst can never be sure of having solved the omitted variable bias problem. Second and more importantly, the effect of an additional operator estimated from a cross-sectional comparison cannot be equated with the effect of an actual entry that might have a long-lasting effect on a single market.

The Rewheel report cannot be relied upon in assessing consumer benefits or harm associated with the T-Mobile/Sprint merger, or any other merger

Rewheel apparently has a rich dataset of wireless pricing plans. Nevertheless, the analyses presented in its report are fundamentally flawed. Moreover, Rewheel’s conclusions regarding three- vs. four-carrier countries are not only baseless, but clearly unsupported by closer inspection of the information presented in its report. The Rewheel report cannot be relied upon to inform regulatory oversight of the T-Mobile/Sprint merger or any other. This study is not unique, and it should serve as a caution to be wary of studies that merely eyeball information.

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The first post, by Luke Froeb, Michael Doane & Mikhael Shor, is here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner Cable. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
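Under standard assumptions, the Nash solution gives each party its outside option (disagreement payoff) plus a fixed share of the net surplus created by agreement, so a better outside option mechanically yields a larger payoff. A minimal sketch with symmetric bargaining weights, using invented numbers:

```python
def nash_split(total_surplus, outside_a, outside_b, weight_a=0.5):
    """Nash bargaining division with outside options.

    Each party receives its outside option plus a share (its bargaining
    weight) of the net surplus that agreement creates over disagreement.
    """
    net = total_surplus - outside_a - outside_b
    assert net >= 0, "agreement must beat disagreement for a deal to occur"
    payoff_a = outside_a + weight_a * net
    payoff_b = outside_b + (1 - weight_a) * net
    return payoff_a, payoff_b

# As A's outside option improves, A captures a larger share of the pie
# even though the total surplus is unchanged.
low = nash_split(100, outside_a=10, outside_b=20)   # (45.0, 55.0)
high = nash_split(100, outside_a=40, outside_b=20)  # (60.0, 40.0)
```

This is the mechanism behind the leverage claims in both cases: anything that improves one side’s fallback position shifts the predicted split in its favor.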

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
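Those three inputs combine multiplicatively: the integrated firm’s expected gain from a blackout, and hence its added bargaining leverage, is roughly subscribers lost × diversion rate × margin per diverted subscriber. A back-of-the-envelope sketch with invented figures (not the litigation inputs):

```python
def blackout_leverage(subs_lost, diversion_rate, margin_per_sub):
    """Approximate gain to the integrated firm from a rival's blackout.

    subs_lost: subscribers the rival distributor would lose
    diversion_rate: fraction of those subscribers who switch to the
        integrated firm
    margin_per_sub: profit per switched subscriber
    """
    return subs_lost * diversion_rate * margin_per_sub

# Hypothetical inputs for illustration only:
gain = blackout_leverage(subs_lost=1_000_000, diversion_rate=0.12,
                         margin_per_sub=50.0)  # 6,000,000
```

Because the inputs multiply, estimation errors compound: halving the assumed diversion rate halves the predicted leverage, which is why Judge Leon’s input-by-input scrutiny mattered so much to the result.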

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L. J. 693 (2000)

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases. Judge Leon closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder. As complex economic evidence like bargaining models become more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications including horizontal mergers involving a bargaining component – such as hospital mergers, vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

Near the end of her new proposal to break up Facebook, Google, Amazon, and Apple, Senator Warren asks, “So what would the Internet look like after all these reforms?”

It’s a good question, because, as she herself notes, “Twenty-five years ago, Facebook, Google, and Amazon didn’t exist. Now they are among the most valuable and well-known companies in the world.”

To Warren, our most dynamic and innovative companies constitute a problem that needs solving.

She described the details of that solution in a blog post:

First, [my administration would restore competition to the tech sector] by passing legislation that requires large tech platforms to be designated as “Platform Utilities” and broken apart from any participant on that platform.

* * *

For smaller companies…, their platform utilities would be required to meet the same standard of fair, reasonable, and nondiscriminatory dealing with users, but would not be required to structurally separate….

* * *
Second, my administration would appoint regulators committed to reversing illegal and anti-competitive tech mergers….
I will appoint regulators who are committed to… unwind[ing] anti-competitive mergers, including:

– Amazon: Whole Foods; Zappos;
– Facebook: WhatsApp; Instagram;
– Google: Waze; Nest; DoubleClick

Elizabeth Warren’s brave new world

Let’s consider for a moment what this brave new world will look like — not the nirvana imagined by regulators and legislators who believe that decimating a company’s business model will deter only the “bad” aspects of the model while preserving the “good,” as if by magic, but the inevitable reality of antitrust populism.  

Utilities? Are you kidding? For an overview of what the future of tech would look like under Warren’s “Platform Utility” policy, take a look at your water, electricity, and sewage service. Have you noticed any improvement (or reduction in cost) in those services over the past 10 or 15 years? How about the roads? Amtrak? Platform businesses operating under a similar regulatory regime would also similarly stagnate. Enforcing platform “neutrality” necessarily requires meddling in the most minute of business decisions, inevitably creating unintended and costly consequences along the way.

Network companies, like all businesses, differentiate themselves by offering unique bundles of services to customers. By definition, this means vertically integrating with some product markets and not others. Why are digital assistants like Siri bundled into mobile operating systems? Why aren’t the vast majority of third-party apps also bundled into the OS? If you want utilities regulators instead of Google or Apple engineers and designers making these decisions on the margin, then Warren’s “Platform Utility” policy is the way to go.

Grocery Stores. To take one specific case cited by Warren, how much innovation was there in the grocery store industry before Amazon bought Whole Foods? Since the acquisition, large grocery retailers, like Walmart and Kroger, have increased their investment in online services to better compete with the e-commerce champion. Many industry analysts expect grocery stores to use computer vision technology and artificial intelligence to improve the efficiency of check-out in the near future.

Smartphones. Imagine how forced neutrality would play out in the context of iPhones. If Apple can’t sell its own apps, it also can’t pre-install its own apps. A brand new iPhone with no apps — and even more importantly, no App Store — would be, well, just a phone, out of the box. How would users even access a site or app store from which to download independent apps? Would Apple be allowed to pre-install someone else’s apps? That’s discriminatory, too. Maybe it will be forced to offer a menu of all available apps in all categories (like the famously useless browser ballot screen demanded by the European Commission in its Microsoft antitrust case)? It’s hard to see how that benefits consumers — or even app developers.

Source: Free Software Magazine

Internet Search. Or take search. Calls for “search neutrality” have been bandied about for years. But most proponents of search neutrality fail to recognize that all Google’s search results entail bias in favor of its own offerings. As Geoff Manne and Josh Wright noted in 2011 at the height of the search neutrality debate:

[S]earch engines offer up results in the form not only of typical text results, but also maps, travel information, product pages, books, social media and more. To the extent that alleged bias turns on a search engine favoring its own maps, for example, over another firm’s, the allegation fails to appreciate that text results and maps are variants of the same thing, and efforts to restrain a search engine from offering its own maps is no different than preventing it from offering its own search results.

Never mind that forced non-discrimination likely means Google offering only the antiquated “ten blue links” search results page it started with in 1998, instead of the far more useful “rich” results it offers today; logically, it would also mean Google somehow offering the set of links produced by any and all other search engines’ algorithms in lieu of its own. If you think Google will continue to invest in and maintain the wealth of services it offers today on the strength of the profits derived from those search results, well, Elizabeth Warren is probably already your favorite politician.

Source: Web Design Museum  

And regulatory oversight of algorithmic content won’t just result in an impoverished digital experience; it will inevitably lead to an authoritarian one, as well:

Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access…. This sort of control is deeply problematic… [because it saddles users] with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Digital Assistants. Consider also the veritable cage match among the tech giants to offer “digital assistants” and “smart home” devices with ever-more features at ever-lower prices. Today the allegedly non-existent competition among these companies is played out most visibly in this multi-featured market, comprising advanced devices tightly integrated with artificial intelligence, voice recognition, advanced algorithms, and a host of services. Under Warren’s nondiscrimination principle this market disappears. Each device can offer only a connectivity platform (if such a service is even permitted to be bundled with a physical device…) — and nothing more.

But such a world entails not only the end of an entire, promising avenue of consumer-benefiting innovation, it also entails the end of a promising avenue of consumer-benefiting competition. It beggars belief that anyone thinks consumers would benefit by forcing technology companies into their own silos, ensuring that the most powerful sources of competition for each other are confined to their own fiefdoms by order of law.

Breaking business models

Beyond the product-feature dimension, Sen. Warren’s proposal would be devastating for innovative business models. Why is Amazon Prime Video bundled with free shipping? Because the marginal cost of distribution for video is close to zero and bundling it with Amazon Prime increases the value proposition for customers. Why is almost every Google service free to users? Because Google’s business model is supported by ads, not monthly subscription fees. Each of the tech giants has carefully constructed an ecosystem in which every component reinforces the others. Sen. Warren’s plan would not only break up the companies, it would prohibit their business models — the ones that both created and continue to sustain these products. Such an outcome would manifestly harm consumers.

Both of Warren’s policy “solutions” are misguided and will lead to higher prices and less innovation. Her cause for alarm is built on a multitude of mistaken assumptions, but let’s address just a few (Warren in bold):

  • “Nearly half of all e-commerce goes through Amazon.” Yes, but it has only 5% of total retail in the United States. As my colleague Kristian Stout says, “the Internet is not a market; it’s a distribution channel.”
  • “Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate.” The real story, as the founders of Diapers.com freely admitted, is that they sold diapers as what they hoped would be a loss leader, intending to build out sales of other products once they had a base of loyal customers:

And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

Like all entrepreneurs, Diapers.com’s founders took a calculated risk that didn’t pay off as hoped. Amazon subsequently acquired the company (after it had declined a similar buyout offer from Walmart). (Antitrust laws protect consumers, not inefficient competitors). And no, this was not a case of predatory pricing. After many years of trying to make the business profitable as a subsidiary, Amazon shut it down in 2017.

  • “In the 1990s, Microsoft — the tech giant of its time — was trying to parlay its dominance in computer operating systems into dominance in the new area of web browsing. The federal government sued Microsoft for violating anti-monopoly laws and eventually reached a settlement. The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge.” The government’s settlement with Microsoft is not the reason Google and Facebook were able to emerge. Neither company entered the browser market at launch. Instead, they leapfrogged the browser entirely and created new platforms for the web (only later did Google create Chrome).

    Furthermore, if the Microsoft case is responsible for “clearing a path” for Google, is it not also responsible for clearing a path for Google’s alleged depredations? If the answer is that antitrust enforcement should be consistently more aggressive in order to rein in Google, too, when it gets out of line, then how can we be sure that that same more-aggressive enforcement standard wouldn’t have curtailed the extent of the Microsoft ecosystem in which it was profitable for Google to become Google? Warren implicitly assumes that only the enforcement decision in Microsoft was relevant to Google’s rise. But Microsoft doesn’t exist in a vacuum. If Microsoft cleared a path for Google, so did every decision not to intervene, which, all combined, created the legal, business, and economic environment in which Google operates.

Warren characterizes Big Tech as a weight on the American economy. In fact, nothing could be further from the truth. These superstar companies are the drivers of productivity growth, all ranking at or near the top for most spending on research and development. And while data may not be the new oil, extracting value from it may require similar levels of capital expenditure. Last year, Big Tech spent as much or more on capex as the world’s largest oil companies:

Source: WSJ

Warren also faults Big Tech for a decline in startups, saying,

The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012.

But this trend predates the existence of the companies she criticizes, as this chart from Quartz shows:

The exact causes of the decline in business dynamism are still uncertain, but recent research points to a much more mundane explanation: demographics. Labor force growth has been declining, which has led to an increase in average firm age and left fewer workers to start their own businesses.

Furthermore, it’s not at all clear whether this is actually a decline in business dynamism, or merely a change in business model. We would expect to see the same pattern, for example, if would-be startup founders were designing their software for acquisition and further development within larger, better-funded enterprises.

Will Rinehart recently looked at the literature to determine whether there is indeed a “kill zone” for startups around Big Tech incumbents. One paper finds that “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.” Another shows an inverse correlation across 50 countries between GDP and entrepreneurship rates. Robert Lucas predicted these trends back in 1978, pointing out that productivity increases would lead to wage increases, pushing marginal entrepreneurs out of startups and into big companies.

It’s notable that many in the venture capital community would rather not have Sen. Warren’s “help”:

Arguably, it is also simply getting harder to innovate. As economists Nick Bloom, Chad Jones, John Van Reenen and Michael Webb argue,

just to sustain constant growth in GDP per person, the U.S. must double the amount of research effort searching for new ideas every 13 years to offset the increased difficulty of finding new ideas.

If this assessment is correct, it may well be that coming up with productive and profitable innovations is simply becoming more expensive, and thus, at the margin, each dollar of venture capital can fund less of it. Ironically, this also implies that larger firms, which can better afford the additional resources required to sustain exponential growth, are a crucial part of the solution, not the problem.
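As a rough check on that figure, doubling every 13 years pins down the implied constant annual growth rate of research effort. This back-of-the-envelope calculation is mine, not from Bloom et al.'s paper:

```python
# If research effort must double every 13 years, the implied constant
# annual growth rate g satisfies (1 + g) ** 13 == 2.
doubling_period = 13  # years, per Bloom et al.
g = 2 ** (1 / doubling_period) - 1
print(f"implied annual growth in research effort: {g:.1%}")  # ~5.5% per year
```

In other words, the research workforce and budget must grow about 5.5 percent every year merely to keep per-capita GDP growth constant.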

Warren believes that Big Tech is the cause of our social ills. But Americans have more trust in Amazon, Facebook, and Google than in the political institutions that would break them up. It would be wise for her to reflect on why that might be the case. By punishing our most valuable companies for past successes, Warren would chill competition and decrease returns to innovation.

Finally, in what can only be described as tragic irony, the most prominent political figure who shares Warren’s feelings on Big Tech is President Trump. Confirming the horseshoe theory of politics, far-left populism and far-right populism seem less distinguishable by the day. As our colleague Gus Hurwitz put it, with this proposal Warren is explicitly endorsing the unitary executive theory and implicitly endorsing Trump’s authority to direct his DOJ to “investigate specific cases and reach specific outcomes.” Which cases will he want to have investigated and what outcomes will he be seeking? More good questions that Senator Warren should be asking. The notion that competition, consumer welfare, and growth are likely to increase in such an environment is farcical.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California.

This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship at the Owen Graduate School of Management at Vanderbilt University; former chief economist at the Antitrust Division of the US Department of Justice and the Federal Trade Commission), Michael Doane (Competition Economics, LLC) & Mikhael Shor (Associate Professor of Economics, University of Connecticut).]

[Froeb, Doane & Shor: This post does not attempt to answer the question of what the court should decide in FTC v. Qualcomm because we do not have access to the information that would allow us to make such a determination. Rather, we focus on economic issues confronting the court by drawing heavily from our writings in this area: Gregory Werden & Luke Froeb, Why Patent Hold-Up Does Not Violate Antitrust Law; Luke Froeb & Mikhael Shor, Innovators, Implementors and Two-sided Hold-up; Bernard Ganglmair, Luke Froeb & Gregory Werden, Patent Hold Up and Antitrust: How a Well-Intentioned Rule Could Retard Innovation.]

Not everything is “hold-up”

It is not uncommon—in fact it is expected—that parties to a negotiation would have different opinions about the reasonableness of any deal. Every buyer asks for a price as low as possible, and sellers naturally request prices at which buyers (feign to) balk. A recent movement among some lawyers and economists has been to label such disagreements in the context of standard-essential patents not as a natural part of bargaining, but as dispositive proof of “hold-up,” or the innovator’s purported abuse of newly gained market power to extort implementers. We have four primary issues with this hold-up fad.

First, such claims of “hold-up” are trotted out whenever an innovator’s royalty request offends the commentator’s sensibilities, and usually with reference to a theoretical hold-up possibility rather than any matter-specific evidence that hold-up is actually present. Second, as we have argued elsewhere, such arguments usually ignore the fact that implementers of innovations often possess significant countervailing power to “hold-out” as well. This is especially true as implementers have successfully pushed to curtail injunctive relief in standard-essential patent cases. Third, as Greg Werden and Froeb have recently argued, it is not clear why patent hold-up—even where it might exist—need implicate antitrust law rather than be adequately handled as a contractual dispute. Lastly, it is certainly not the case that every disagreement over the value of an innovation is an exercise in hold-up, as even economists and lawyers have not reached anything resembling a consensus on the correct interpretation of a “fair” royalty.

At the heart of this case (and many recent cases) is (1) an indictment of Qualcomm’s desire to charge royalties to makers of consumer devices based on the value of its technology and (2) a lack (to the best of our knowledge from public documents) of well-vetted theoretical models that can provide the underpinning for the theory of the case. We discuss these in turn.

The smallest component “principle”

In arguing that “Qualcomm’s royalties are disproportionately high relative to the value contributed by its patented inventions,” (Complaint, ¶ 77) a key issue is whether Qualcomm can calculate royalties as a percentage of the price of a device, rather than a small percentage of the price of a chip. (Complaint, ¶¶ 61-76).

So what is wrong with basing a royalty on the price of the final product? A fixed portion of the price is not a perfect proxy for the value of embedded intellectual property, but it is a reasonable first approximation, much like retailers use fixed markups for products rather than optimizing the price of each SKU when the cost of individual determinations negates any benefit of doing so. The FTC’s main issue appears to be that the price of a smartphone reflects “many features in addition to the cellular connectivity and associated voice and text capabilities provided by early feature phones.” (Complaint, ¶ 26). This completely misses the point. What would the value of an iPhone be if it contained all of those “many features” but without the phone’s communication abilities? We have some idea, as Apple has for years marketed its iPod Touch for a quarter of the price of its iPhone line. Yet, “[f]or most users, the choice between an iPhone 5s and an iPod touch will be a no-brainer: Being always connected is one of the key reasons anyone owns a smartphone.”

What the FTC and proponents of the smallest component principle miss is that some of the value of all components of a smartphone is derived directly from the phone’s communication ability. Smartphones didn’t initially replace small portable cameras because they were better at photography (in fact, smartphone cameras were, and often continue to be, much worse than dedicated cameras). The value of a smartphone camera is that it combines picture taking with immediate sharing over text or through social media. Thus, contrary to the FTC’s claim that most of the value of a smartphone comes from non-communication features, many features on a smartphone derive much of their value from the communication powers of the phone.

In the alternative, what the FTC wants is for the royalty not to reflect the value of the intellectual property but instead to be a small portion of the cost of some chipset—akin to an author of a paperback negotiating royalties based on the cost of plain white paper. As a matter of economics, a single chipset royalty cannot allow an innovator to capture the value of its innovation. This, in turn, implies that innovators underinvest in future technologies. As we have previously written:

For example, imagine that the same component (incorporating the same essential patent) is used to help stabilize flight of both commercial airplanes and toy airplanes. Clearly, these industries are likely to have different values for the patent. By negotiating over a single royalty rate based on the component price, the innovator would either fail to realize the added value of its patent to commercial airlines, or (in the case that the component is targeted primarily to the commercial airlines) would not realize the incremental market potential from the patent’s use in toy airplanes. In either case, the innovator will not be negotiating over the entirety of the value it creates, leading to too little innovation.
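The quoted example can be made concrete with a toy calculation. All of the numbers below are hypothetical, chosen only to illustrate why a single component-based royalty leaves value on the table:

```python
# Two downstream markets place very different values on the same
# patented component (all figures hypothetical).
markets = {
    "commercial airplanes": {"value_per_unit": 1000.0, "units": 100},
    "toy airplanes":        {"value_per_unit": 2.0,    "units": 10_000},
}

def revenue_at_uniform_royalty(royalty):
    """Innovator revenue at a single per-unit royalty: a market licenses
    only if the royalty does not exceed the value the patent adds there."""
    return sum(royalty * m["units"]
               for m in markets.values()
               if royalty <= m["value_per_unit"])

# Price for the high-value market, and the toy makers drop out...
high_rate = revenue_at_uniform_royalty(1000.0)   # 100,000
# ...or price for the low-value market, and forgo most commercial value.
low_rate = revenue_at_uniform_royalty(2.0)       # 20,200

# Value-based (differentiated) royalties capture both markets.
differentiated = sum(m["value_per_unit"] * m["units"]
                     for m in markets.values())  # 120,000
```

Under either uniform rate the innovator negotiates over less than the full value it creates, which is the under-investment result the authors describe.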

The role of economics

Modern antitrust practice is to use economic models to explain how one gets from the evidence presented in a case to an anticompetitive conclusion. As Froeb, et al. have discussed, by laying out a mapping from the evidence to the effects, the legal argument is made clear, and gains credibility because it becomes falsifiable. The FTC complaint hypothesizes that “Qualcomm has excluded competitors and harmed competition through a set of interrelated policies and practices.” (Complaint, ¶ 3). Although Qualcomm explains how each of these policies and practices, by themselves, have clear business justifications, the FTC claims that combining them leads to an anticompetitive outcome.

Without providing a formal mapping from the evidence to an effect, it becomes much more difficult for a court to determine whether the theory of harm is correct or how to weigh the evidence that feeds the conclusion. Without a model telling it “what matters, why it matters, and how much it matters,” it is much more difficult for a tribunal to evaluate the “interrelated policies and practices.” In previous work, we have modeled the bilateral bargaining between patentees and licensees and have shown that when bilateral patent contracts are subject to review by an antitrust court, bargaining in the shadow of such a court can reduce the incentive to invest and thereby reduce welfare.

Concluding policy thoughts

What the FTC makes sound nefarious seems like a simple policy: requiring companies to seek licenses to Qualcomm’s intellectual property independent of any hardware that those companies purchase, and basing the royalty of that intellectual property on (an admittedly crude measure of) the value the IP contributes to that product. High prices alone do not constitute harm to competition. The FTC must clearly explain why its complaint is not simply about the “fairness” of the outcome or its desire that Qualcomm employ different bargaining paradigms, but rather how Qualcomm’s behavior harms the process of competition.

In the late 1950s, Nobel Laureate Robert Solow attributed about seven-eighths of the growth in U.S. GDP to technical progress. As Solow later commented: “Adding a couple of tenths of a percentage point to the growth rate is an achievement that eventually dwarfs in welfare significance any of the standard goals of economic policy.” While he did not have antitrust in mind, the import of his comment is clear: whatever static gains antitrust litigation may achieve, they are likely dwarfed by the dynamic gains represented by innovation.

Patent law is designed to maintain a careful balance between the costs of short-term static losses and the benefits of long-term gains that result from new technology. The FTC should present a sound theoretical or empirical basis for believing that the proposed relief sufficiently rewards inventors and allows them to capture a reasonable share of the whole value their innovations bring to consumers, lest such antitrust intervention deter investments in innovation.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.

Continue Reading...

The US Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights recently held hearings to see what, if anything, the U.S. might learn from the approaches of other countries regarding antitrust and consumer protection. US lawmakers would do well to be wary of examples from other jurisdictions, however, that are rooted in different legal and cultural traditions. Shortly before the hearing, for example, the Australian Competition and Consumer Commission (ACCC) announced that it was exploring broad new regulations, predicated on theoretical harms, that would threaten both consumer welfare and individuals’ rights to free expression, in ways completely at odds with American norms.

The ACCC seeks vast discretion to shape the way that online platforms operate — a regulatory venture that threatens to undermine the value which companies provide to consumers. Even more troubling are its plans to regulate free expression on the Internet, which, if implemented in the US, would contravene Americans’ First Amendment guarantees to free speech.

The ACCC’s errors are fundamental, starting with the contradictory assertion that:

Australian law does not prohibit a business from possessing significant market power or using its efficiencies or skills to “out compete” its rivals. But when their dominant position is at risk of creating competitive or consumer harm, governments should stay ahead of the game and act to protect consumers and businesses through regulation.

The ACCC recognizes, then, that businesses may work to beat out their rivals and thereby gain market share. However, this is immediately followed by the caveat that the state may prevent such activity when such market gains are merely “at risk” of coming at the expense of consumers or business rivals. Thus, the ACCC does not need to show that harm has been done, merely that it might take place — even if the products and services being provided otherwise benefit the public.

The ACCC report then uses this fundamental error as the basis for recommending content regulation of digital platforms like Facebook and Google (who have apparently been identified by Australia’s clairvoyant PreCrime Antitrust unit as being guilty of future violations). It argues that the lack of transparency and oversight in the algorithms these companies employ could result in a range of possible social and economic damages, despite the fact that consumers continue to rely on these products. These potential issues include prioritization of the content and products of the host company, under-serving of ads within their products, and creation of “filter bubbles” that conceal content from particular users thereby limiting their full range of choice.

The focus of these concerns is the kind and quality of information that users are receiving as a result of the “media market” that results from the “ranking and display of news and journalistic content.” As a remedy for its hypothesized concerns, the ACCC has proposed a new regulatory authority tasked with overseeing the operation of the platforms’ algorithms. The ACCC claims this would ensure that search and newsfeed results are balanced and of high quality. This policy would undermine consumer welfare in pursuit of remedying speculative harms.

Rather than the search results or news feeds being determined by the interaction between the algorithm and the user, the results would instead be altered to comply with criteria established by the ACCC. Yet this would substantially undermine the value of these services. The competitive differentiation between, say, Google and Bing lies in their unique, proprietary search algorithms. The ACCC’s intervention would necessarily remove some of this differentiation between online providers, notionally to improve the “quality” of results. But such second-guessing by regulators would quickly undermine the actual quality and utility of these services to users.

A second, more troubling prospect is the threat of censorship that emerges from this kind of regime. Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access. Such regulatory power thus affects not only what users can read, but what media outlets might be able to say in order to successfully offer curated content. This sort of control is deeply problematic since users are no longer merely faced with a potential “filter bubble” based on their own preferences interacting with a single provider, but with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Undoubtedly antitrust and consumer protection laws should be continually reviewed and revised. However, if we wish to uphold the principles upon which the US was founded and continue to protect consumer welfare, the US should avoid following the path Australia proposes to take.

A recent working paper by Hashmat Khan and Matthew Strathearn attempts to empirically link anticompetitive collusion to the boom and bust cycles of the economy.

The level of collusion is higher during a boom relative to a recession as collusion occurs more frequently when demand is increasing (entering into a collusive arrangement is more profitable and deviating from an existing cartel is less profitable). The model predicts that the number of discovered cartels and hence antitrust filings should be procyclical because the level of collusion is procyclical.

The first sentence—a hypothesis that collusion is more likely during a “boom” than in recession—seems reasonable. At the same time, a case can be made that collusion would be more likely during recession. For example, a reduced risk of entry from competitors would reduce the cost of collusion.

The second sentence, however, seems a stretch, mainly because it doesn’t recognize the time delay between the collusive activity, the date the collusion is discovered by authorities, and the date the case is filed.

Perhaps more importantly, it doesn’t acknowledge that many collusive arrangements span months, if not years. That span of time could include times of “boom” and times of recession. Thus, it can be argued that the date of the filing has little (or nothing) to do with the period over which the collusive activity occurred.

I did a very lazy man’s test of my criticisms. I looked at six of the filings cited by Khan and Strathearn for the year 2011, a “boom” year with a high number of horizontal price-fixing cases filed.


My first suspicion was correct. In these six cases, an average of more than three years passed between the date of the last collusive activity and the date the case was filed. Thus, whether the economy is in a boom or a bust when the case is filed provides no useful information regarding the state of the economy when the collusion occurred.

Nevertheless, my lazy man’s small sample test provides some interesting—and I hope useful—information regarding Khan and Strathearn’s conclusions.

  1. From July 2001 through September 2009, 24 of the 99 months were in recession. In other words, during this period, there was a 24 percent chance the economy was in recession in any given month.
  2. Five of the six collusive arrangements began when the economy was in recovery. Only one began during a recession. This may seem to support their conclusion that collusive activity is more likely during a recovery. However, even if the arrangements began randomly, there would be a 55 percent chance that five or more began during a recovery. So, you can’t read too much into the observation that most of the collusive agreements began during a “boom.”
  3. In two of the cases, the collusive activity occurred during a span of time that included no recession. The chance of this happening randomly is less than 1 in 20,000, supporting their conclusion regarding collusive activity and the business cycle.
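The 55 percent figure in point 2 is a simple binomial calculation, treating each of the six start dates as independently falling in a recession month with probability 24/99:

```python
import math

p_recovery = 1 - 24 / 99  # chance a random month in the window is a recovery month
n = 6                     # number of collusive arrangements examined

# P(five or more of the six began during a recovery), assuming start
# dates are scattered randomly across July 2001 - September 2009.
p_five_or_more = sum(
    math.comb(n, k) * p_recovery ** k * (1 - p_recovery) ** (n - k)
    for k in (5, 6)
)
print(f"{p_five_or_more:.0%}")  # about 55%
```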

Khan and Strathearn fall short in linking collusive activity to the business cycle but do a good job of linking antitrust enforcement activities to the business cycle. The information they use from the DOJ website is sufficient to determine when the collusive activity occurred—but it’ll take more vigorous “scrubbing” (their word) of the site to get the relevant data.

The bigger question, however, is the relevance of this research. Naturally, one could argue this line of research indicates that competition authorities should be extra vigilant during a booming economy. Yet, Adam Smith famously noted, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” This suggests that collusive activity—or the temptation to engage in such activity—is always and everywhere present, regardless of the business cycle.

 

A recent NBER working paper by Gutiérrez & Philippon has attracted attention from observers who see oligopoly everywhere and activists who want governments to more actively “manage” competition. The analysis in the paper is fundamentally flawed and should not be relied upon by policymakers, regulators, or anyone else.

As noted in my earlier post, Gutiérrez & Philippon attempt to craft a causal link from differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. Their paper’s abstract leads with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

This post focuses on Gutiérrez & Philippon’s claim that EU markets have lower “excess profits.” This is perhaps the most outrageous claim in the paper. If anyone bothers to read the full paper, they’ll see that the claim that EU firms have lower excess profits is simply not supported by the paper itself. Aside from a passing mention of someone else’s work in a footnote, the only mention of “excess profits” is in the paper’s headline-grabbing abstract.

What’s even more outrageous is that the authors don’t define (or even describe) what they mean by excess profits.

These two factors alone should be enough to toss aside the paper’s assertion about “excess” profits. But, there’s more.

Gutiérrez & Philippon define profit to be gross operating surplus and mixed income (known as “GOPS” in the OECD’s STAN Industrial Analysis dataset). GOPS is not the same thing as gross margin or gross profit as used in business and finance (for example GOPS subtracts wages, but gross margin does not). The EU defines GOPS as (emphasis added):

Operating surplus is the surplus (or deficit) on production activities before account has been taken of the interest, rents or charges paid or received for the use of assets. Mixed income is the remuneration for the work carried out by the owner (or by members of his family) of an unincorporated enterprise. This is referred to as ‘mixed income’ since it cannot be distinguished from the entrepreneurial profit of the owner.

Here’s Figure 1 from Gutiérrez & Philippon plotting GOPS as a share of gross output.


Look at the huge jump in gross operating surplus for U.S. firms!

Now, look at the scale of the y-axis. Not such a big jump after all.

Over 23 years, from 1992 to 2015, the gross operating surplus rate for U.S. firms grew by 2.5 percentage points. In the EU, the rate increased by about one percentage point.

Using the STAN dataset, I plotted the gross operating surplus rate for each EU country (blue dots) and the U.S. (red dots), along with a time trend. Three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a gross operating surplus rate of about 19.5 percent;
  2. There’s a huge variation in the gross operating surplus rate across EU countries; and
  3. Yes, gross operating surplus is trending slightly upward in the U.S. and slightly downward for the EU average, but there doesn’t appear to be a huge difference in the slope of the trendlines. In fact, the slopes of the trendlines are not statistically significantly different from zero, nor from each other.

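The trend test described in the third takeaway can be sketched as follows. The series below is a synthetic stand-in (a flat rate near 19.5 percent plus noise), not the STAN data; the point is only the mechanics of testing whether a fitted slope differs from zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a gross operating surplus rate, 1992-2015:
# flat at ~19.5 percent with year-to-year noise.
years = np.arange(1992, 2016, dtype=float)
rate = 19.5 + rng.normal(0.0, 0.5, size=years.size)

# OLS fit of rate on a centered time trend: rate ~ a + b * (year - mean)
x = years - years.mean()
b = (x * (rate - rate.mean())).sum() / (x ** 2).sum()
resid = rate - (rate.mean() + b * x)

# Standard error of the slope and its t-statistic; with n - 2 = 22
# degrees of freedom, |t| must exceed roughly 2.07 to reject a zero
# slope at the 5 percent level.
n = years.size
s2 = (resid ** 2).sum() / (n - 2)
t = b / (s2 / (x ** 2).sum()) ** 0.5
print(f"slope = {b:+.3f} points per year, t = {t:+.2f}")
```

Run on the actual country series, this is the check behind the statement that neither region's trend slope is statistically distinguishable from zero.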

The use of gross profits raises some serious questions. For example, the Stigler Center’s James Traina finds that, after accounting for selling, general, and administrative expenses (SG&A), mark-ups for publicly traded firms in the U.S. have not meaningfully increased since 1980.

The figure below plots net operating surplus (NOPS equals GOPS minus consumption of fixed capital)—which is not the same thing as net income for a business.

Same three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a net operating surplus rate of a little more than seven percent;
  2. There’s a huge variation in the net operating surplus rate across EU countries; and
  3. The slopes of the trendlines for net operating surplus in the U.S. and EU are not statistically significantly different from zero, nor from each other.


It’s very possible that U.S. firms are achieving higher and growing “excess” profits relative to EU firms. It’s also very possible they’re not. Despite the bold assertions of Gutiérrez & Philippon, the information presented in their paper provides no useful information one way or the other.

 

A recent NBER working paper by Gutiérrez & Philippon attempts to link differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. The paper’s abstract begins with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

The authors are not clear what they mean by “lower”; however, it seems they mean lower today relative to the 1990s.

This blog post focuses on the first claim: “Today, European markets have lower concentration …”

At the risk of being pedantic, Gutiérrez & Philippon’s measures of market concentration for which both U.S. and EU data are reported cover the period from 1999 to 2012. Thus, “the 1990s” refers to 1999, and “today” refers to 2012, or six years ago.

The table below is based on Figure 26 in Gutiérrez & Philippon. In 2012, there appears to be no significant difference in market concentration between the U.S. and the EU, using either the 8-firm concentration ratio or HHI. Based on this information, it cannot be concluded broadly that EU sectors have lower concentration than the U.S.

2012    U.S.          EU
CR8     26% (+5%)     27% (-7%)
HHI     640 (+150)    600 (-190)

Gutiérrez & Philippon focus on the change in market concentration to draw their conclusions. However, the levels of market concentration measures are strikingly low. In all but one of the industries (telecommunications) in Figure 27 of their paper, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent. Similarly, the HHI measures reported in the paper are at levels that most observers would presume to be competitive. In addition, in 7 of the 12 sectors surveyed, the U.S. 8-firm concentration ratio is lower than in the EU.
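For context on those levels, both measures are simple functions of firm market shares. A quick illustration with a hypothetical sector (the shares are invented, chosen to land near the levels the paper reports):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares in percent (10,000 for a pure monopoly)."""
    return sum(s ** 2 for s in shares)

def concentration_ratio(shares, n=8):
    """n-firm concentration ratio: combined share of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

# Hypothetical sector: 20 firms, each with a 5 percent share.
shares = [5.0] * 20
print(hhi(shares))                  # 500.0
print(concentration_ratio(shares))  # 40.0
```

At an HHI of 500 and a CR8 of 40, most observers would call the sector unconcentrated — which is the point about the levels reported in Figure 27.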

The numbers in parentheses in the table above show the change in the measures of concentration since 1999. The changes suggest that U.S. markets have become more concentrated and EU markets have become less concentrated. But how significant are the changes in concentration?

A simple regression of the relationship between CR8 and a time trend finds that in the EU, CR8 has decreased an average of 0.5 percentage point a year, while the U.S. CR8 increased by less than 0.4 percentage point a year from 1999 to 2012. Tucked in an appendix to Gutiérrez & Philippon, Figure 30 shows that CR8 in the U.S. had decreased by about 2.5 percentage points from 2012 to 2014.

A closer examination of Gutiérrez & Philippon’s 8-firm concentration ratio for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in CR8 for the EU is not statistically significantly different from zero.

A regression of the relationship between HHI and a time trend finds that in the EU, HHI has decreased an average of 12.5 points a year, while the U.S. HHI increased by less than 16.4 points a year from 1999 to 2012.

As with CR8, a closer examination of Gutiérrez & Philippon’s HHI for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in HHI for the EU is not statistically significantly different from zero.
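The time-trend regressions described above are straightforward to reproduce. The sketch below fits a linear trend to a hypothetical EU CR8 series; the numbers are illustrative, chosen to mimic the pattern described in the text (a sharp drop through 2002, then roughly flat), not the actual series from Gutiérrez & Philippon.

```python
import numpy as np

# Hypothetical EU CR8 series, 1999-2012 -- illustrative values only,
# not the series reported by Gutierrez & Philippon.
years = np.arange(1999, 2013)
eu_cr8 = np.array([34.0, 32.0, 30.0, 28.0, 27.5, 27.6, 27.4,
                   27.5, 27.3, 27.4, 27.2, 27.3, 27.1, 27.0])

# Linear trend over the full 1999-2012 sample
slope_full, _ = np.polyfit(years, eu_cr8, 1)

# Linear trend over 2002-2012 only, after the early decline
mask = years >= 2002
slope_late, _ = np.polyfit(years[mask], eu_cr8[mask], 1)

print(f"full-sample trend: {slope_full:.2f} points/year")
print(f"post-2002 trend:   {slope_late:.2f} points/year")
```

With data shaped like this, the full-sample slope is clearly negative while the post-2002 slope is close to zero, which is the pattern the text describes: the apparent EU "decline" is driven by the 1999–2002 period.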

Readers should be cautious in relying on Gutiérrez & Philippon’s data to conclude that the U.S. is “drifting” toward greater market concentration while the EU is “drifting” toward lower market concentration. Indeed, the limited data presented in the paper point toward a convergence in market concentration between the two regions.

REGISTER HERE for the much-anticipated 2018 ICLE/Leeds competition law conference, this Friday and Saturday in Washington, DC.

NB: We’ve been approved for 8 credit hours of VA MCLE

The conference agenda is below. We hope to see you there!

ICLE/Leeds 2018 Competition Law Conference: Have We Exceeded the Limits of Antitrust?
Agenda Day 1
Agenda Day 2

The dust has barely settled on the European Commission’s record-breaking €4.3 billion Google Android fine, but already the European Commission is gearing up for its next high-profile case. Last month, Margrethe Vestager dropped a competition bombshell: the European watchdog is looking into the behavior of Amazon. Should the Commission decide to move further with the investigation, Amazon will likely join other US tech firms such as Microsoft, Intel, Qualcomm and, of course, Google, who have all been on the receiving end of European competition enforcement.

The Commission’s move – though informal at this stage – is not surprising. Over the last couple of years, Amazon has become one of the world’s largest and most controversial companies. The animosity against it is exemplified in a paper by Lina Khan, which uses the example of Amazon to highlight the numerous ills that allegedly plague modern antitrust law. The paper is widely regarded as the starting point of the so-called “hipster antitrust” movement.

But is there anything particularly noxious about Amazon’s behavior, or is it just the latest victim of a European crusade against American tech companies?

Where things stand so far

As is often the case in such matters, publicly available information regarding the Commission’s “probe” (the European watchdog is yet to open a formal investigation) is particularly thin. What we know so far comes from a number of declarations made by Margrethe Vestager (here and here) and a leaked questionnaire that was sent to Amazon’s rivals. Going on this limited information, it appears that the Commission is preoccupied with the manner in which Amazon uses the data that it gathers from its online merchants. In Vestager’s own words:

The question here is about the data, because if you as Amazon get the data from the smaller merchants that you host […] do you then also use this data to do your own calculations? What is the new big thing, what is it that people want, what kind of offers do they like to receive, what makes them buy things.

These concerns relate to the fact that Amazon acts as both a retailer in its own right and a platform for other retailers, which allegedly constitutes a “conflict of interest”. As a retailer, Amazon sells a wide range of goods directly to consumers. Meanwhile, its marketplace platform enables third party merchants to offer their goods in exchange for referral fees when items are sold (these fees typically range from 8% to 15%, depending on the type of good). Merchants can either execute these orders themselves or opt for fulfilment by Amazon, in which case it handles storage and shipping. As of 2017, more than 50% of units sold on the Amazon marketplace were fulfilled by third-party sellers, although Amazon derived three times more revenue from its own sales than from those of third parties (note that Amazon Web Services is still by far its largest source of profits).

Mirroring concerns raised by Khan, the Commission worries that Amazon uses the data it gathers from third party retailers on its platform to outcompete them. More specifically, the concern is that Amazon might use this data to identify and enter the most profitable segments of its online platform, excluding other retailers in the process (or deterring them from joining the platform in the first place). Although there is some empirical evidence to support such claims, it is far from clear that this is in any way harmful to competition or consumers. Indeed, the authors of the paper that found evidence in support of the claims note:

Amazon is less likely to enter product spaces that require greater seller efforts to grow, suggesting that complementors’ platform‐specific investments influence platform owners’ entry decisions. While Amazon’s entry discourages affected third‐party sellers from subsequently pursuing growth on the platform, it increases product demand and reduces shipping costs for consumers.

Thou shalt not punish efficient behavior

The question is whether Amazon using data on rivals’ sales to outcompete them should raise competition concerns. After all, this is a standard practice in the brick-and-mortar industry, where most large retailers use house brands to go after successful, high-margin third-party brands. Some, such as Costco, even eliminate some third-party products from their shelves once they have a successful own-brand product. Granted, as Khan observes, Amazon may be doing this more effectively because it has access to vastly superior data. But does that somehow make Amazon’s practice harmful to social welfare? Absent further evidence, I believe not.

The basic problem is the following. Assume that Amazon does indeed have a monopoly in the market for online retail platforms (or, in other words, that the Amazon marketplace is a bottleneck for online retailers). Why would it move into direct retail competition against its third party sellers if it is less efficient than them? Amazon would either have to sell at a loss or hope that consumers saw something in its products that warrants a higher price. A more profitable alternative would be to stay put and increase its fees. It could thereby capture all the profits of its independent retailers. Not that Amazon would necessarily want to do so, as this could potentially deter other retailers from joining its platform. The upshot is that Amazon has little incentive to exclude more efficient retailers.

Astute readers will have observed that this is simply a restatement of the Chicago school’s Single Monopoly Profit theory, which broadly holds that, absent efficiencies, a monopolist in one line of commerce cannot increase its profits by entering the competitive market for a complementary good. Although the theory has drawn some criticism, it remains a crucial starting point with which enforcers must contend before they conclude that a monopolist’s behavior is anticompetitive.

So why does Amazon move into retail segments that are already occupied by its rivals? The most likely explanation is simply that it can source and sell these goods more efficiently than them, and that these efficiencies cannot be achieved through contracts with the said rivals. Once we accept the possibility that Amazon is simply more efficient, the picture changes dramatically. The sooner it overthrows less efficient rivals the better. Doing so creates valuable surplus that can flow to either itself or its consumers. This is true regardless of whether Amazon has a marketplace monopoly or not. Even if it does have a monopoly (which is doubtful given competition from the likes of Zalando, AliExpress, Google Search and eBay), at least some of these efficiencies will likely be passed on to consumers. Such a scenario is also perfectly compatible with increased profits for Amazon. The real test is whether output increases when Amazon enters segments that were previously occupied by rivals.

Of course, the usual critiques voiced against the “Single Monopoly Profit” theory apply here. It is plausible that, by excluding its retail rivals, Amazon is simply seeking to protect its alleged platform monopoly. However, the anecdotal evidence that has been raised thus far does not support this conclusion.

But what about innovation?

Possibly sensing the weakness of the “inefficiency” line of argument against Amazon, critics will likely put forward a second theory of harm. The claim is that by capturing the rents of potentially innovative retailers, Amazon may hamper their incentives to innovate and will therefore harm consumer choice. Margrethe Vestager intimated this much in a Bloomberg interview. Though this framing might seem tempting at first, it falters under close inspection.

The effects of Amazon’s behavior could first be framed in terms of appropriability — that is: the extent to which an innovator captures the social benefits of its innovation. The higher its share of those benefits, the larger its incentives to innovate. By forcing out its retail rivals, it is plausible that Amazon is reducing the returns which they earn on their potential innovations.

Another potential framing is that of holdup theory. Applied to this case, one could argue that rival retailers made sunk investments (potentially innovation-related) to join the Amazon platform, and that Amazon is behaving opportunistically by capturing their surplus. With hindsight, merchants might thus have opted to stay out of the Amazon marketplace.

Unfortunately for Amazon’s critics, there are numerous objections to these two framings. For a start, the business implication of both the appropriability and holdup theories is that firms can and should take sensible steps to protect their investments. The recent empirical paper mentioned above stresses that these actions are critical for the sake of Amazon’s retailers.

Potential solutions abound. Retailers could in principle enter into long-term exclusivity agreements with their suppliers (which would keep Amazon out of the market if there are no alternative suppliers). Alternatively, they could sign non-compete clauses with Amazon, exchange assets, or even outright merge. In fact, there is at least some evidence of this last possibility occurring, as Amazon has acquired some of its online retailers. The fact that some retailers have not opted for these safety measures (or other methods of appropriability) suggests that they either don’t perceive a threat or are unwilling to make the necessary investments. It might also be due to bad business judgement on their part.

Which brings us to the big question. Should competition law step into the breach in those cases where firms have refused to take even basic steps to protect their investments? The answer is probably no.

For a start, condoning this poor judgement encourages firms to rely on competition enforcement rather than private solutions to address appropriability and holdup issues. This is best understood with reference to moral hazard. By insuring firms against the capture of their profits, competition authorities disincentivize all forms of risk-mitigation on the part of those firms. This will ultimately raise enforcement costs (as firms become increasingly reliant on the antitrust system for protection).

It is also informationally much more burdensome, as authorities will systematically have to rule on the appropriate share of profits between parties to a case.

Finally, overprotecting these investments would go against the philosophy of the European Court of Justice’s Huawei ruling.  Albeit in the specific context of injunctions relating to SEPs, the Court conditioned competition liability on firms showing that they have taken a series of reasonable steps to sort out their disputes privately.

Concluding remarks

This is not to say that competition intervention should categorically be proscribed. But rather that the capture of a retailer’s investments by Amazon is an insufficient condition for enforcement actions. Instead, the Commission should question whether Amazon’s actions are truly detrimental to consumer welfare and output. Absent strong evidence that an excluded retailer offered superior products, or that Amazon’s move was merely a strategic play to prevent entry, competition authorities should let the chips fall where they may.

As things stand, there is simply no evidence to indicate that anything out of the ordinary is occurring on the Amazon marketplace. By shining the spotlight on Amazon, the Commission is putting itself under tremendous political pressure to move forward with a formal investigation (all the more so, given the looming European Parliament elections). This is regrettable, as there are surely more pressing matters for the European regulator to deal with. The Commission would thus do well to recall the words of Shakespeare in the Merchant of Venice: “All that glisters is not gold”. Applied in competition circles this translates to “all that is big is not inefficient”.

Last week, the DOJ cleared the merger of CVS Health and Aetna (conditional on Aetna’s divesting its Medicare Part D business), a merger that, as I previously noted at a House Judiciary hearing, “presents a creative effort by two of the most well-informed and successful industry participants to try something new to reform a troubled system.” (My full testimony is available here).

Of course it’s always possible that the experiment will fail — that the merger won’t “revolutioniz[e] the consumer health care experience” in the way that CVS and Aetna are hoping. But it’s a low (antitrust) risk effort to address some of the challenges confronting the healthcare industry — and apparently the DOJ agrees.

I discuss the weakness of the antitrust arguments against the merger at length in my testimony. What I particularly want to draw attention to here is how this merger — like many vertical mergers — represents business model innovation by incumbents.

The CVS/Aetna merger is just one part of a growing private-sector movement in the healthcare industry to adopt new (mostly) vertical arrangements that seek to move beyond some of the structural inefficiencies that have plagued healthcare in the United States since World War II. Indeed, ambitious and interesting as it is, the merger arises amidst a veritable wave of innovative, vertical healthcare mergers and other efforts to integrate the healthcare services supply chain in novel ways.

These sorts of efforts (and the current DOJ’s apparent support for them) should be applauded and encouraged. I need not rehash the economic literature on vertical restraints here (see, e.g., Lafontaine & Slade, etc.). But especially where government interventions have already impaired the efficient workings of a market (as they surely have, in spades, in healthcare), it is important not to compound the error by trying to micromanage private efforts to restructure around those constraints.   

Current trends in private-sector-driven healthcare reform

In the past, the most significant healthcare industry mergers have largely been horizontal (i.e., between two insurance providers, or two hospitals) or “traditional” business model mergers for the industry (i.e., vertical mergers aimed at building out managed care organizations). This pattern suggests a sort of fealty to the status quo, with insurers interested primarily in expanding their insurance business or providers interested in expanding their capacity to provide medical services.

Today’s health industry mergers and ventures seem more frequently to be different in character, and they portend an industry-wide experiment in the provision of vertically integrated healthcare that we should enthusiastically welcome.

Drug pricing and distribution innovations

To begin with, the CVS/Aetna deal, along with the also recently approved Cigna-Express Scripts deal, solidifies the vertical integration of pharmacy benefit managers (PBMs) with insurers.

But a number of other recent arrangements and business models center around relationships among drug manufacturers, pharmacies, and PBMs, and these tend to minimize the role of insurers. While not a “vertical” arrangement, per se, Walmart’s generic drug program, for example, offers $4 prescriptions to customers regardless of insurance (the typical generic drug copay for patients covered by employer-provided health insurance is $11), and Walmart does not seek or receive reimbursement from health plans for these drugs. It’s been offering this program since 2006, but in 2016 it entered into a joint buying arrangement with McKesson, a pharmaceutical wholesaler (itself vertically integrated with Rexall pharmacies), to negotiate lower prices. The idea, presumably, is that Walmart will entice consumers to its stores with the lure of low-priced generic prescriptions in the hope that they will buy other items while they’re there. That prospect presumably makes it worthwhile to route around insurers and PBMs, and their reimbursements.

Meanwhile, both Express Scripts and CVS Health (two of the country’s largest PBMs) have made moves toward direct-to-consumer sales themselves, establishing pricing for a small number of drugs independently of health plans and often in partnership with drug makers directly.   

Also apparently focused on disrupting traditional drug distribution arrangements, Amazon has recently purchased online pharmacy PillPack (out from under Walmart, as it happens), and with it received pharmacy licenses in 49 states. The move introduces a significant new integrated distributor/retailer, and puts competitive pressure on other retailers and distributors and potentially insurers and PBMs, as well.

Whatever its role in driving the CVS/Aetna merger (and I believe it is smaller than many reports like to suggest), Amazon’s moves in this area demonstrate the fluid nature of the market, and the opportunities for a wide range of firms to create efficiencies in the market and to lower prices.

At the same time, the differences between Amazon and CVS/Aetna highlight the scope of product and service differentiation that should contribute to the ongoing competitiveness of these markets following mergers like this one.

While Amazon inarguably excels at logistics and the routinizing of “back office” functions, it seems unlikely for the foreseeable future to be able to offer (or to be interested in offering) a patient interface that can rival the service offerings of a brick-and-mortar CVS pharmacy combined with an outpatient clinic and its staff and bolstered by the capabilities of an insurer like Aetna. To be sure, online sales and fulfillment may put price pressure on important, largely mechanical functions, but, like much technology, it is first and foremost a complement to services offered by humans, rather than a substitute. (In this regard it is worth noting that McKesson has long been offering Amazon-like logistics support for both online and brick-and-mortar pharmacies. “‘To some extent, we were Amazon before it was cool to be Amazon,’ McKesson CEO John Hammergren said” on a recent earnings call).

Treatment innovations

Other efforts focus on integrating insurance and treatment functions or on bringing together other, disparate pieces of the healthcare industry in interesting ways — all seemingly aimed at finding innovative, private solutions to solve some of the costly complexities that plague the healthcare market.

Walmart, for example, announced a deal with Quest Diagnostics last year to experiment with offering diagnostic testing services and potentially other basic healthcare services inside of some Walmart stores. While such an arrangement may simply be a means of making doctor-prescribed diagnostic tests more convenient, it may also suggest an effort to expand the availability of direct-to-consumer (patient-initiated) testing (currently offered by Quest in Missouri and Colorado) in states that allow it. A partnership with Walmart to market and oversee such services has the potential to dramatically expand their use.

Capping off (for now) a buying frenzy in recent years that included the purchase of the PBM CatamaranRx, UnitedHealth is seeking approval from the FTC for the proposed merger of its Optum unit with the DaVita Medical Group — a move that would significantly expand UnitedHealth’s ability to offer medical services (including urgent care, outpatient surgeries, and health clinic services), give it a significant group of doctors’ clinics throughout the U.S., and turn UnitedHealth into the largest employer of doctors in the country. But of course this isn’t a traditional managed care merger — it represents a significant bet on the decentralized, ambulatory care model that has been slowly replacing significant parts of the traditional, hospital-centric care model for some time now.

And, perhaps most interestingly, some recent moves are bringing together drug manufacturers and diagnostic and care providers in innovative ways. Swiss pharmaceutical company, Roche, announced recently that “it would buy the rest of U.S. cancer data company Flatiron Health for $1.9 billion to speed development of cancer medicines and support its efforts to price them based on how well they work.” Not only is the deal intended to improve Roche’s drug development process by integrating patient data, it is also aimed at accommodating efforts to shift the pricing of drugs, like the pricing of medical services generally, toward an outcome-based model.

Similarly interesting, and in a related vein, early this year a group of hospital systems including Intermountain Health, Ascension, and Trinity Health announced plans to begin manufacturing generic prescription drugs. This development further reflects the perceived benefits of vertical integration in healthcare markets, and the move toward creative solutions to the unique complexity of coordinating the many interrelated layers of healthcare provision. In this case,

[t]he nascent venture proposes a private solution to ensure contestability in the generic drug market and consequently overcome the failures of contracting [in the supply and distribution of generics]…. The nascent venture, however it solves these challenges and resolves other choices, will have important implications for the prices and availability of generic drugs in the US.

More enforcement decisions like CVS/Aetna and Bayer/Monsanto; fewer like AT&T/Time Warner

In the face of all this disruption, it’s difficult to credit anticompetitive fears like those expressed by the AMA in opposing the CVS-Aetna merger and a recent CEA report on pharmaceutical pricing, both of which are premised on the assumption that drug distribution is unavoidably dominated by a few PBMs in a well-defined, highly concentrated market. Creative arrangements like the CVS-Aetna merger and the initiatives described above (among a host of others) indicate an ease of entry, the fluidity of traditional markets, and a degree of business model innovation that suggest a great deal more competitiveness than static PBM market numbers would suggest.

This kind of incumbent innovation through vertical restructuring is an increasingly important theme in antitrust, and efforts to tar such transactions with purported evidence of static market dominance are simply misguided.

While the current DOJ’s misguided (and, remarkably, continuing) attempt to stop the AT&T/Time Warner merger is an aberrant step in the wrong direction, the leadership at the Antitrust Division generally seems to get it. Indeed, in spite of strident calls for stepped-up enforcement in the always-controversial ag-biotech industry, the DOJ recently approved three vertical ag-biotech mergers in fairly rapid succession.

As I noted in a discussion of those ag-biotech mergers, but equally applicable here, regulatory humility should continue to carry the day when it comes to structural innovation by incumbent firms:

But it is also important to remember that innovation comes from within incumbent firms, as well, and, often, that the overall level of innovation in an industry may be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.