
Today, Reuters reports that Germany-based ThyssenKrupp has received bids from three groups for a majority stake in the firm’s elevator business. Finland’s Kone teamed with private equity firm CVC to bid on the business. Private equity firms Blackstone and Carlyle joined with the Canada Pension Plan Investment Board to submit a bid. A third bid came from Advent, Cinven, and the Abu Dhabi Investment Authority.

Also today — in anticipation of the long-rumored and much-discussed sale of ThyssenKrupp’s elevator business — the International Center for Law & Economics released The Antitrust Risks of Four To Three Mergers: Heightened Scrutiny of a Potential ThyssenKrupp/Kone Merger, by Eric Fruits and Geoffrey A. Manne. This study examines the heightened scrutiny of four to three mergers by competition authorities in the current regulatory environment, using a potential ThyssenKrupp/Kone merger as a case study. 

In recent years, regulators have become more aggressive in merger enforcement in response to populist criticisms that lax merger enforcement has led to the rise of anticompetitive “big business.” In this environment, it is easy to imagine regulators intensely scrutinizing and challenging or conditioning nearly any merger that substantially increases concentration. 

This potential deal provides an opportunity to highlight the likely challenges, complexity, and cost that regulatory scrutiny of such mergers actually entails — and it is likely to be a far cry from the lax review and permissive decisionmaking of antitrust critics’ imagining.

In the case of a potential ThyssenKrupp/Kone merger, the combined entity would face lengthy, costly, and duplicative review in multiple jurisdictions, any one of which could effectively block the merger or impose onerous conditions. It would face the automatic assumption of excessive concentration in several of these, including the US, EU, and Canada. In the US, the deal would also face heightened scrutiny based on political considerations, including the perception that the deal would strengthen a foreign firm at the expense of a domestic supplier. It would also face the risk of politicized litigation from state attorneys general, and potentially the threat of extractive litigation by competitors and customers.

Whether the merger would actually entail anticompetitive risk may, unfortunately, be of only secondary importance in determining the likelihood and extent of a merger challenge or the imposition of onerous conditions.

A “highly concentrated” market

In many jurisdictions, the four to three merger would likely trigger a “highly concentrated” market designation. With the merging firms having a dominant share of the market for elevators, the deal would be viewed as problematic in several areas:

  • The US (share > 35%, HHI > 3,000, HHI increase > 700), 
  • Canada (share of approximately 50%, HHI > 2,900, HHI increase of 1,000), 
  • Australia (share > 40%, HHI > 3,100, HHI increase > 500), 
  • Europe (shares of 33–65%, HHIs in excess of 2,700, and HHI increases of 270 or higher in Sweden, Finland, Netherlands, Austria, France, and Luxembourg).
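For reference, the screening figures above follow standard concentration arithmetic: the HHI is the sum of squared market shares (in percentage points), and a merger raises the HHI by twice the product of the merging firms’ shares. A minimal sketch using hypothetical shares (not the actual elevator-market figures):

```python
# HHI screening arithmetic with hypothetical (illustrative) market shares.
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared shares, in percentage points."""
    return sum(s ** 2 for s in shares)

pre_merger = [35, 30, 20, 15]   # hypothetical four-firm market
post_merger = [65, 20, 15]      # the two largest firms combine

delta = hhi(post_merger) - hhi(pre_merger)
print(hhi(pre_merger), hhi(post_merger), delta)  # 2750 4850 2100
# delta equals 2 * 35 * 30 = 2100, far above the ~200-point increase that
# US guidelines treat as presumptively problematic in a highly
# concentrated (HHI > 2,500) market.
```

The same arithmetic explains why a four to three combination of two large players almost mechanically trips the thresholds listed above.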

As with most mergers, a potential ThyssenKrupp/Kone merger would likely generate “hot docs” that would be used to support the assumption of anticompetitive harm from the increase in concentration, especially in light of past allegations of price fixing in the industry and a decision by the European Commission in 2007 to fine certain companies in the industry for alleged anticompetitive conduct.

Political risks

The merger would also surely face substantial political risks in the US and elsewhere from the perception the deal would strengthen a foreign firm at the expense of a domestic supplier. President Trump’s administration has demonstrated a keen interest in protecting what it sees as US interests vis-à-vis foreign competition. As a high-rise and hotel developer who has shown a willingness to intervene in antitrust enforcement to protect his interests, President Trump may have a heightened personal interest in a ThyssenKrupp/Kone merger. 

To the extent that US federal, state, and local governments purchase products from the merging parties, the deal would likely be subjected to increased attention from federal antitrust regulators as well as states’ attorneys general. Indeed, the US Department of Justice (DOJ) has created a “Procurement Collusion Strike Force” focused on “deterring, detecting, investigating and prosecuting antitrust crimes . . . which undermine competition in government procurement. . . .”

The deal may also face scrutiny from EC, UK, Canadian, and Australian competition authorities, each of which has exhibited increased willingness to thwart such mergers. For example, the EU recently blocked a proposed merger between the rail transport businesses of EU firms Siemens and Alstom. The UK recently blocked a series of major deals that had only limited competitive effects in the UK. In one of these, Thermo Fisher Scientific’s proposed acquisition of Roper Technologies’ Gatan subsidiary was not challenged in the US, but the deal was abandoned after the UK CMA decided to block it despite its limited connections to the UK.

Economic risks

In addition to the structural and political factors that may lead to blocking a four to three merger, several economic factors may further exacerbate the problem. While these, too, may be wrongly deemed problematic in particular cases by reviewing authorities, they are — relatively at least — better-supported by economic theory in the abstract. Moreover, even where wrongly applied, they are often impossible to refute successfully given the relevant standards. And such alleged economic concerns can act as an effective smokescreen for blocking a merger based on the sorts of political and structural considerations discussed above. Some of these economic factors include:

  • Barriers to entry. IBISWorld identifies barriers to entry to include economies of scale, long-standing relationships with existing buyers, as well as long records of safety and reliability. Strictly speaking, these are not costs borne only by a new entrant, and thus should not be deemed competitively relevant entry barriers. Yet merger review authorities the world over fail to recognize this distinction, and routinely scuttle mergers based simply on the costs faced by additional competitors entering the market.
  • Potential unilateral effects. The extent of direct competition between the products and services sold by the merging parties is a key part of the evaluation of unilateral price effects. Competition authorities would likely consider a significant range of information to evaluate the extent of direct competition between the products and services sold by ThyssenKrupp and its merger partner. In addition to “hot docs,” this information could include won/lost bid reports as well as evidence from discount approval processes and customer switching patterns. Because the purchase of elevator and escalator products and services involves negotiation by sophisticated and experienced buyers, it is likely that this type of bid information would be readily available for review.
  • A history of coordinated conduct involving ThyssenKrupp and Kone. Competition authorities will also consider the risk that a four to three merger will increase the ability and likelihood for the remaining, smaller number of firms to collude. In 2007 the European Commission imposed a €992 million cartel fine on five elevator firms: ThyssenKrupp, Kone, Schindler, United Technologies, and Mitsubishi. At the time, it was the largest-ever cartel fine. Several companies, including Kone and UTC, admitted wrongdoing.

Conclusion

As “populist” antitrust gains more traction among enforcers aiming to stave off criticisms of lax enforcement, superficial and non-economic concerns have increased salience. The simple benefit of a resounding headline — “The US DOJ challenges increased concentration that would stifle the global construction boom” — signaling enforcers’ efforts to thwart further increases in concentration and save blue collar jobs is likely to be viewed by regulators as substantial. 

Coupled with the arguably more robust, potential economic arguments involving unilateral and coordinated effects arising from such a merger, a four to three merger like a potential ThyssenKrupp/Kone transaction would be sure to attract significant scrutiny and delay. Any arguments that such a deal might actually decrease prices and increase efficiency are — even if valid — less likely to gain as much traction in today’s regulatory environment.

Source: New York Magazine

When she rolled out her plan to break up Big Tech, Elizabeth Warren paid for ads (like the one shown above) claiming that “Facebook and Google account for 70% of all internet traffic.” This statistic has since been repeated in various forms by Rolling Stone, Vox, National Review, and Washingtonian. In my last post, I fact checked this claim and found it wanting.

Warren’s data

As supporting evidence, Warren cited a Newsweek article from 2017, which in turn cited a blog post from an open-source freelancer, who was aggregating data from a 2015 blog post published by Parse.ly, a web analytics company, which said: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.” At the time, Parse.ly had “around 400 publisher domains” in its network. To put it lightly, this is not what it means to “account for” or “control” or “directly influence” 70 percent of all internet traffic, as Warren and others have claimed.

Internet traffic measured in bytes

In an effort to contextualize how extreme Warren’s claim was, in my last post I used a common measure of internet traffic — total volume in bytes — to show that Google and Facebook account for less than 20 percent of global internet traffic. Some Warren defenders have correctly pointed out that measuring internet traffic in bytes will weight the results toward data-heavy services, such as video streaming. It’s not obvious a priori, however, whether this would bias the results in favor of Facebook and Google or against them, given that users stream lots of video using those companies’ sites and apps (hello, YouTube).

Internet traffic measured by time spent by users

As I said in my post, there are multiple ways to measure total internet traffic, and no one of them is likely to offer a perfect measure. So, to get a fuller picture, we could also look at how users are spending their time on the internet. While there is no single source for global internet time use statistics, we can combine a few to reach an estimate (NB: this analysis includes time spent in apps as well as on the web). 

According to the Global Digital report by Hootsuite and We Are Social, in 2018 there were 4.021 billion active internet users, and the worldwide average for time spent using the internet was 6 hours and 42 minutes per day. That means there were 1,616 billion internet user-minutes per day.

Data from Apptopia shows that, in the three months from May through July 2018, users spent 300 billion hours in Facebook-owned apps and 118 billion hours in Google-owned apps. In other words, all Facebook-owned apps consume, on average, 197 billion user-minutes per day and all Google-owned apps consume, on average, 78 billion user-minutes per day. And according to SimilarWeb data for the three months from June to August 2019, web users spent 11 billion user-minutes per day visiting Facebook domains (facebook.com, whatsapp.com, instagram.com, messenger.com) and 52 billion user-minutes per day visiting Google domains, including google.com (and all subdomains) and youtube.com.

If you add up all app and web user-minutes for Google and Facebook, the total is 338 billion user-minutes per day. A staggering number. But as a share of all internet traffic (in this case measured in terms of time spent)? Google- and Facebook-owned sites and apps account for about 21 percent of user-minutes.
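The arithmetic behind that 21 percent figure is straightforward to reproduce. A quick sketch using the figures quoted above (billions of user-minutes per day):

```python
# Back-of-the-envelope share of daily internet time, using figures from the post.
users_billion = 4.021            # active internet users, 2018 (billions)
minutes_per_user = 6 * 60 + 42   # average daily time online: 6h42m = 402 minutes

total_user_minutes = users_billion * minutes_per_user  # ~1,616 billion per day

# Daily user-minutes (billions): apps per Apptopia, web per SimilarWeb.
facebook = 197 + 11   # Facebook-owned apps + Facebook web domains
google = 78 + 52      # Google-owned apps + Google web domains

share = (facebook + google) / total_user_minutes
print(facebook + google, round(total_user_minutes), round(share * 100))  # 338 1616 21
```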

Internet traffic measured by “connections”

In my last post, I cited a Sandvine study that measured total internet traffic by volume of upstream and downstream bytes. The same report also includes numbers for what Sandvine calls “connections,” which is defined as “the number of conversations occurring for an application.” Sandvine notes that while “some applications use a single connection for all traffic, others use many connections to transfer data or video to the end user.” For example, a video stream on Netflix uses a single connection, while every item on a webpage, such as loading images, may require a distinct connection.

Cam Cullen, Sandvine’s VP of marketing, also implored readers to “never forget Google connections include YouTube, Search, and DoubleClick — all of which are very noisy applications and universally consumed,” which would bias this statistic toward inflating Google’s share. With these caveats in mind, Sandvine’s data shows that Google is responsible for 30 percent of these connections, while Facebook is responsible for under 8 percent of connections. Note that Netflix’s share is less than 1 percent, which implies this statistic is not biased toward data-heavy services. Again, the numbers for Google and Facebook are a far cry from what Warren and others are claiming.

Source: Sandvine

Internet traffic measured by sources

I’m not sure whether either of these measures is preferable to what I offered in my original post, but each is at least a plausible measure of internet traffic — and all of them fall well short of Warren’s claimed 70 percent. What I do know is that the preferred metric offered by the people most critical of my post — external referrals to online publishers (content sites) — is decidedly not a plausible measure of internet traffic.

In defense of Warren, Jason Kint, the CEO of a trade association for digital content publishers, wrote, “I just checked actual benchmark data across our members (most publishers) and 67% of their external traffic comes through Google or Facebook.” Rand Fishkin cites his own analysis of data from Jumpshot showing that 66.0 percent of external referral visits were sent by Google and 5.1 percent were sent by Facebook.

In another response to my piece, former digital advertising executive Dina Srinivasan said, “[Percentage] of referrals is relevant because it is pointing out that two companies control a large [percentage] of business that comes through their door.”

In my opinion, equating “external referrals to publishers” with “internet traffic” is unacceptable for at least two reasons.

First, the internet is much broader than traditional content publishers — it encompasses everything from email and Yelp to TikTok, Amazon, and Netflix. The relevant market is consumer attention and, in that sense, every internet supplier is bidding for scarce time. In a recent investor letter, Netflix said, “We compete with (and lose to) ‘Fortnite’ more than HBO,” adding: “There are thousands of competitors in this highly fragmented market vying to entertain consumers and low barriers to entry for those great experiences.” Previously, CEO Reed Hastings had only half-jokingly said, “We’re competing with sleep on the margin.” In this debate over internet traffic, the opposing side fails to grasp the scope of the internet market. It is unsurprising, then, that the metric best suited to capturing attention — time spent — yields roughly the same share for Google and Facebook as the measurement in bytes.

Second, and perhaps more important, even if we limit our analysis to publisher traffic, the external referral statistic these critics cite completely (and conveniently?) omits direct and internal traffic — traffic that represents the majority of publisher traffic. In fact, according to Parse.ly’s most recent data, which now includes more than 3,000 “high-traffic sites,” only 35 percent of total traffic comes from search and social referrers (as the graph below shows). Of course, Google and Facebook drive the majority of search and social referrals. But given that most users visit webpages without being referred at all, Google and Facebook are responsible for less than a third of total traffic.
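To put numbers on that “less than a third”: even if Google and Facebook drove every search and social referral in Parse.ly’s data, their ceiling would be 35 percent of publisher traffic. Applying Kint’s 67 percent figure to those referrals (an assumption for illustration only, since his figure covers his members’ external traffic generally) yields an estimate well below that ceiling. A rough sketch:

```python
# Rough bounds on the Google/Facebook share of publisher traffic,
# using figures quoted in the post.
search_social_share = 0.35     # Parse.ly: share of publisher traffic from search + social
kint_gf_referral_share = 0.67  # Kint: share of external traffic via Google/Facebook

upper_bound = search_social_share * 1.0                  # grant them ALL such referrals
estimate = search_social_share * kint_gf_referral_share  # illustrative assumption

print(upper_bound, round(estimate, 2))  # 0.35 0.23
```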

Source: Parse.ly

It is simply incorrect to say, as Srinivasan does, that the external referral statistic offers a useful measurement of internet traffic because it captures a “large [percentage] of business that comes through [publishers’] door.” Well, “large” is relative, but the implication that these external referrals from Facebook and Google explain Warren’s 70%-of-internet-traffic claim is both factually incorrect and horribly misleading — especially in an antitrust context.

It is factually incorrect because, at most, Google and Facebook are responsible for a third of the traffic on these sites; it is misleading because if our concern is ensuring that users can reach content sites without passing through Google or Facebook, the evidence is clear that they can and do — at least twice as often as they follow links from Google or Facebook to do so.

Conclusion

As my colleague Gus Hurwitz said, Warren is making a very specific and very alarming claim: 

There may be ‘softer’ versions of [Warren’s claim] that are reasonably correct (e.g., digital ad revenue, visibility into traffic). But for 99% of people hearing (and reporting on) these claims, they hear the hard version of the claim: Google and Facebook control 70% of what you do online. That claim is wrong, alarmist, misinformation, intended to foment fear, uncertainty, and doubt — to bootstrap the argument that ‘everything is terrible, worse, really!, and I’m here to save you.’ This is classic propaganda.

Google and Facebook do account for a 59 percent (and declining) share of US digital advertising. But that’s not what Warren said (nor would anyone try to claim with a straight face that “volume of advertising” was the same thing as “internet traffic”). And if our concern is with competition, it’s hard to look at the advertising market and conclude that it’s got a competition problem. Prices are falling like crazy (down 42 percent in the last decade), and volume is only increasing. If you add in offline advertising (which, whatever you think about market definition here, certainly competes with online advertising at the very least on some dimensions) Google and Facebook are responsible for only about 32 percent.
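As a consistency check on those two figures: the 59 percent digital-advertising share and the roughly 32 percent all-advertising share together imply that digital makes up a bit over half of total US ad spending (a derived figure, not one stated above):

```python
# Implied digital share of total US ad spending, derived from figures in the post.
gf_share_of_digital = 0.59  # Google + Facebook share of US digital ad revenue
gf_share_of_all = 0.32      # their share of all (online + offline) advertising

implied_digital_share = gf_share_of_all / gf_share_of_digital
print(round(implied_digital_share, 2))  # 0.54
```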

In her comments criticizing my article, Dina Srinivasan mentioned another of these “softer” versions:

Also, each time a publisher page loads, what [percentage] then queries Google or Facebook servers during the page loads? About 98+% of every page load. That stat is not even in Warren or your analysis. That is 1000% relevant.

It’s true that Google and Facebook have visibility into a great deal of all internet traffic (beyond their own) through a variety of products and services: browsers, content delivery networks (CDNs), web beacons, cloud computing, VPNs, data brokers, single sign-on (SSO), and web analytics services. But seeing internet traffic is not the same thing as “account[ing] for” — or controlling or even directly influencing — internet traffic. The first is a very different claim than the latter, and one with considerably more attenuated competitive relevance (if any). It certainly wouldn’t be a sufficient basis for advocating that Google and Facebook be broken up — which is probably why, although arguably accurate, it’s not the statistic upon which Warren based her proposal to do so.

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.

Further complicating matters, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the underlying network effects story retains much of its influence. And in that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they make normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any notice to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information, etc. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, workplace messaging platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when these incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin, it assumes that regulators are able to reliably navigate markets toward these optimal outcomes — which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if it were possible) may well outweigh the benefits that this achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/Whatsapp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor Whatsapp controlled any essential infrastructure, prevented network effects from acting as a barrier to entry. Regardless of its ultimate position, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted a similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book and avoid blackboard economics in favor of a more granular approach.

A recent NBER working paper by Gutiérrez & Philippon has attracted attention from observers who see oligopoly everywhere and activists who want governments to more actively “manage” competition. The analysis in the paper is fundamentally flawed and should not be relied upon by policymakers, regulators, or anyone else.

As noted in my earlier post, Gutiérrez & Philippon attempt to craft a causal linkage between differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. Their paper’s abstract leads with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

This post focuses on Gutiérrez & Philippon’s claim that EU markets have lower “excess profits.” This is perhaps the most outrageous claim in the paper. Anyone who bothers to read the full paper will see that the claim that EU firms have lower excess profits is simply not supported by the paper itself. Aside from a passing mention of someone else’s work in a footnote, the only mention of “excess profits” is in the paper’s headline-grabbing abstract.

What’s even more outrageous is the authors don’t define (or even describe) what they mean by excess profits.

These two factors alone should be enough to toss aside the paper’s assertion about “excess” profits. But, there’s more.

Gutiérrez & Philippon define profit to be gross operating surplus and mixed income (known as “GOPS” in the OECD’s STAN Industrial Analysis dataset). GOPS is not the same thing as gross margin or gross profit as used in business and finance (for example, GOPS subtracts wages, but gross margin does not). The EU defines GOPS as (emphasis added):

Operating surplus is the surplus (or deficit) on production activities before account has been taken of the interest, rents or charges paid or received for the use of assets. Mixed income is the remuneration for the work carried out by the owner (or by members of his family) of an unincorporated enterprise. This is referred to as ‘mixed income’ since it cannot be distinguished from the entrepreneurial profit of the owner.

Here’s Figure 1 from Gutiérrez & Philippon plotting GOPS as a share of gross output.

[Figure 1 from Gutiérrez & Philippon: gross operating surplus as a share of gross output]

Look at the huge jump in gross operating surplus for U.S. firms!

Now, look at the scale of the y-axis. Not such a big jump after all.

Over 23 years, from 1992 to 2015, the gross operating surplus rate for U.S. firms grew by 2.5 percentage points. In the EU, the rate increased by about one percentage point.

Using the STAN dataset, I plotted the gross operating surplus rate for each EU country (blue dots) and the U.S. (red dots), along with a time trend. Three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a gross operating surplus rate of about 19.5 percent;
  2. There’s a huge variation in the gross operating surplus rate across EU countries; and
  3. Yes, gross operating surplus is trending slightly upward in the U.S. and slightly downward for the EU average, but there doesn’t appear to be a huge difference in the slope of the trendlines. In fact, the slopes of the trendlines are not statistically significantly different from zero and are not statistically significantly different from each other.

[Figure: gross operating surplus rate, EU countries and U.S., with time trends]
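The trendline test described in the third takeaway can be sketched with a simple OLS slope and its t-statistic. This is my own illustration with made-up data hovering around 19.5 percent — not the actual STAN figures:

```python
import math

def trend_slope(x, y):
    """OLS slope of y on x, plus the t-statistic for H0: slope = 0."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    sse = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    if sse == 0:
        return slope, float("inf")  # perfect fit
    se = math.sqrt(sse / (n - 2) / sxx)  # standard error of the slope
    return slope, slope / se

# Hypothetical surplus rates, 1992-2015, with a faint upward drift and noise
years = list(range(1992, 2016))
rates = [19.5 + 0.02 * (y - 1992) + (0.3 if y % 3 == 0 else -0.15) for y in years]
slope, t = trend_slope(years, rates)
# A |t| below roughly 2 means the slope is not statistically
# significantly different from zero at conventional levels.
```

With a slope this small relative to its noise, the trend is economically and statistically negligible — which is the point of the takeaways above.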

The use of gross profits raises some serious questions. For example, the Stigler Center’s James Traina finds that, after accounting for selling, general, and administrative expenses (SG&A), mark-ups for publicly traded firms in the U.S. have not meaningfully increased since 1980.

The figure below plots net operating surplus (NOPS equals GOPS minus consumption of fixed capital)—which is not the same thing as net income for a business.

Same three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a net operating surplus rate of a little more than seven percent;
  2. There’s a huge variation in the net operating surplus rate across EU countries; and
  3. The slopes of the trendlines for net operating surplus in the U.S. and EU are not statistically significantly different from zero and are not statistically significantly different from each other.

[Figure: net operating surplus rate, EU countries and U.S., with time trends]

It’s very possible that U.S. firms are achieving higher and growing “excess” profits relative to EU firms. It’s also very possible they’re not. Despite the bold assertions of Gutiérrez & Philippon, the information presented in their paper provides no useful information one way or the other.


A recent NBER working paper by Gutiérrez & Philippon attempts to link differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. The paper’s abstract begins with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

The authors are not clear about what they mean by “lower”; however, it seems they mean lower today relative to the 1990s.

This blog post focuses on the first claim: “Today, European markets have lower concentration …”

At the risk of being pedantic, Gutiérrez & Philippon’s measures of market concentration for which both U.S. and EU data are reported cover the period from 1999 to 2012. Thus, “the 1990s” refers to 1999, and “today” refers to 2012, or six years ago.

The table below is based on Figure 26 in Gutiérrez & Philippon. In 2012, there appears to be no significant difference in market concentration between the U.S. and the EU, using either the 8-firm concentration ratio or HHI. Based on this information, it cannot be concluded broadly that EU sectors have lower concentration than the U.S.

2012     U.S.           EU
CR8      26% (+5%)      27% (-7%)
HHI      640 (+150)     600 (-190)

Gutiérrez & Philippon focus on the change in market concentration to draw their conclusions. However, the levels of market concentration measures are strikingly low. In all but one of the industries (telecommunications) in Figure 27 of their paper, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent. Similarly, the HHI measures reported in the paper are at levels that most observers would presume to be competitive. In addition, in 7 of the 12 sectors surveyed, the U.S. 8-firm concentration ratio is lower than in the EU.
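For reference, an 8-firm concentration ratio is simply the combined market share of the eight largest firms in a sector. A minimal sketch (the share figures below are hypothetical, for illustration only):

```python
def cr8(shares_pct):
    """8-firm concentration ratio: combined share of the eight largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:8])

# Hypothetical sector with a long tail of small firms
shares = [10, 5, 4, 3, 3, 2, 2, 1, 1, 1, 0.5, 0.5]
print(cr8(shares))  # -> 30
```

A CR8 of 30 percent — like most of the sector-level figures in the paper — is well below the levels usually associated with market power.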

The numbers in parentheses in the table above show the change in the measures of concentration since 1999. The changes suggest that U.S. markets have become more concentrated and EU markets have become less concentrated. But how significant are the changes in concentration?

A simple regression of the relationship between CR8 and a time trend finds that in the EU, CR8 decreased an average of 0.5 percentage point a year, while the U.S. CR8 increased by less than 0.4 percentage point a year from 1999 to 2012. Tucked into an appendix to Gutiérrez & Philippon’s paper, Figure 30 shows that CR8 in the U.S. decreased by about 2.5 percentage points from 2012 to 2014.

A closer examination of Gutiérrez & Philippon’s 8-firm concentration ratio for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in CR8 for the EU is not statistically significantly different from zero.

A regression of the relationship between HHI and a time trend finds that in the EU, HHI has decreased an average of 12.5 points a year, while the U.S. HHI increased by less than 16.4 points a year from 1999 to 2012.

As with CR8, a closer examination of Gutiérrez & Philippon’s HHI for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in HHI for the EU is not statistically significantly different from zero.

Readers should be cautious in relying on Gutiérrez & Philippon’s data to conclude that the U.S. is “drifting” toward greater market concentration while the EU is “drifting” toward lower market concentration. Indeed, the limited data presented in the paper point toward a convergence in market concentration between the two regions.


“Calm Down about Common Ownership” is the title of a piece Thom Lambert and I published in the Fall 2018 issue of Regulation, which just hit online. The article is a condensed version of our recent paper, “The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.” In short, we argue that concern about common ownership lacks a theoretically sound foundation and is built upon faulty empirical support. We also explain why proposed “fixes” would do more harm than good.

Over the past several weeks we wrote a series of blog posts here that summarize or expand upon different parts of our argument. To pull them all into one place:

Carl Shapiro, the government’s economics expert opposing the AT&T-Time Warner merger, seems skeptical of much of the antitrust populists’ Amazon rhetoric: “Simply saying that Amazon has grown like a weed, charges very low prices, and has driven many smaller retailers out of business is not sufficient. Where is the consumer harm?”

On its face, there was nothing about the Amazon/Whole Foods merger that should have raised any antitrust concerns. And while one year is too soon to fully judge the competitive impacts of the merger, it appears that much of the populist antitrust movement’s speculation that the deal would destroy competition and competitors and impoverish workers has failed to materialize.


One of the hottest topics in antitrust these days is institutional investors’ common ownership of the stock of competing firms. Large investment companies like BlackRock, Vanguard, State Street, and Fidelity offer index and actively managed mutual funds that are invested in thousands of companies. In many concentrated industries, these institutional investors are “intra-industry diversified,” meaning that they hold stakes in all the significant competitors within the industry.

Recent empirical studies (e.g., here and here) purport to show that this intra-industry diversification has led to a softening of competition in concentrated markets. The theory is that firm managers seek to maximize the profits of their largest and most powerful shareholders, all of which hold stakes in all the major firms in the market and therefore prefer maximization of industry, not firm-specific, profits. (For example, an investor that owns stock in all the airlines servicing a route would not want those airlines to engage in aggressive price competition to win business from each other. Additional sales to one airline would come at the expense of another, and prices—and thus profit margins—would be lower.)

The empirical studies on common ownership, which have received a great deal of attention in the academic literature and popular press and have inspired antitrust scholars to propose a number of policy solutions, have employed a complicated measurement known as “MHHI delta” (MHHI∆). MHHI∆ is a component of the “modified Herfindahl–Hirschman Index” (MHHI), which, as the name suggests, is an adaptation of the Herfindahl–Hirschman Index (HHI).

HHI, which ranges from near zero to 10,000 and is calculated by summing the squares of the market shares of the firms competing in a market, assesses the degree to which a market is concentrated and thus susceptible to collusion or oligopolistic coordination. MHHI endeavors to account for both market concentration (HHI) and the reduced competition incentives occasioned by common ownership of the firms within a market. MHHI∆ is the part of MHHI that accounts for common ownership incentives, so MHHI = HHI + MHHI∆.  (Notably, neither MHHI nor MHHI∆ is bounded by the 10,000 upper limit applicable to HHI.  At the end of this post, I offer an example of a market in which MHHI and MHHI∆ both exceed 10,000.)
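As a concrete check on that definition, HHI is just the sum of squared percentage market shares — a one-liner (my own sketch, not from the studies discussed here):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

# Two 30% firms and two 20% firms
print(hhi([30, 30, 20, 20]))  # -> 2600

# A pure monopoly hits the 10,000 ceiling
print(hhi([100]))  # -> 10000
```

MHHI∆ is then whatever common-ownership increment gets added on top of this baseline.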

In the leading common ownership study, which looked at the airline industry, the authors first calculated the MHHI∆ on each domestic airline route from 2001 to 2014. They then examined, for each route, how changes in the MHHI∆ over time correlated with changes in airfares on that route. To control for route-specific factors that might influence both fares and the MHHI∆, the authors ran a number of regressions. They concluded that common ownership of air carriers resulted in a 3%–7% increase in fares.

As should be apparent, it is difficult to understand the common ownership issue—the extent to which there is a competitive problem and the propriety of proposed solutions—without understanding MHHI∆. Unfortunately, the formula for the metric is extraordinarily complex. Posner et al. express it as follows:

MHHI∆ = Σj Σk≠j sj sk (Σi βij βik) / (Σi βij²)

Where:

  • βij is the fraction of shares in firm j controlled by investor i,
  • the shares are both cash flow and control shares (so control rights are assumed to be proportionate to the investor’s share of firm profits), and
  • sj is the market share of firm j.

The complexity of this formula is, for less technically oriented antitrusters (like me!), a barrier to entry into the common ownership debate.  In the paragraphs that follow, I attempt to lower that entry barrier by describing the overall process for determining MHHI∆, cataloguing the specific steps required to calculate the measure, and offering a concrete example.

Overview of the Process for Calculating MHHI∆

Determining the MHHI∆ for a market involves three primary tasks. The first is to assess, for each coupling of competing firms in the market (e.g., Southwest Airlines and United Airlines), the degree to which the investors in one of the competitors would prefer that it not attempt to win business from the other by lowering prices, etc. This assessment must be completed twice for each coupling. With the Southwest and United coupling, for example, one must assess both the degree to which United’s investors would prefer that the company not win business from Southwest and the degree to which Southwest’s investors would prefer that the company not win business from United. There will be different answers to those two questions if, for example, United has a significant shareholder who owns no Southwest stock (and thus wants United to win business from Southwest), but Southwest does not have a correspondingly significant shareholder who owns no United stock (and would thus want Southwest to win business from United).

Assessing the incentive of one firm, Firm J (to correspond to the formula above), to pull its competitive punches against another, Firm K, requires calculating a fraction that compares the interest of the first firm’s owners in “coupling” profits (the combined profits of J and K) to their interest in “own-firm” profits (J profits only). The numerator of that fraction is based on data from the coupling—i.e., the firm whose incentive to soften competition one is assessing (J) and the firm with which it is competing (K). The fraction’s denominator is based on data for the single firm whose competition-reduction incentive one is assessing (J). Specifically:

  • The numerator assesses the degree to which the firms in the coupling are commonly owned, such that their owners would not benefit from price-reducing, head-to-head competition and would instead prefer that the firms compete less vigorously so as to maximize coupling profits. To determine the numerator, then, one must examine all the investors who are invested in both firms; for each, multiply their ownership percentages in the two firms; and then sum those products for all investors with common ownership. (If an investor were invested in only one firm in the coupling, its ownership percentage would be multiplied by zero and would thus drop out; after all, an investor in only one of the firms has no interest in maximization of coupling versus own-firm profits.)
  • The denominator assesses the degree to which the investor base (weighted by control) of the firm whose competition-reduction incentive is under consideration (J) would prefer that it maximize its own profits, not the profits of the coupling. Determining the denominator requires summing the squares of the ownership percentages of investors in that firm. Squaring means that very small investors essentially drop out and that the denominator grows substantially with large ownership percentages by particular investors. Large ownership percentages suggest the presence of shareholders that are more likely able to influence management, whether those shareholders also own shares in the second company or not.

Having assessed, for each firm in a coupling, the incentive to soften competition with the other, one must proceed to the second primary task: weighing the significance of those firms’ incentives not to compete with each other in light of the coupling’s shares of the market. (The idea here is that if two small firms reduced competition with one another, the effect on overall market competition would be less significant than if two large firms held their competitive fire.) To determine the significance to the market of the two coupling members’ incentives to reduce competition with each other, one must multiply each of the two fractions determined above (in Task 1) times the product of the market shares of the two firms in the coupling. This will generate two “cross-MHHI deltas,” one for each of the two firms in the coupling (e.g., one cross-MHHI∆ for Southwest/United and another for United/Southwest).

The third and final task is to aggregate the effect of common ownership-induced competition-softening throughout the market as a whole by summing the softened competition metrics (i.e., two cross-MHHI deltas for each coupling of competitors within the market). If decimals were used to account for the firms’ market shares (e.g., if a 25% market share was denoted 0.25), the sum should be multiplied by 10,000.

Following is a detailed list of instructions for assessing the MHHI∆ for a market (assuming proportionate control—i.e., that investors’ control rights correspond to their shares of firm profits).

A Nine-Step Guide to Calculating the MHHI∆ for a Market

  1. List the firms participating in the market and the market share of each.
  2. List each investor’s ownership percentage of each firm in the market.
  3. List the potential pairings of firms whose incentives to compete with each other must be assessed. There will be two such pairings for each coupling of competitors in the market (e.g., Southwest/United and United/Southwest) because one must assess the incentive of each firm in the coupling to compete with the other, and that incentive may differ for the two firms (e.g., United may have less incentive to compete with Southwest than Southwest with United). This implies that the number of possible pairings will always be n(n-1), where n is the number of firms in the market.
  4. For each investor, perform the following for each pairing of firms: Multiply the investor’s percentage ownership of the two firms in each pairing (e.g., Institutional Investor 1’s percentage ownership in United * Institutional Investor 1’s percentage ownership in Southwest for the United/Southwest pairing).
  5. For each pairing, sum the amounts from item four across all investors that are invested in both firms. (This will be the numerator in the fraction used in Step 7 to determine the pairing’s cross-MHHI∆.)
  6. For the first firm in each pairing (the one whose incentive to compete with the other is under consideration), sum the squares of the ownership percentages of that firm held by each investor. (This will be the denominator of the fraction used in Step 7 to determine the pairing’s cross-MHHI∆.)
  7. Figure the cross-MHHI∆ for each pairing of firms by doing the following: Multiply the market shares of the two firms, and then multiply the resulting product times a fraction consisting of the relevant numerator (from Step 5) divided by the relevant denominator (from Step 6).
  8. Add together the cross-MHHI∆s for each pairing of firms in the market.
  9. Multiply that amount times 10,000.

I will now illustrate this nine-step process by working through a concrete example.

An Example

Suppose four airlines—American, Delta, Southwest, and United—service a particular market. American and Delta each have 30% of the market; Southwest and United each have a market share of 20%.

Five funds are invested in the market, and each holds stock in all four airlines. Fund 1 owns 1% of each airline’s stock. Fund 2 owns 2% of American and 1% of each of the others. Fund 3 owns 2% of Delta and 1% of each of the others. Fund 4 owns 2% of Southwest and 1% of each of the others. And Fund 5 owns 2% of United and 1% of each of the others. None of the airlines has any other significant stockholder.

Step 1: List firms and market shares.

  1. American – 30% market share
  2. Delta – 30% market share
  3. Southwest – 20% market share
  4. United – 20% market share

Step 2: List investors’ ownership percentages.

Step 3: Catalogue potential competitive pairings.

  1. American-Delta (AD)
  2. American-Southwest (AS)
  3. American-United (AU)
  4. Delta-American (DA)
  5. Delta-Southwest (DS)
  6. Delta-United (DU)
  7. Southwest-American (SA)
  8. Southwest-Delta (SD)
  9. Southwest-United (SU)
  10. United-American (UA)
  11. United-Delta (UD)
  12. United-Southwest (US)

Steps 4 and 5: Figure numerator for determining cross-MHHI∆s.

Step 6: Figure denominator for determining cross-MHHI∆s.

Steps 7 and 8: Determine cross-MHHI∆s for each potential pairing, and then sum all.

  1. AD: .09(.0007/.0008) = .07875
  2. AS: .06(.0007/.0008) = .0525
  3. AU: .06(.0007/.0008) = .0525
  4. DA: .09(.0007/.0008) = .07875
  5. DS: .06(.0007/.0008) = .0525
  6. DU: .06(.0007/.0008) = .0525
  7. SA: .06(.0007/.0008) = .0525
  8. SD: .06(.0007/.0008) = .0525
  9. SU: .04(.0007/.0008) = .035
  10. UA: .06(.0007/.0008) = .0525
  11. UD: .06(.0007/.0008) = .0525
  12. US: .04(.0007/.0008) = .035
    SUM = .6475

Step 9: Multiply by 10,000.

MHHI∆ = 6475.

(NOTE: HHI in this market would total (30)(30) + (30)(30) + (20)(20) + (20)(20) = 2600. MHHI would total 9075.)
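Putting the nine steps together, a short script can verify the example. This is my own sketch, assuming proportionate control as the post does throughout; it reproduces the MHHI∆ of 6475:

```python
def mhhi_delta(shares, ownership):
    """MHHI delta (on the 0-10,000 scale), assuming proportionate control.

    shares: {firm: market share as a decimal, e.g. 0.30}
    ownership: {investor: {firm: fraction of that firm's shares held}}
    """
    total = 0.0
    for j in shares:  # firm whose incentive to soften competition is assessed
        # Step 6: sum of squared ownership stakes in firm j
        denom = sum(inv.get(j, 0.0) ** 2 for inv in ownership.values())
        for k in shares:
            if k == j:
                continue
            # Steps 4-5: common-ownership numerator for the (j, k) pairing
            numer = sum(inv.get(j, 0.0) * inv.get(k, 0.0)
                        for inv in ownership.values())
            # Step 7: weight by the product of the pairing's market shares
            total += shares[j] * shares[k] * numer / denom
    return 10_000 * total  # Steps 8-9

shares = {"American": 0.30, "Delta": 0.30, "Southwest": 0.20, "United": 0.20}
base = {f: 0.01 for f in shares}             # every fund holds 1% of each airline
ownership = {"Fund 1": dict(base)}
for n, firm in enumerate(shares, start=2):   # Funds 2-5 each hold 2% of one airline
    ownership[f"Fund {n}"] = {**base, firm: 0.02}

delta = mhhi_delta(shares, ownership)                 # ≈ 6475
hhi = 10_000 * sum(s ** 2 for s in shares.values())   # ≈ 2600
# MHHI = HHI + MHHI delta ≈ 9075
```

Because every pairing in this example shares the same numerator (.0007) and denominator (.0008), the script simply mechanizes the arithmetic shown in Steps 7 and 8 above.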

***

I mentioned earlier that neither MHHI nor MHHI∆ is subject to an upper limit of 10,000. For example, if there are four firms in a market, five institutional investors that each own 5% of the first three firms and 1% of the fourth, and no other investors holding significant stakes in any of the firms, MHHI∆ will be 15,500 and MHHI 18,000.  (Hat tip to Steve Salop, who helped create the MHHI metric, for reminding me to point out that MHHI and MHHI∆ are not limited to 10,000.)

Although not always front page news, International Trade Commission (“ITC”) decisions can have major impacts on trade policy and antitrust law. Scott Kieff, a former ITC Commissioner, recently published a thoughtful analysis of Certain Carbon and Alloy Steel Products — a potentially important ITC investigation that implicates the intersection of these two policy areas. Scott was on the ITC when the investigation was initiated in 2016, but left in 2017 before the decision was finally issued in March of this year.

Perhaps most important, the case highlights an uncomfortable truth:

Sometimes (often?) Congress writes really bad laws and promotes really bad policies, but administrative agencies can do more harm to the integrity of our legal system by abusing their authority in an effort to override those bad policies.

In this case, that “uncomfortable truth” plays out in the context of the ITC majority’s effort to override Section 337 of the Tariff Act of 1930 by limiting the ability of the ITC to investigate alleged violations of the Act rooted in antitrust.

While we’re all for limiting the ability of competitors to use antitrust claims in order to impede competition (as one of us has noted: “Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions”), it is inappropriate to make an end-run around valid and unambiguous legislation in order to do so — no matter how desirable the end result. (As the other of us has noted: “Attempts to [effect preferred policies] through any means possible are rational actions at an individual level, but writ large they may undermine the legal fabric of our system and should be resisted.”)

Brief background

Under Section 337, the ITC is empowered to, among other things, remedy

Unfair methods of competition and unfair acts in the importation of articles… into the United States… the threat or effect of which is to destroy or substantially injure an industry in the United States… or to restrain or monopolize trade and commerce in the United States.

In Certain Carbon and Alloy Steel Products, the ITC undertook an investigation — at the behest of U.S. Steel Corporation — into alleged violations of Section 337 by the Chinese steel industry. The complaint was based upon a number of claims, including allegations of price fixing.

As ALJ Lord succinctly summarizes in her Initial Determination:

For many years, the United States steel industry has complained of unfair trade practices by manufacturers of Chinese steel. While such practices have resulted in the imposition of high tariffs on certain Chinese steel products, U.S. Steel seeks additional remedies. The complaint by U.S. Steel in this case attempts to use section 337 of the Tariff Act of 1930 to block all Chinese carbon and alloy steel from coming into the United States. One of the grounds that U.S. Steel relies on is the allegation that the Chinese steel industry violates U.S. antitrust laws.

The ALJ dismissed the antitrust claims (alleging violations of the Sherman Act), however, concluding that they failed to allege antitrust injury as required by US courts deciding Sherman Act cases brought by private parties under the Clayton Act’s remedial provisions:

Under federal antitrust law, it is firmly established that a private complainant must show antitrust standing [by demonstrating antitrust injury]. U.S. Steel has not alleged that it has antitrust standing or the facts necessary to establish antitrust standing and erroneously contends it need not have antitrust standing to allege the unfair trade practice of restraining trade….

In its decision earlier this year, a majority of ITC commissioners agreed, and upheld the ALJ’s Initial Determination.

In comments filed with the ITC following the ALJ’s Initial Determination, we argued that the ALJ erred in her analysis:

Because antitrust injury is not an express requirement imposed by Congress, because ITC processes differ substantially from those of Article III courts, and because Section 337 is designed to serve different aims than private antitrust litigation, the Commission should reinstate the price fixing claims and allow the case to proceed.

Unfortunately, in upholding the Initial Determination, the Commission compounded this error, and also failed to properly understand the goals of the Tariff Act, and, by extension, its own role as arbiter of “unfair” trade practices.

A tale of two statutes

The case appears to turn on an arcane issue of adjudicative process: antitrust claims brought under the antitrust laws in federal court, on the one hand, versus antitrust claims brought under Section 337 of the Tariff Act at the ITC, on the other. But it is actually about much more: the very purposes and structures of those laws.

The ALJ notes that

[The Chinese steel manufacturers contend that] under antitrust law as currently applied in federal courts, it has become very difficult for a private party like U.S. Steel to bring an antitrust suit against its competitors. [U.S.] Steel accepts this but says the law under section 337 should be different than in federal courts.

And as the ALJ further notes, this highlights the differences between the two regimes:

The dispute between U.S. Steel and the Chinese steel industry shows the conflict between section 337, which is intended to protect American industry from unfair competition, and U.S. antitrust laws, which are intended to promote competition for the benefit of consumers, even if such competition harms competitors.

Nevertheless, the ALJ (and the Commission) holds that antitrust laws must be applied in the same way in federal court as under Section 337 at the ITC.

It is this conclusion that is in error.

Judging from his article, it’s clear that Kieff agrees and would have dissented from the Commission’s decision. As he writes:

Unlike the focus in Section 16 of the Clayton Act on harm to the plaintiff, the provisions in the ITC’s statute — Section 337 — explicitly require the ITC to deal directly with harms to the industry or the market (rather than to the particular plaintiff)…. Where the statute protects the market rather than the individual complainant, the antitrust injury doctrine’s own internal logic does not compel the imposition of a burden to show harm to the particular private actor bringing the complaint. (Emphasis added)

Somewhat similar to the antitrust laws, the overall purpose of Section 337 focuses on broader, competitive harm — injury to “an industry in the United States” — not specific competitors. But unlike the Clayton Act, the Tariff Act does not accomplish this by providing a remedy for private parties alleging injury to themselves as a proxy for this broader, competitive harm.

As Kieff writes:

One stark difference between the two statutory regimes relates to the explicit goals that the statutes state for themselves…. [T]he Clayton Act explicitly states it is to remedy harm to only the plaintiff itself. This difference has particular significance for [the Commission’s decision in Certain Carbon and Alloy Steel Products] because the Supreme Court’s source of the private antitrust injury doctrine, its decision in Brunswick, explicitly tied the doctrine to this particular goal.

More particularly, much of the Court’s discussion in Brunswick focuses on the role the [antitrust injury] doctrine plays in mitigating the risk of unjustly enriching the plaintiff with damages awards beyond the amount of the particular antitrust harm that plaintiff actually suffered. The doctrine makes sense in the context of the Clayton Act proceedings in federal court because it keeps the cause of action focused on that statute’s stated goal of protecting a particular litigant only in so far as that party itself is a proxy for the harm to the market.

By contrast, since the goal of the ITC’s statute is to remedy for harm to the industry or to trade and commerce… there is no need to closely tie such broader harms to the market to the precise amounts of harms suffered by the particular complainant. (Emphasis and paragraph breaks added)

The mechanism by which the Clayton Act works is decidedly to remedy injury to competitors (including with treble damages). But because its larger goal is the promotion of competition, it cabins that remedy in order to ensure that it functions as an appropriate proxy for broader harms, and not simply a tool by which competitors may bludgeon each other. As Kieff writes:

The remedy provisions of the Clayton Act benefit much more than just the private plaintiff. They are designed to benefit the public, echoing the view that the private plaintiff is serving, indirectly, as a proxy for the market as a whole.

The larger purpose of Section 337 is somewhat different, and its remedial mechanism is decidedly different:

By contrast, the provisions in Section 337[] are much more direct in that they protect against injury to the industry or to trade and commerce more broadly. Harm to the particular complainant is essentially only relevant in so far as it shows harm to the industry or to trade and commerce more broadly. In turn, the remedies the ITC’s statute provides are more modest and direct in stopping any such broader harm that is determined to exist through a complete investigation.

The distinction between antitrust laws and trade laws is firmly established in the case law. And, in particular, trade laws not only focus on effects on industry rather than consumers or competition, per se, but they also contemplate a different kind of economic injury:

The “injury to industry” causation standard… focuses explicitly upon conditions in the U.S. industry…. In effect, Congress has made a judgment that causally related injury to the domestic industry may be severe enough to justify relief from less than fair value imports even if from another viewpoint the economy could be said to be better served by providing no relief. (Emphasis added)

Importantly, under Section 337 such harms to industry would ultimately have to be shown before a remedy would be imposed. In other words, demonstration of injury to competition is a constituent part of a case under Section 337. By contrast, such a demonstration is brought into an action under the antitrust laws by the antitrust injury doctrine as a function of establishing that the plaintiff has standing to sue as a proxy for broader harm to the market.

Finally, it should be noted, as ITC Commissioner Broadbent points out in her dissent from the Commission’s majority opinion, that U.S. Steel alleged in its complaint a violation of the Sherman Act, not the Clayton Act. Although its ability to enforce the Sherman Act arises from the remedial provisions of the Clayton Act, the substantive analysis of its claims is a Sherman Act matter. And the Sherman Act does not contain any explicit antitrust injury requirement. This is a crucial distinction because, as Commissioner Broadbent notes (quoting the Federal Circuit’s Tianrui case):

The “antitrust injury” standing requirement stems, not from the substantive antitrust statutes like the Sherman Act, but rather from the Supreme Court’s interpretation of the injury elements that must be proven under sections 4 and 16 of the Clayton Act.

* * *

Absent [] express Congressional limitation, restricting the Commission’s consideration of unfair methods of competition and unfair acts in international trade “would be inconsistent with the congressional purpose of protecting domestic commerce from unfair competition in importation….”

* * *

Where, as here, no such express limitation in the Sherman Act has been shown, I find no legal justification for imposing the insurmountable hurdle of demonstrating antitrust injury upon a typical U.S. company that is grappling with imports that benefit from the international unfair methods of competition that have been alleged in this case.

Section 337 is not a stand-in for other federal laws, even where it protects against similar conduct, and its aims diverge in important ways from those of other federal laws. It is, in other words, a trade protection provision, first and foremost, not an antitrust law, patent law, or even precisely a consumer protection statute.

The ITC hamstrings itself

Kieff lays out a number of compelling points in his paper, including an argument that the ITC was statutorily designed as a convenient forum with broad powers in order to enable trade harms to be remedied without resort to expensive and protracted litigation in federal district court.

But, perhaps even more important, he points to a contradiction in the ITC’s decision that is directly related to its statutory design.

Under the Tariff Act, the Commission is entitled to self-initiate a Section 337 investigation identical to the one in Certain Carbon and Alloy Steel Products. And, as in this case, private parties are also entitled to file complaints with the Commission that can serve as the trigger for an investigation. In both instances, the ITC itself decides whether there is sufficient basis for proceeding, and, although an investigation unfolds much like litigation in federal court, it is, in fact, an investigation (and decision) undertaken by the ITC itself.

Although the Commission is statutorily mandated to initiate an investigation once a complaint is properly filed, this is subject to a provision requiring the Commission to “examine the complaint for sufficiency and compliance with the applicable sections of this Chapter.” Thus, the Commission conducts a preliminary investigation to determine if the complaint provides a sound basis for institution of an investigation, not unlike an assessment of standing and evaluation of the sufficiency of a complaint in federal court — all of which happens before an official investigation is initiated.

Yet despite the fact that, before an investigation begins, the ITC either 1) decides for itself that there is sufficient basis to initiate its own action, or else 2) evaluates the sufficiency of a private complaint to determine if the Commission should initiate an action, the logic of the decision in Certain Carbon and Alloy Steel Products would apply different standards in each case. Writes Kieff:

There appears to be broad consensus that the ITC can self-initiate an antitrust case under Section 337 and in such a proceeding would not be required to apply the antitrust injury doctrine to itself or to anyone else…. [I]t seems odd to make [this] legal distinction… After all, if it turned out there really were harm to a domestic industry or trade and commerce in this case, it would be strange for the ITC to have to dismiss this action and deprive itself of the benefit of the advance work and ongoing work of the private party [just because it was brought to the ITC’s attention by a private party complaint], only to either sit idle or expend the resources to — flying solo that time — reinitiate and proceed to completion.

Odd indeed, because, in the end, what is instituted is an investigation undertaken by the ITC — whether it originates from a private party or from its own initiative. The role of a complaining party before the ITC is quite distinct from that of a plaintiff in an Article III court.

In trade these days, it always comes down to China

We are hesitant to offer justifications for Congress’ decision to grant the ITC a sweeping administrative authority to prohibit the “unfair” importation of articles into the US, but there could be good reasons that Congress enacted the Tariff Act as a protectionist statute.

In a recent Law360 article, Kieff noted that analyzing anticompetitive behavior in the trade context is more complicated than in the domestic context. To take the current example: By limiting the complainant’s ability to initiate an ITC action based on a claim that foreign competitors are conspiring to keep prices artificially low, the ITC majority decision may be short-sighted insofar as keeping prices low might actually be part of a larger industrial and military policy for the Chinese government:

The overlooked problem is that, as the ITC petitioners claim, the Chinese government is using its control over many Chinese steel producers to accomplish full-spectrum coordination on both price and quantity. Mere allegations of course would have to be proven; but it’s not hard to imagine that such coordination could afford the Chinese government effective surveillance and control over almost the entire worldwide supply chain for steel products.

This access would help the Chinese government run significant intelligence operations…. China is allegedly gaining immense access to practically every bid and ask up and down the supply chain across the global steel market in general, and our domestic market in particular. That much real-time visibility across steel markets can in turn give visibility into defense, critical infrastructure and finance.

Thus, by taking it upon itself to artificially narrow its scope of authority, the ITC could be undermining a valid congressional concern: that trade distortions not be used as a way to allow a foreign government to gain a more pervasive advantage over diplomatic and military operations.

No one seriously doubts that China is, at the very least, a supportive partner to much of its industry in a way that gives that industry some potential advantage over competitors operating in countries that receive relatively less assistance from national governments.

In certain industries — notably semiconductors and patent-intensive industries more broadly — the Chinese government regularly imposes onerous conditions (including mandatory IP licensing and joint ventures with Chinese firms, invasive audits, and obligatory software and hardware “backdoors”) on foreign tech companies doing business in China. It has long been an open secret that these efforts, ostensibly undertaken for the sake of national security, are actually aimed at protecting or bolstering China’s domestic industry.

And China could certainly leverage these partnerships to obtain information on a significant share of important industries and their participants throughout the world. After all, we are well familiar with this business model: cheap or highly subsidized access to a desired good or service in exchange for user data is the basic description of modern tech platform companies.

Only Congress can fix Congress

Stepping back from the ITC context, a key inquiry when examining antitrust through a trade lens is the extent to which countries will use antitrust as a non-tariff barrier to restrain trade. It is certainly the case that a sort of “mutually assured destruction” can arise where every country chooses to enforce its own ambiguously worded competition statute in a way that can favor its domestic producers to the detriment of importers. In the face of that concern, the impetus to try to apply procedural constraints on open-ended competition laws operating in the trade context is understandable.

And as a general matter, it also makes sense to be concerned when producers like U.S. Steel try to use our domestic antitrust laws to disadvantage Chinese competitors or keep them out of the market entirely.

But in this instance the analysis is more complicated. Like it or not, what amounts to injury in the international trade context, even with respect to anticompetitive conduct, is different than what’s contemplated under the antitrust laws. When the Tariff Act of 1922 was passed (its unfair-competition provision later became Section 337 of the Tariff Act of 1930), the Senate Finance Committee Report that accompanied it described the scope of its unfair methods of competition authority as “broad enough to prevent every type and form of unfair practice” involving international trade. At the same time, Congress pretty clearly gave the ITC the discretion to proceed on a much less constrained basis than that on which Article III courts operate.

If these are problems, Congress needs to fix them, not the ITC acting sua sponte.

Moreover, as Kieff’s paper (and our own comments in the Certain Carbon and Alloy Steel Products investigation) makes clear, there are also a number of relevant, practical distinctions between enforcement of the antitrust laws in a federal court in a case brought by a private plaintiff and an investigation of alleged anticompetitive conduct by the ITC under Section 337. Every one of these cuts against importing an antitrust injury requirement from federal court into ITC adjudication.

Instead, understandable as its motivation may be, the ITC majority’s approach in Certain Carbon and Alloy Steel Products requires disregarding Congressional intent, and that’s simply not a tenable interpretive approach for administrative agencies to take.

Protectionism is a terrible idea, but if that’s how Congress wrote the Tariff Act, the ITC is legally obligated to enforce the protectionist law it is given.

Following is the (slightly expanded and edited) text of my remarks from the panel, Antitrust and the Tech Industry: What Is at Stake?, hosted last Thursday by CCIA. Bruce Hoffman (keynote), Bill Kovacic, Nicolas Petit, and Christine Caffarra also spoke. If we’re lucky Bruce will post his remarks on the FTC website; they were very good.

(NB: Some of these comments were adapted (or lifted outright) from a forthcoming Cato Policy Report cover story co-authored with Gus Hurwitz, so Gus shares some of the credit/blame.)

 

The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult for some to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of industry — and the tech industry in particular — are framed in antitrust terms. Take Senator Elizabeth Warren, for example:

[T]oday, in America, competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.

And she is not alone. A growing chorus of advocates is now calling for invasive, “public-utility-style” regulation or even the dissolution of some of the world’s most innovative companies, essentially because they are “too big.”

According to critics, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of democracy — because of their size. What is needed, they say, is industrial policy that shackles large companies or effectively mandates smaller firms in order to keep their economic and political power in check.

But consider the relationship between firm size and political power and democracy.

Say you’re successful in reducing the size of today’s largest tech firms and in deterring the creation of new, very-large firms: What effect might we expect this to have on their political power and influence?

For the critics, the effect is obvious: A re-balancing of wealth and thus the reduction of political influence away from Silicon Valley oligarchs and toward the middle class — the “rudder that steers American democracy on an even keel.”

But consider a few (and this is by no means all) countervailing points:

To begin, at the margin, if you limit firms’ ability to grow as a means of competing with rivals, you make competition through political influence correspondingly more important. Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions, and rent-seeking by smaller firms could both become more prevalent and, paradoxically, ultimately lead to increased concentration.

Next, by imbuing antitrust with an ill-defined set of vague political objectives, you also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so.

And finally, if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? All of a sudden the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might even find, again, that we end up with even more concentration because the exceptions could subsume the rules.

All of which of course highlights the fundamental, underlying problem: If you make antitrust more political, you’ll get less democratic, more politically determined, results — precisely the opposite of what proponents claim to want.

Then there’s democracy, and calls to break up tech in order to save it. Calls to do so are often made with reference to the original intent of the Sherman Act and Louis Brandeis and his “curse of bigness.” But intentional or not, these are rallying cries for the assertion, not the restraint, of political power.

The Sherman Act’s origin was ambivalent: although it was intended to proscribe business practices that harmed consumers, it was also intended to allow politically-preferred firms to maintain high prices in the face of competition from politically-disfavored businesses.

The years leading up to the adoption of the Sherman Act in 1890 were characterized by dramatic growth in the efficiency-enhancing, high-tech industries of the day. For many, the purpose of the Sherman Act was to stem this growth: to prevent low prices — and, yes, large firms — from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of Trans-Missouri Freight, one of the early Supreme Court decisions applying the Act.

Left to the courts, however, the Sherman Act didn’t quite do the trick. By 1911 (in Standard Oil and American Tobacco) — and reflecting consumers’ preferences for low prices over smaller firms — only “unreasonable” conduct was actionable under the Act. As one of the prime intellectual engineers behind the Clayton Antitrust Act and the Federal Trade Commission in 1914, Brandeis played a significant role in the (partial) legislative and administrative overriding of the judiciary’s excessive support for economic efficiency.

Brandeis was motivated by the belief that firms could become large only by illegitimate means and by deceiving consumers. But Brandeis was no advocate for consumer sovereignty. In fact, consumers, in Brandeis’ view, needed to be saved from themselves because they were, at root, “servile, self-indulgent, indolent, ignorant.”

There’s a lot that today we (many of us, at least) would find anti-democratic in the underpinnings of progressivism in US history: anti-consumerism; racism; elitism; a belief in centrally planned, technocratic oversight of the economy; promotion of social engineering, including through eugenics; etc. The aim of limiting economic power was manifestly about stemming the threat it posed to powerful people’s conception of what political power could do: to mold and shape the country in their image — what economist Thomas Sowell calls “the vision of the anointed.”

That may sound great when it’s your vision being implemented, but today’s populist antitrust resurgence comes while Trump is in the White House. It’s baffling to me that so many would expand and then hand over the means to design the economy and society in their image to antitrust enforcers in the executive branch and presidentially appointed technocrats.

Throughout US history, it is the courts that have often been the bulwark against excessive politicization of the economy, and it was the courts that shepherded the evolution of antitrust away from its politicized roots toward rigorous, economically grounded policy. And it was progressives like Brandeis who worked to take antitrust away from the courts. Now, with efforts like Senator Klobuchar’s merger bill, the “New Brandeisians” want to rein in the courts again — to get them out of the way of efforts to implement their “big is bad” vision.

But the evidence that big is actually bad, least of all on those non-economic dimensions, is thin and contested.

While Zuckerberg is grilled in Congress over perceived, endemic privacy problems, politician after politician and news article after news article rushes to assert that the real problem is Facebook’s size. Yet there is no convincing analysis (maybe no analysis of any sort) that connects its size with the problem, or that evaluates whether the asserted problem would actually be cured by breaking up Facebook.

Barry Lynn claims that the origins of antitrust are in the checks and balances of the Constitution, extended to economic power. But if that’s right, then the consumer welfare standard and the courts are the only things actually restraining the disruption of that order. If there may be gains to be had from tweaking the minutiae of the process of antitrust enforcement and adjudication, by all means we should have a careful, lengthy discussion about those tweaks.

But throwing the whole apparatus under the bus for the sake of an unsubstantiated, neo-Brandeisian conception of what the economy should look like is a terrible idea.

The terms of the United Kingdom’s (UK) exit from the European Union (EU) – “Brexit” – are of great significance not just to UK and EU citizens, but for those in the United States and around the world who value economic liberty (see my Heritage Foundation memorandum giving the reasons why, here).

If Brexit is to promote economic freedom and enhanced economic welfare, Brexit negotiations between the UK and the EU must not limit the ability of the United Kingdom to pursue (1) efficiency-enhancing regulatory reform and (2) trade liberalizing agreements with non-EU nations.  These points are expounded upon in a recent economic study (The Brexit Inflection Point) by the non-profit UK think tank the Legatum Institute, which has produced an impressive body of research on the benefits of Brexit, if implemented in a procompetitive, economically desirable fashion.  (As a matter of full disclosure, I am a member of Legatum’s “Special Trade Commission,” which “seeks to re-focus the public discussion on Brexit to a positive conversation on opportunities, rather than challenges, while presenting empirical evidence of the dangers of not following an expansive trade negotiating path.”  Members of the Special Trade Commission are unpaid – they serve on a voluntary pro bono basis.)

Unfortunately, however, leading UK press commentators have urged the UK Government to accede to a full harmonization of UK domestic regulations and trade policy with the EU.  Such a deal would be disastrous.  It would prevent the UK from entering into mutually beneficial trade liberalization pacts with other nations or groups of nations (e.g., with the U.S. and with the members of the Transpacific Partnership (TPP) trade agreement), because such arrangements by necessity would lead to a divergence with EU trade strictures.  It would also preclude the UK from unilaterally reducing harmful regulatory burdens that are a byproduct of economically inefficient and excessive EU rules.  In short, it would be antithetical to economic freedom and economic welfare.

Notably, in a November 30 article (Six Impossible Notions About “Global Britain”), a well-known business journalist, Martin Wolf of the Financial Times, sharply criticized The Brexit Inflection Point’s recommendation that the UK should pursue trade and regulatory policies that would diverge from EU standards.  In particular, Wolf characterized as an “impossible thing” Legatum’s point that the UK should not “’allow itself to be bound by the EU’s negotiating mandate.’  We all now know this is infeasible.  The EU holds the cards and it knows it holds the cards. The Legatum authors still do not.”

Shanker Singham, Director of Economic Policy and Prosperity Studies at Legatum, brilliantly responded to Wolf’s critique in a December 4 article (published online by CAPX) entitled A Narrow-Minded Brexit Is Doomed to Fail.  Singham’s trenchant analysis merits being set forth in its entirety (by permission of the author):

“Last week, the Financial Times’s chief economics commentator, Martin Wolf, dedicated his column to criticising The Brexit Inflection Point, a report for the Legatum Institute in which Victoria Hewson, Radomir Tylecote and I discuss what would constitute a good end state for the UK as it seeks to exercise an independent trade and regulatory policy post Brexit, and how we get from here to there.

We write these reports to advance ideas that we think will help policymakers as they tackle the single biggest challenge this country has faced since the Second World War. We believe in a market place of ideas, and we welcome challenge. . . .

[W]e are thankful that Martin Wolf, an eminent economist, has chosen to engage with the substance of our arguments. However, his article misunderstands the nature of modern international trade negotiations, as well as the reality of the European Union’s regulatory system – and so his claim that, like the White Queen, we “believe in impossible things” simply doesn’t stack up.

Mr Wolf claims there are six impossible things that we argue. We will address his rebuttals in turn.

But first, in discussions about the UK’s trade policy, it is important to bear in mind that the British government is currently discussing the manner in which it will retake its independent WTO membership. This includes agricultural import quotas, and its WTO rectification processes with other WTO members.

If other countries believe that the UK will adopt the position of maintaining regulatory alignment with the EU, as advocated by Mr Wolf and others, the UK’s negotiating strategy would be substantially weaker. It would quite wrongly suggest that the UK will be unable to lower trade barriers and offer the kind of liberalisation that our trading partners seek and that would work best for the UK economy. This could negatively impact both the UK and the EU’s ongoing discussions in the WTO.

Has the EU’s trading system constrained growth in the world?

The first impossible thing Mr Wolf claims we argue is that the EU system of protectionism and harmonised regulation has constrained economic growth for Britain and the world. He is right to point out that the volume of world trade has increased, and the UK has, of course, experienced GDP growth while a member of the EU.

However, as our report points out, the EU’s prescriptive approach to regulation, especially in the recent past (for example, its approach on data protection, audio-visual regulation, the restrictive application of the precautionary principle, REACH chemicals regulation, and financial services regulations to name just a few) has led to an increase in anti-competitive regulation and market distortions that are wealth destructive.

As the OECD notes in various reports on regulatory reform, regulation can act as a behind-the-border barrier to trade and impede market openness for trade and investment. Inefficient regulation imposes unnecessary burdens on firms, increases barriers to entry, impacts on competition and incentives for innovation, and ultimately hurts productivity. The General Data Protection Regulation (GDPR) is an example of regulation that is disproportionate to its objectives; it is highly prescriptive and imposes substantial compliance costs for business that want to use data to innovate.

Rapid growth during the post-war period is in part thanks to the progressive elimination of border trade barriers. But, in terms of wealth creation, we are no longer growing at that rate. Since before the financial crisis, measures of actual wealth creation (not GDP which includes consumer and government spending) such as industrial output have stalled, and the number of behind-the-border regulatory barriers has been increasing.

The global trading system is in difficulty. The lack of negotiation of a global trade round since the Uruguay Round, the lack of serious services liberalisation in either the built-in agenda of the WTO or sectorally following on from the Basic Telecoms Agreement and its Reference Paper on Competition Safeguards in 1997 has led to an increase in behind-the-border barriers and anti-competitive distortions and regulation all over the world. This stasis in international trade negotiations is an important contributory factor to what many economists have talked about as a “new normal” of limited growth, and a global decline in innovation.

Meanwhile the EU has sought to force its regulatory system on the rest of the world (the GDPR is an example of this). If it succeeds, the result would be the kind of wealth destruction that pushes more people into poverty. It is against this backdrop that the UK is negotiating with both the EU and the rest of the world.

The question is whether an independent UK, the world’s sixth biggest economy and second biggest exporter of services, is able to contribute to improving the dynamics of the global economic architecture, which means further trade liberalisation. The EU is protectionist against outside countries, which is antithetical to the overall objectives of the WTO. This is true in agriculture and beyond. For example, the EU imposes tariffs on cars at four times the rate applied by the US, while another large auto manufacturing country, Japan, has unilaterally removed its auto tariffs.

In addition, the EU27 represents a declining share of UK exports, which is rather counter-intuitive for a Customs Union and single market. In 1999, the EU represented 55 per cent of UK exports, and by 2016, this was 43 per cent. That said, the EU will remain an important, albeit declining, market for the UK, which is why we advocate a comprehensive free trade agreement with it.

Can the UK secure meaningful regulatory recognition from the EU without being identical to it?

Second, Mr Wolf suggests that regulatory recognition between the UK and EU is possible only if there is harmonisation or identical regulation between the UK and EU.

This is at odds with WTO practice, stretching back to its rules on domestic laws and regulation as encapsulated in Article III of the GATT and Article VI of the GATS, and as expressed in the Technical Barriers to Trade (TBT) and Sanitary and Phytosanitary (SPS) agreements.

This is the critical issue. The direction of travel of international trade thinking is towards countries recognising each other’s regulatory systems if they achieve the same ultimate goal of regulation, even if the underlying regulation differs, and to regulate in ways that are least distortive to international trade and competition. There will be areas where this level of recognition will not be possible, in which case UK exports into the EU will of course have to satisfy the standards of the EU. But even here we can mitigate the trade costs to some extent by Mutual Recognition Agreements on conformity assessment and market surveillance.

Had the US taken the view that it would not receive regulatory recognition unless their regulatory systems were the same, the recent agreement on prudential measures in insurance and reinsurance services between the EU and US would not exist. In fact this point highlights the crucial issue which the UK must successfully negotiate, and one in which its interests are aligned with other countries and with the direction of travel of the WTO itself. The TBT and SPS agreements broadly provide that mutual recognition should not be denied where regulatory goals are aligned but technical regulation differs.

Global trade and regulatory policy increasingly looks for regulation that promotes competition. The EU is on a different track, as the GDPR demonstrates. This is the reason that both the Canada-EU agreement (CETA) and the EU offer in the Trade in Services agreement (TiSA) do not include new services. If GDPR were to become the global standard, trade in data would be severely constrained, slowing the development of big data solutions, the fourth industrial revolution, and new services trade generally.

As many firms recognise, this would be extremely damaging to global prosperity. In arguing that regulatory recognition is only available if the UK is fully harmonised with the EU, Mr Wolf may be in harmony with the EU approach to regulation. But that is exactly the approach that is damaging the global trading environment.

Can the UK exercise trade policy leadership?

Third, Mr Wolf suggests that other countries do not, and will not, look to the UK for trade leadership. He cites the US’s withdrawal from the trade negotiating space as an example. But surely the absence of the world’s biggest services exporter means that the world’s second biggest exporter of services will be expected to advocate for its own interests, and argue for greater services liberalisation.

Mr Wolf believes that the UK is a second-rank power in decline. We take a different view of the world’s sixth biggest economy, the financial capital of the world and the second biggest exporter of services. As former New Zealand High Commissioner, Sir Lockwood Smith, has said, the rest of the world does not see the UK as the UK too often seems to see itself.

The global companies that have their headquarters in the UK do not see things the same way as Mr Wolf. In fact, the lack of trade leadership since 1997 means that a country with significant services exports would be expected to show some leadership.

Mr Wolf’s point is that far from seeking to grandiosely lead global trade negotiations, the UK should stick to its current knitting, which consists of its WTO rectification, and includes the negotiation of its agricultural import quotas and production subsidies in agriculture. This is perhaps the most concerning part of his argument. Yes, the UK must rectify its tariff schedules, but for that process to be successful, especially on agricultural import quotas, it must be able to demonstrate to its partners that it will be able to grant further liberalisation in the near future. If it can’t, then its trading partners will have no choice but to demand as much liberalisation as they can secure right now in the rectification process.

This will complicate that process, and cause damage to the UK as it takes up its independent WTO membership. Those WTO partners who see the UK as vulnerable on this point will no doubt see validation in Mr Wolf’s article and assume it means that no real liberalisation will be possible from the UK. The EU should note that complicating this process for the UK will not help the EU in its own WTO processes, where it is vulnerable.

Trade negotiations are dynamic not static and the UK must act quickly

Fourth, Mr Wolf suggests that the UK is not under time pressure to “escape from the EU”.  This statement does not account for how international trade negotiations work in practice. In order for countries to cooperate with the UK on its WTO rectification and its TRQ negotiations, as well as to seriously negotiate with it, they have to believe that the UK will have control over its tariff schedules and regulatory autonomy from day one of Brexit (even if we may choose not to make changes to them for an implementation period).

If non-EU countries think that the UK will not be able to exercise its freedom for several years, they will simply demand their pound of flesh in the negotiations now, and get on with the rest of their trade policy agenda. Trade negotiations are not static. The US executive could lose trade-negotiating authority in the summer of next year if the NAFTA renegotiation is not going well. Other countries will seek to accede to the Trans Pacific Partnership (TPP). China is moving forward with the Regional Comprehensive Economic Partnership (RCEP), which does not meaningfully touch on domestic regulatory barriers. Much as we might criticise Donald Trump, his administration has expressed strong political will for a UK-US agreement, and in that regard has broken with traditional US trade policy thinking. The UK has an opportunity to strike and must take it.

The UK should prevail on the EU to allow Customs Agencies to be inter-operable from day one

Fifth, with respect to the challenges raised on customs agencies working together, our report argued that UK customs and the customs agencies of the EU member states should discuss customs arrangements at a practical and technical level now. What stands in the way of this is the EU’s stubbornness. Customs agencies are in regular contact on a business-as-usual basis, so the inability of UK and member-state customs agencies to talk to each other about the critical issue of new arrangements would seem to border on negligence. Of course, the EU should allow member states to have these critical conversations now.  Given the importance of customs agencies interoperating smoothly from day one, the UK Government must press its case with the European Commission to allow such conversations to start happening as a matter of urgency.

Does the EU hold all the cards?

Sixth, Mr Wolf argues that the EU holds all the cards and knows it holds all the cards, and therefore disagrees with our claim that the UK should “not allow itself to be bound by the EU’s negotiating mandate”. As with his other claims, Mr Wolf finds himself agreeing with the EU’s negotiators. But that does not make him right.

While the absence of a trade deal will of course damage UK industries, the cost to EU industries is also very significant. Beef and dairy in Ireland, cars and dairy in Bavaria, cars in Catalonia, textiles and dairy in Northern Italy – all over Europe (and in politically sensitive areas), industries stand to lose billions of Euros and thousands of jobs. This is without considering the impact of no financial services deal, which would increase the cost of capital in the EU, aborting corporate transactions and raising the cost of the supply chain. The EU has chosen a mandate that risks neither party getting what it wants.

The notion that the EU is a masterful negotiator, while the UK’s negotiators are hopeless, is not the global view of the EU and the UK. Far from it. The EU in international trade negotiations has a reputation for being slow moving, lacking in creative vision, and unable to conclude agreements. Indeed, others have generally gone to the UK when they have been met with intransigence in Brussels.

What do we do now?

Mr Wolf’s argument amounts to a claim that the UK is not capable of the kind of further and deeper liberalisation that its economy would suggest is both possible and highly desirable both for the UK and the rest of the world. According to Mr Wolf, the UK can only consign itself to a highly aligned regulatory orbit around the EU, unable to realise any other agreements, and unable to influence the regulatory system around which it revolves, even as that system becomes ever more prescriptive and anti-competitive. Such a position is at odds with the facts and would guarantee a poor result for the UK and also cause opportunities to be lost for the rest of the world.

In all of our [Legatum Brexit-related] papers, we have started from the assumption that the British people have voted to leave the EU, and the government is implementing that outcome. We have then sought to produce policy recommendations based on what would constitute a good outcome as a result of that decision. This can be achieved only if we maximise the opportunities and minimise the disruptions.

We all recognise that the UK has embarked on a very difficult process. But there is a difference between difficult and impossible. There is also a difference between tasks that must be done and take time, and genuine negotiation points. We welcome the debate that comes from constructive challenge of our proposals; and we ask in turn that those who criticise us suggest alternative plans that might achieve positive outcomes. We look forward to the opportunity of a broader debate so that collectively the country can find the best path forward.


Yesterday Learfield and IMG College inked their recently announced merger. Since the negotiations were made public several weeks ago, the deal has garnered some wild speculation and potentially negative attention. Now that the merger has been announced, it’s bound to attract even more attention and conjecture.

On the field of competition, however, the market realities that support the merger’s approval are compelling. And, more importantly, the features of this merger provide critical lessons on market definition, barriers to entry, and other aspects of antitrust law related to two-sided and advertising markets that can be applied to numerous matters vexing competition commentators.

First, some background

Learfield and IMG specialize in managing multimedia rights (MMRs) for intercollegiate sports. They are, in effect, classic advertising intermediaries, facilitating the monetization by colleges of radio broadcast advertising and billboard, program, and scoreboard space during games (among other things), and the purchase by advertisers of access to these valuable outlets.

Although these transactions can certainly be (and very often are) entered into by colleges and advertisers directly, firms like Learfield and IMG allow colleges to outsource the process — as one firm’s tag line puts it, “We Work | You Play.” Most important, by bringing multiple schools’ MMRs under one roof, these firms can reduce the transaction costs borne by advertisers in accessing multiple outlets as part of a broad-based marketing plan.

Media rights and branding are a notable source of revenue for collegiate athletic departments: on average, they account for about 3% of these revenues. While they tend to pale in comparison to TV rights, ticket sales, and fundraising, for major programs, MMRs may be the next most important revenue source after these.

Many collegiate programs retain some or all of their multimedia rights and use in-house resources to market them. In some cases schools license MMRs through their athletic conference. In other cases, schools ink deals to outsource their MMRs to third parties, such as Learfield, IMG, JMI Sports, Outfront Media, and Fox Sports, among several others. A few schools even use professional sports teams to manage their MMRs (the owner of the Red Sox manages Boston College’s MMRs, for example).

Schools switch among MMR managers with some regularity and, apparently, in most cases not between the merging parties. Michigan State, for example, was well known for handling its MMRs in-house. But in 2016 the school entered into a 15-year deal with Fox Sports, with an estimated minimum guarantee of $150 million. In 2014 Arizona State terminated its MMR deal with IMG and took its MMRs in-house. Then, in 2016, the Sun Devils entered into a first-of-its-kind arrangement with the Pac 12 in which the school manages and sells its own marketing and media rights while the conference handles core business functions for the sales and marketing team (like payroll, accounting, human resources, and employee benefits). The most successful new entrant on the block, JMI Sports, won Kentucky, Clemson, and the University of Pennsylvania from Learfield or IMG. Outfront Media was spun off from CBS in 2014 and has become one of the strongest MMR intermediary competitors, handling some of the biggest names in college sports, including LSU, Maryland, and Virginia. All told, eight recent national Division I champions are served by MMR managers other than IMG and Learfield.

The supposed problem

As noted above, the most obvious pro-competitive benefit of the merger is in the reduction in transaction costs for firms looking to advertise in multiple markets. But, in order to confer that benefit (which, of course, also benefits the schools, whose marketing properties become easier to access), that also means a dreaded increase in size, measured by number of schools’ MMRs managed. So is this cause for concern?

Jason Belzer, a professor at Rutgers University and founder of the sports consulting firm GAME, Inc., has said that the merger will create a juggernaut — yes, “a massive inexorable force… that crushes whatever is in its path” — that is likely to invite antitrust scrutiny. The New York Times opines that the deal will allow Learfield to “tighten its grip — for nearly total control — on this niche but robust market,” “surely” attracting antitrust scrutiny. But these assessments seem dramatically overblown, and insufficiently grounded in the dynamics of the market.

Belzer’s concerns seem to be merely the size of the merging parties — again, measured by the number of schools’ rights they manage — and speculation that the merger would bring to an end “any” opportunity for entry by a “major” competitor. These are misguided concerns.

To begin, the focus on the potential entry of a “major” competitor is an odd standard that ignores the actual and potential entry of many smaller competitors that are able to win some of the most prestigious and biggest schools. In fact, many in the industry argue — rightly — that there are few economies of scale for colleges. Most of these firms’ employees are dedicated to a particular school and those costs must be incurred for each school, no matter the number, and borne by new entrants and incumbents alike. That means a small firm can profitably compete in the same market as larger firms — even “juggernauts.” Indeed, every college that brings MMR management in-house is, in fact, an entrant — and there are some big schools in big conferences that manage their MMRs in-house.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to in-house MMR management indicate that no competitor has any measurable market power that can disadvantage schools or advertisers.

Indeed, from the perspective of the school, the true relevant market is no broader than each school’s own rights. Even after the merger there will be at least five significant firms competing for those rights, not to mention each school’s conference, new entrants, and the school itself.

The two-sided market that isn’t really two-sided

Standard antitrust analysis, of course, focuses on consumer benefits: Will the merger make consumers better off (or no worse off)? But too often casual antitrust analysis of two-sided markets trips up on identifying just who the consumer is — and what the relevant market is. For a shopping mall, is the consumer the retailer or the shopper? For newspapers and search engines, is the customer the advertiser or the reader? For intercollegiate sports multimedia rights licensing, is the consumer the college or the advertiser?

Media coverage of the anticipated IMG/Learfield merger largely ignores advertisers as consumers and focuses almost exclusively on the schools’ relationship with intermediaries — as purchasers of marketing services, rather than sellers of advertising space.

Although it’s difficult to identify the source of this odd bias, it seems to be based on the notion that, while corporations like Coca-Cola and General Motors have some sort of countervailing market power against marketing intermediaries, universities don’t. With advertisers out of the picture, media coverage suggests that, somehow, schools may be worse off if the merger were to proceed. But missing from this assessment are two crucial facts that undermine the story: First, schools actually have enormous market power; and, second, schools compete in the business of MMR management.

This second factor suggests, in fact, that sometimes there may be nothing special about two-sided markets sufficient to give rise to a unique style of antitrust analysis.

Much of the antitrust confusion seems to be based on confusion over the behavior of two-sided markets. A two-sided market is one in which two sets of actors interact through an intermediary or platform, which, in turn, facilitates the transactions, often enabling transactions to take place that otherwise would be too expensive absent the platform. A shopping mall is a two-sided market where shoppers can find their preferred stores. Stores would operate without the platform, but perhaps not as many, and not as efficiently. Newspapers, search engines, and other online platforms are two-sided markets that bring together advertisers and eyeballs that might not otherwise find each other absent the platform. And a collegiate multimedia rights management firm is a two-sided market where colleges that want to sell advertising space get together with firms that want to advertise their goods and services.

Yet there is nothing particularly “transformative” about the outsourcing of MMR management. Credit cards, for example, are qualitatively different from in-store credit operations. They are two-sided platforms that substitute for in-house operations — but they also create an entirely new product and product market. MMR marketing firms do lower some transaction costs and reduce risk for collegiate sports marketing, but the product is not substantially changed — in fact, schools must have the knowledge and personnel to assess and enter into the initial sale of MMRs to an intermediary and, because of ongoing revenue-sharing and coordination with the intermediary, must devote ongoing resources even after the initial sale.

But will a merged entity have “too much” power? Imagine if a single firm owned the MMRs for nearly all intercollegiate competitors. How would it be able to exercise its supposed market power? Because each deal is negotiated separately, and, other than some mundane, fixed back-office expenses, the costs of rights management must be incurred whether a firm negotiates one deal or 100, there are no substantial economies of scale in the purchasing of MMRs. As a result, the existence of deals with other schools won’t automatically translate into better deals with subsequent schools.

Now, imagine if one school retained its own MMRs, but decided it might want to license them to an intermediary. Does it face anticompetitive market conditions if there is only a single provider of such services? To begin with, there is never only a single provider, as each school can provide the services in-house. This is not even the traditional monopoly constraint of simply “not buying,” which makes up the textbook “deadweight loss” from monopoly: In this case “not buying” does not mean going without; it simply means providing for oneself.

More importantly, because the school has a monopoly on access to its own marketing rights (to say nothing of access to its own physical facilities) unless and until it licenses them, its own bargaining power is largely independent of an intermediary’s access to other schools’ rights. If it were otherwise, each school would face anticompetitive market conditions simply by virtue of other schools’ owning their own rights!

It is possible that a larger, older firm will have more expertise and will be better able to negotiate deals with other schools — i.e., it will reap the benefits of learning by doing. But the returns to learning by doing derive from the ability to offer higher-quality/lower-cost services over time — which are a source of economic benefit, not cost. At the same time, the bulk of the benefits of experience may be gained over time with even a single set of MMRs, given the ever-varying range of circumstances even a single school will create: There may be little additional benefit (and, to be sure, there is additional cost) from managing multiple schools’ MMRs. And whatever benefits specialized firms offer, they also come with agency costs, and an intermediary’s specialized knowledge about marketing MMRs may or may not outweigh a school’s own specialized knowledge about the nuances of its particular circumstances. Moreover, because of knowledge spillovers and employee turnover, this marketing expertise is actually widely distributed; not surprisingly, JMI Sports’ MMR unit, one of the most recent and successful entrants into the business, was started by a former employee of IMG. Several other firms started out the same way.

The right way to begin thinking about the issue is this: Imagine if MMR intermediaries didn’t exist — what would happen? In this case, the answer is readily apparent because, for a significant number of schools (about 37% of Division I schools, in fact) MMR licensing is handled in-house, without the use of intermediaries. These schools do, in fact, attract advertisers, and there is little indication that they earn less net profit for going it alone. Schools with larger audiences, better targeted to certain advertisers’ products, command higher prices. Each school enjoys an effective monopoly over advertising channels around its own games, and each has bargaining power derived from its particular attractiveness to particular advertisers.

In effect, each school faces a number of possible options for MMR monetization — most notably a) up-front contracting to an intermediary, which then absorbs the risk, expense, and possible up-side of ongoing licensing to advertisers, or b) direct, ongoing licensing to advertisers. The presence of the intermediary doesn’t appreciably change the market, nor the relative bargaining power of sellers (schools) and buyers (advertisers) of advertising space any more than the presence of temp firms transforms the fundamental relationship between employers and potential part-time employees.

In making their decisions, schools always have the option of taking their MMR management in-house. In facing competing bids from firms such as IMG or Learfield, from their own conferences, or from professional sports teams, the opening bid, in a sense, comes from the school itself. Even the biggest intermediary in the industry must offer the school a deal that is at least as good as managing the MMRs in-house.

The true relevant market: Advertising

According to economist Andy Schwarz, if the relevant market is “college-based marketing services to Power 5 schools, the antitrust authorities may have more concerns than if it’s marketing services in sports.” But this entirely misses the real market exchange here. Sure, marketing services are purchased by schools, but their value to the schools is independent of the number of other schools an intermediary also markets.

Advertisers always have the option of deploying their ad dollars elsewhere. If Coca-Cola wants to advertise on Auburn’s stadium video board, it’s because Auburn’s video board is a profitable outlet for advertising, not because the Auburn ads are bundled with advertising at dozens of other schools (although that bundling may reduce the total cost of advertising on Auburn’s scoreboard as well as other outlets). Similarly, Auburn is seeking the highest bidder for space on its video board. It does not matter to Auburn that the University of Georgia is using the same intermediary to sell ads on its stadium video board.

The willingness of purchasers — say, Coca-Cola or Toyota — to pay for collegiate multimedia advertising is a function of the school that licenses it (net of transaction costs) — and MMR agents like IMG and Learfield commit substantial guaranteed sums and a share of any additional profits for the rights to sell that advertising: For example, IMG recently agreed to pay $150 million over 10 years to renew its MMR contract at UCLA. But this is the value of a particular, niche form of advertising, determined within the context of the broader advertising market. How much pricing power over scoreboard advertising does any university, or even any group of universities under the umbrella of an intermediary, have in a world in which Coke and Toyota can advertise virtually anywhere — including during commercial breaks in televised intercollegiate games, which are licensed separately from the MMRs licensed by companies like IMG and Learfield?

There is, in other words, a hard ceiling on what intermediaries can charge schools for MMR marketing services: The schools’ own cost of operating a comparable program in-house.

To be sure, for advertisers, large MMR marketing firms lower the transaction costs of buying advertising space across a range of schools, presumably increasing demand for intercollegiate sports advertising and sponsorship. But sponsors and advertisers have a wide range of options for spending their marketing dollars. Intercollegiate sports MMRs are a small slice of the sports advertising market, which, in turn, is a small slice of the total advertising market. Even if one were to incorrectly describe the combined entity as a “juggernaut” in intercollegiate sports, the MMR rights it sells would still be a flyspeck in the broader market of multimedia advertising.

According to one calculation (by MoffettNathanson), total ad spending in the U.S. was about $191 billion in 2016 (Pew Research Center estimates total ad revenue at $240 billion) and the global advertising market was estimated to be worth about $493 billion. The intercollegiate MMR segment represents a minuscule fraction of that. According to Jason Belzer, “[a]t the time of its sale to WME in 2013, IMG College’s yearly revenue was nearly $500 million….” Another source puts it at $375 million. Either way, it’s a fraction of one percent of the total market, and even combined with Learfield it will remain a minuscule fraction. Even if one were to define a far narrower sports sponsorship market, which a Price Waterhouse estimate puts at around $16 billion, the combined companies would still have a tiny market share.
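The arithmetic behind those share claims is straightforward. A minimal sketch, using only the revenue and market-size figures cited in the paragraph above (all of which are estimates that vary by source):

```python
# Back-of-the-envelope market-share arithmetic using the figures cited
# in the text. All numbers are the article's estimates, not precise data.

US_AD_MARKET = 191e9         # MoffettNathanson estimate of 2016 US ad spend
GLOBAL_AD_MARKET = 493e9     # estimated global advertising market
SPORTS_SPONSORSHIP = 16e9    # narrower sports sponsorship market estimate

IMG_COLLEGE_REVENUE = 500e6  # upper estimate; another source puts it at $375M

def share(revenue: float, market: float) -> float:
    """Return revenue as a percentage of the given market."""
    return 100.0 * revenue / market

print(f"Share of US ad market:        {share(IMG_COLLEGE_REVENUE, US_AD_MARKET):.2f}%")
print(f"Share of global ad market:    {share(IMG_COLLEGE_REVENUE, GLOBAL_AD_MARKET):.2f}%")
print(f"Share of sports sponsorship:  {share(IMG_COLLEGE_REVENUE, SPORTS_SPONSORSHIP):.2f}%")
```

Even under the higher $500 million revenue estimate, IMG College's share of the broad advertising market is roughly a quarter of one percent, and still only a few percent of the much narrower sports sponsorship segment — consistent with the "fraction of one percent" characterization in the text.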

As sellers of MMRs, colleges are competing with each other, professional sports such as the NFL and NBA, and with non-sports marketing opportunities. And it’s a huge and competitive market.

Barriers to entry

While capital requirements and the presence of long-term contracts may present challenges to potential entrants into the business of marketing MMRs, these potential entrants face virtually no barriers that are not, or have not been, faced by incumbent providers. In this context, one should keep in mind two factors. First, barriers to entry are properly defined as costs incurred by new entrants that are not incurred by incumbents (no matter what Joe Bain says; Stigler always wins this dispute…). Every firm must bear the cost of negotiating and managing each school’s MMRs, and, as noted, these costs don’t vary significantly with the number of schools being managed. And every entrant needs approximately the same capital and human resources per similarly sized school as every incumbent. Thus, in this context, neither the need for capital nor dedicated employees is properly construed as a barrier to entry.

Second, as the DOJ and FTC acknowledge in the Horizontal Merger Guidelines, any merger can be lawful under the antitrust laws, no matter its market share, where there are no significant barriers to entry:

The prospect of entry into the relevant market will alleviate concerns about adverse competitive effects… if entry into the market is so easy that the merged firm and its remaining rivals in the market, either unilaterally or collectively, could not profitably raise price or otherwise reduce competition compared to the level that would prevail in the absence of the merger.

As noted, there are low economies of scale in the business, with most of the economies occurring in the relatively small “back office” work of payroll, accounting, human resources, and employee benefits. Since the 2000s, the entry of several significant competitors — many entering with only one or two schools or specializing in smaller or niche markets — strongly suggests that there are no economically important barriers to entry. And these firms have entered and succeeded with a wide range of business models and firm sizes:

  • JMI Sports — a “rising boutique firm” — hired Tom Stultz, the former senior vice president and managing director of IMG’s MMR business, in 2012. JMI won its first (and thus, at the time, only) MMR bid in 2014 at the University of Kentucky, besting IMG to win the deal.
  • Peak Sports MGMT, founded in 2012, is a small-scale MMR firm that focuses on lesser Division I and II schools in Texas and the Midwest. It manages just seven small properties, including Southland Conference schools like the University of Central Arkansas and Southeastern Louisiana University.
  • Fox Sports entered the business in 2008 with a deal with the University of Florida. It now handles MMRs for schools like Georgetown, Auburn, and Villanova. Fox’s entry suggests that other media companies — like ESPN — that may already own TV broadcast rights are also potential entrants.
  • In 2014 the sports advertising firm, Van Wagner, hired three former Nelligan employees to make a play for the college sports space. In 2015 the company won its first MMR bid at Florida International University, reportedly against seven other participants. It now handles more than a dozen schools including Georgia State (which it won from IMG), Loyola Marymount, Pepperdine, Stony Brook, and Santa Clara.
  • In 2001 Fenway Sports Group, parent company of the Boston Red Sox and Liverpool Football Club, entered into an MMR agreement with Boston College. And earlier this year the Tampa Bay Lightning hockey team began handling multimedia marketing for the University of South Florida.

Potential new entrants abound. Most obviously, sports networks like ESPN could readily follow Fox Sports’ lead and advertising firms could follow Van Wagner’s. These companies have existing relationships and expertise that position them for easy entry into the MMR business. Moreover, there are already several companies that handle the trademark licensing for schools, any of which could move into the MMR management business, as well; both IMG and Learfield already handle licensing for a number of schools. Most notably, Fermata Partners, founded in 2012 by former IMG employees and acquired in 2015 by CAA Sports (a division of Creative Artists Agency), has trademark licensing agreements with Georgia, Kentucky, Miami, Notre Dame, Oregon, Virginia, and Wisconsin. It could easily expand into selling MMR rights for these and other schools. Other licensing firms like Exemplar (which handles licensing at Columbia) and 289c (which handles licensing at Texas and Ohio State) could also easily expand into MMR.

Given the relatively trivial economies of scale, the minimum viable scale for a new entrant appears to be approximately one school — a size that each school’s in-house operations, of course, automatically meets. Moreover, the Peak Sports, Fenway, and Tampa Bay Lightning examples suggest that there may be particular benefits to local, regional, or category specialization, suggesting that innovative, new entry is not only possible, but even likely, as the business continues to evolve.

Conclusion

A merger between IMG and Learfield should not raise any antitrust issues. College sports is a small slice of the total advertising market. Even a so-called “juggernaut” in college sports multimedia rights is a small bit in the broader market of multimedia marketing.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to bringing MMR management in-house indicate that no competitor has any measurable market power that can disadvantage schools or advertisers.

The term “juggernaut” entered the English language because of misinterpretation and exaggeration of actual events. Fears of the IMG/Learfield merger crushing competition are similarly based on a misinterpretation of two-sided markets and a misunderstanding of the reality of the market for college multimedia rights management. Importantly, the case is also a cautionary tale for those who would identify narrow, contract-, channel-, or platform-specific relevant markets in circumstances where a range of intermediaries and direct relationships can compete to offer the same service as those being scrutinized. Antitrust advocates have a long and inglorious history of defining markets by channels of distribution or other convenient, yet often economically inappropriate, combinations of firms or products. Yet the presence of marketing or other intermediaries does not automatically transform a basic, commercial relationship into a novel, two-sided market necessitating narrow market definitions and creative economics.