
At a Heritage antitrust event this week, a panelist offered an interesting tongue-in-cheek observation about the rising populist antitrust movement. To the extent that the new populist antitrust movement is broadly concerned about effects on labor and wage depression, then, in principle, it should also be friendly to cartels. Counterintuitive as that sounds, employees have long supported and benefited from cartels, because cartels generally afford both greater job security and higher wages than competitive firms do. And, of course, labor itself has long sought the protection of cartels – in the form of unions – to secure the same benefits.

For instance, in the days before widespread foreign competition in domestic auto markets, unionized workers at the Big Three producers enjoyed relatively higher wages for relatively less output. Competition from abroad changed the economic landscape for both producers and workers, with the end result being a reduction in union power and relatively lower overall wages for workers. The union model — a labor cartel — could guarantee higher wages only so long as its members’ employers faced little competitive pressure of their own.

The same story can be seen in other industries as well, from telecommunications to service workers to public sector employees. Generally, market power on the labor demand side (employers) tends to facilitate market power on the labor supply side: firms with market power — with supracompetitive profits — can afford to pay more for labor and often are willing to do so in order to secure political support (and also to make it more expensive for potential competitors to hire skilled employees). Labor is a substantial cost for firms in competitive markets, however, so firms without market power are always looking to economize on labor (that is, to pay low wages, hire only as many employees as needed, and substitute capital for labor wherever it is efficient to do so).

Therefore, if broad labor effects should be a prime concern of antitrust, perhaps enforcers should use antitrust laws to encourage cartel formation when it might increase wages, regardless of the effects on productivity, prices, and other efficiencies that may arise (or perhaps, as a possible trump card to hold against traditional efficiencies justifications).

No one will make a serious case for promoting cartels (although former FTC Chairman Pertschuk sounded similar notes in the late 70s), but the comment makes a deeper point about ongoing efforts to undermine the consumer welfare standard. Fundamental contradictions exist in antitrust rhetoric that is unmoored from economic analysis. Professor Hovenkamp highlighted this in a recent paper as well:

The coherence problem [in antitrust populism] shows up in goals that are unmeasurable and fundamentally inconsistent, although with their contradictions rarely exposed. Among the most problematic contradictions is the one between small business protection and consumer welfare. In a nutshell, consumers benefit from low prices, high output and high quality and variety of products and services. But when a firm or a technology is able to offer these things they invariably injure rivals, typically smaller or dedicated to older technologies, who are unable to match them. Although movement antitrust rhetoric is often opaque about specifics, its general effect is invariably to encourage higher prices or reduced output or innovation, mainly for the protection of small business. Indeed, that has been a predominant feature of movement antitrust ever since the Sherman Act was passed, and it is a prominent feature of movement antitrust today. Indeed, some spokespersons for movement antitrust write as if low prices are the evil that antitrust law should be combatting.

To be fair, even with careful economic analysis, it is not always perfectly clear how to resolve the tensions between antitrust and other policy preferences.  For instance, Jonathan Adler described the collision between antitrust and environmental protection in cases where collusion might lead to better environmental outcomes. But even in cases like that, he noted it was essentially a free-rider problem and, as with intrabrand price agreements where consumer goodwill was a “commons” that had to be suitably maintained against possible free-riding retailers, what might be an antitrust violation in one context was not necessarily a violation in a second context.  

Moreover, when the purpose of apparently “collusive” conduct is to actually ensure long term, sustainable production of a good or service (like fish), the behavior may not actually be anticompetitive. Thus, antitrust remains a plausible means of evaluating economic activity strictly on its own terms (and any alteration to the doctrine itself might actually be to prefer rule of reason analysis over per se analysis when examining these sorts of mitigating circumstances).

And before contorting antitrust into a policy cure-all, it is important to remember that the consumer welfare standard evolved out of sometimes good (price fixing bans) and sometimes questionable (prohibitions on output contracts) doctrines that were subject to legal trial and error. This was an evolution that was triggered by “increasing economic sophistication” and as “the enforcement agencies and courts [began] reaching for new ways in which to weigh competing and conflicting claims.”

The vector of that evolution was toward the use of  antitrust as a reliable, testable, and clear set of legal principles that are ultimately subject to economic analysis. When the populists ask us, for instance, to return to a time when judges could “prevent the conversion of concentrated economic power into concentrated political power” via antitrust law, they are asking for much more than just adding a new gloss to existing doctrine. They are asking for us to unlearn all of the lessons of the twentieth century that ultimately led toward the maturation of antitrust law.

It’s perfectly reasonable to care about political corruption, worker welfare, and income inequality. It’s not perfectly reasonable to try to shoehorn goals based on these political concerns into a body of legal doctrine that evolved a set of tools wholly inappropriate for achieving those ends.

Introduction and Summary

On December 19, 2017, the U.S. Court of Appeals for the Second Circuit presented Broadcast Music, Inc. (BMI) with an early Christmas present.  Specifically, the Second Circuit commendably affirmed the District Court for the Southern District of New York’s September 2016 ruling rejecting the U.S. Department of Justice’s (DOJ) August 2016 reinterpretation of its longstanding antitrust consent decree with BMI.  Because the DOJ reinterpretation also covered a parallel DOJ consent decree with the American Society of Composers, Authors, and Publishers (ASCAP), the Second Circuit’s decision by necessary implication benefits ASCAP as well, although it was not a party to the suit.

The Second Circuit’s holding is sound as a matter of textual interpretation and wise as a matter of economic policy.  Indeed, DOJ’s current antitrust leadership, which recognizes the importance of vibrant intellectual property licensing in the context of patents (see here), should be pleased that the Second Circuit rescued it from a huge mistake by the Obama Administration DOJ in the context of copyright licensing.

Background

BMI and ASCAP are the two leading U.S. “performing rights organizations” (PROs).  They contract with music copyright holders to act as intermediaries that provide “blanket” licenses to music users (e.g., television and radio stations, bars, and internet music distributors) for use of their full copyrighted musical repertoires, without the need for song-specific licensing negotiations.  This greatly reduces the transaction costs of arranging for the playing of musical works, benefiting music users, the listening public, and copyright owners (all of whom are assured of at least some compensation for their endeavors).  ASCAP and BMI are big businesses, with each PRO holding licenses to over ten million works and accounting for roughly 45 percent of the domestic music licensing market (90 percent combined).

Because both ASCAP and BMI pool copyrighted songs that could otherwise compete with each other, and both grant users a single-price “blanket license” conveying the rights to play their full set of copyrighted works, the two organizations could be seen as restricting competition among copyrighted works and fixing the prices of copyrighted substitutes – raising serious questions under section 1 of the Sherman Antitrust Act, which condemns contracts that unreasonably restrain trade.  This led the DOJ to bring antitrust suits against ASCAP and BMI over eighty years ago, which were settled by separate judicially-filed consent decrees in 1941.

The decrees imposed a variety of limitations on the two PROs’ licensing practices, aimed at preventing ASCAP and BMI from exercising anticompetitive market power (such as the setting of excessive licensing rates).  The decrees were amended twice over the years, most recently in 2001, to take account of changing market conditions.  The U.S. Supreme Court noted the constraining effect of the decrees in BMI v. CBS (1979), in ruling that the BMI and ASCAP blanket licenses did not constitute per se illegal price fixing.  The Court held, rather, that the licenses should be evaluated on a case-by-case basis under the antitrust “rule of reason,” since the licenses inherently generated great efficiency benefits (“the immediate use of covered compositions, without the delay of prior individual negotiations”) that had to be weighed against potential anticompetitive harms.

The August 4, 2016 DOJ Consent Decree Interpretation

Fast forward to 2014, when DOJ undertook a new review of the ASCAP and BMI decrees, and requested the submission of public comments to aid it in its deliberations.  This review came to an official conclusion two years later, on August 4, 2016, when DOJ decided not to amend the decrees – but announced a decree interpretation that limits ASCAP’s and BMI’s flexibility.  Specifically, DOJ stated that the decrees needed to be “more consistently applied.”  By this, the DOJ meant that BMI and ASCAP should only grant blanket licenses that cover all of the rights to 100 percent of the works in the PROs’ respective catalogs (“full-work licensing”), not licenses that cover only partial interests in those works.  DOJ stated:

Only full-work licensing can yield the substantial procompetitive benefits associated with blanket licenses that distinguish ASCAP’s and BMI’s activities from other agreements among competitors that present serious issues under the antitrust laws.

The New DOJ Interpretation Was Bad as a Matter of Policy

DOJ’s August 4 interpretation rejected industry practice.  Under it, ASCAP and BMI could offer only licenses covering all of the copyright interests in a musical composition, even when the composition was a joint work with multiple fractional owners.

For example, consider a band of five composer-musicians, each of whom has a fractional interest in the copyright covering the band’s new album, which is a joint work.  Prior to the DOJ’s new interpretation, each musician was able to offer a partial interest in the joint work to a performing rights organization, reflecting his or her relative share of the total copyright interest covering the work.  The organization could offer a partial license, and a user could aggregate different partial licenses in order to cover the whole joint work.  Following the new interpretation, however, BMI and ASCAP could not offer partial licenses to that work to users.  This denied the band’s individual members the opportunity to deal profitably with BMI and ASCAP, thereby undermining their ability to receive fair compensation.

As the two PROs warned, this approach, if upheld, would “cause unnecessary chaos in the marketplace and place unfair financial burdens and creative constraints on songwriters and composers.”  According to ASCAP President Paul Williams, “It is as if the DOJ saw songwriters struggling to stay afloat in a sea of outdated regulations and decided to hand us an anchor, in the form of 100 percent licensing, instead of a life preserver.”  Furthermore, the president and CEO of BMI, Mike O’Neill, stated:  “We believe the DOJ’s interpretation benefits no one – not BMI or ASCAP, not the music publishers, and not the music users – but we are most sensitive to the impact this could have on you, our songwriters and composers.”

The PROs’ views were bolstered by a January 2016 U.S. Copyright Office report, which concluded that “an interpretation of the consent decrees that would require 100-percent licensing or removal of a work from the ASCAP or BMI repertoire would appear to be fraught with legal and logistical problems, and might well result in a sharp decrease in repertoire available through these [performance rights organizations’] blanket licenses.”  Regrettably, during the decree review period, DOJ ignored the expert opinion of the Copyright Office, as well as the public record comments of numerous publishers and artists (see here, for example) indicating that a 100 percent licensing requirement would depress returns to copyright owners and undermine the creative music industry.

Most fundamentally, DOJ’s new interpretation of the BMI and ASCAP consent decrees involved an abridgment of economic freedom.  It further limited the flexibility of music copyright holders and music users to contract with intermediaries to promote the efficient distribution of music performance rights, in a manner that benefits the listening public while allowing creative artists sufficient compensation for their efforts.  DOJ made no compelling showing that a new consent decree constraint (a 100 percent licensing requirement) was needed to promote competition.  Far from promoting competition, DOJ’s new interpretation undermined it.  DOJ micromanagement of copyright licensing by consent decree reinterpretation was a costly new regulatory initiative that reflected a lack of appreciation for intellectual property rights, which incentivize innovation.  In short, DOJ’s latest interpretation of the ASCAP and BMI decrees was terrible policy.

The New DOJ Interpretation Ran Counter to International Norms

The new DOJ interpretation had unfortunate international policy implications as well.  According to Gadi Oron, Director General of the International Confederation of Societies of Authors and Composers (CISAC), a Paris-based organization that brings together 239 rights societies from 123 countries, including ASCAP, BMI, and SESAC, the new interpretation departed from international norms in the music licensing industry and would have disruptive international effects:

It is clear that the DoJ’s decisions have been made without taking the interests of creators, neither American nor international, into account. It is also clear that they were made with total disregard for the international framework, where fractional licensing is practiced, even if it’s less of a factor because many countries only have one performance rights organization representing songwriters in their territory. International copyright laws grant songwriters exclusive rights, giving them the power to decide who will license their rights in each territory and it is these rights that underpin the landscape in which authors’ societies operate. The international system of collective management of rights, which is based on reciprocal representation agreements and founded on the freedom of choice of the rights holder, would be negatively affected by such level of government intervention, at a time when it needs support more than ever.

The New DOJ Interpretation Was Defective as a Matter of Law, and the District Court and the Second Circuit So Held

As I explained in a November 2016 Heritage Foundation commentary (citing arguments made by counsel for BMI), DOJ’s new interpretation not only was bad domestic and international policy, it was inconsistent with sound textual construction of the decrees themselves.  The BMI decree (and therefore the analogous ASCAP decree as well) did not expressly require 100 percent licensing and did not unambiguously prohibit fractional licensing.  Accordingly, since a consent decree is an injunction, and any activity not expressly required or prohibited thereunder is permitted, fractional shares licensing should be authorized.  DOJ’s new interpretation ignored this principle.  It also was at odds with a report of the U.S. Copyright Office that concluded the BMI consent decree “must be understood to include partial interests in musical works.”  Furthermore, the new interpretation was belied by the fact that the PRO licensing market has developed and functioned efficiently for decades by pricing, collecting, and distributing fees for royalties on a fractional basis.  Courts view such evidence of trade practice and custom as relevant in determining the meaning of a consent decree.

The District Court for the Southern District of New York accepted these textual arguments in its September 2016 ruling, granting BMI’s request for a declaratory judgment that the BMI consent decree did not require 100% (“full-work”) licensing.  The court explained:

Nothing in the Consent Decree gives support to the Division’s views. If a fractionally-licensed composition is disqualified from inclusion in BMI’s repertory, it is not for violation of any provision of the Consent Decree. While the Consent Decree requires BMI to license performances of those compositions “the right of public performances of which [BMI] has or hereafter shall have the right to license or sublicense” (Art. II(C)), it contains no provision regarding the source, extent, or nature of that right. It does not address the possibilities that BMI might license performances of a composition without sufficient legal right to do so, or under a worthless or invalid copyright, or users might perform a music composition licensed by fewer than all of its creators. . . .

The Consent Decree does not regulate the elements of the right to perform compositions. Performance of a composition under an ineffective license may infringe an author’s rights under copyright, contract or other law, but it does not infringe the Consent Decree, which does not extend to matters such as the invalidity or value of copyrights of any of the compositions in BMI’s repertory. Questions of the validity, scope and limits of the right to perform compositions are left to the congruent and competing interests in the music copyright market, and to copyright, property and other laws, to continue to resolve and enforce. Infringements (and fractional infringements) and remedies are not part of the Consent Decree’s subject-matter.

The Second Circuit affirmed, agreeing with the district court’s reading of the decree:

The decree does not address the issue of fractional versus full work licensing, and the parties agree that the issue did not arise at the time of the . . . [subsequent] amendments [to the decree]. . . .

This appeal begins and ends with the language of the consent decree. It is a “well-established principle that the language of a consent decree must dictate what a party is required to do and what it must refrain from doing.” Perez v. Danbury Hosp., 347 F.3d 419, 424 (2d Cir. 2003); United States v. Armour & Co., 402 U.S. 673, 682 (1971) (“[T]he scope of a consent decree must be discerned within its four corners…”). “[C]ourts must abide by the express terms of a consent decree and may not impose additional requirements or supplementary obligations on the parties even to fulfill the purposes of the decree more effectively.” Perez, 347 F.3d at 424; see also Barcia v. Sitkin, 367 F.3d 87, 106 (2d Cir. 2004) (internal citations omitted) (The district court may not “impose obligations on a party that are not unambiguously mandated by the decree itself.”). Accordingly, since the decree is silent on fractional licensing, BMI may (and perhaps must) offer them unless a clear and unambiguous command of the decree would thereby be violated. See United States v. Int’l Bhd. Of Teamsters, Chauffeurs, Warehousemen & Helpers of Am., AFLCIO, 998 F.2d 1101, 1107 (2d Cir. 1993); see also Armour, 402 U.S. at 681-82.

Conclusion

The federal courts wisely have put to rest an ill-considered effort by the Obama Antitrust Division to displace longstanding industry practices that allowed efficient flexibility in the licensing of copyright interests by PROs.  Let us hope that the Trump Antitrust Division will not just accept the Second Circuit’s decision, but will positively embrace it as a manifestation of enlightened antitrust-IP policy – one in harmony with broader efforts by the Division to restore sound thinking to the antitrust treatment of patent licensing and intellectual property in general.

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and even sometimes inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, perhaps nowhere has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets for national regulators to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, the number was further reduced to four markets, all wholesale markets, that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Some non-European countries, e.g., Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include a significantly lower risk of regulatory capture, achieved by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties among four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator, New Zealand asserts that it can administer government operations more cost-effectively. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable industry knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify its decision. The EU placed net neutrality under the universal service and user’s rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem, competition law, grounded in economic principles, brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined in the context of existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. The wholesale ISP Chorus states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation and toward ex post competition review and enforcement. It’s high time the U.S. got on board.

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump-start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year-long investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, and without a license from Yelp (asserting fair use), Google displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt-out of having even snippets displayed in local search results by committing Google to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to [(1)] address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his CID to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order, the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own, follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.  

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of CPAs is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the spectre of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.  

Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”

On November 10, at the University of Southern California Law School, Assistant Attorney General for Antitrust Makan Delrahim delivered an extremely important policy address on the antitrust treatment of standard setting organizations (SSOs).  Delrahim’s remarks outlined a dramatic shift in the Antitrust Division’s approach to controversies concerning the licensing of standard essential patents (SEPs, patents that “read on” SSO technical standards) that are often subject to “fair, reasonable, and non-discriminatory” (FRAND) licensing obligations imposed by SSOs.  In particular, while Delrahim noted the theoretical concerns of possible “holdups” by SEP holders (when SEP holders threaten to delay licensing until their royalty demands are met), he cogently explained why the problem of “holdouts” by implementers of SEP technologies (when implementers threaten to under-invest in the implementation of a standard, or threaten not to take a license at all, until their royalty demands are met) is a far more serious antitrust concern.  More generally, Delrahim stressed the centrality of patents as property rights, and the need for enforcers not to interfere with the legitimate unilateral exploitation of those rights (whether through licensing, refusals to license, or the filing of injunctive actions).  Underlying Delrahim’s commentary is the understanding that innovation is vitally important to the American economy, and the concern that antitrust enforcers’ efforts in recent years have threatened to undermine innovation by inappropriately interfering in free market licensing negotiations between patentees and licensees.

Important “takeaways” from Delrahim’s speech (with key quotations) are set forth below.

  • Thumb on the scale in favor of implementers: “In particular, I worry that we as enforcers have strayed too far in the direction of accommodating the concerns of technology implementers who participate in standard setting bodies, and perhaps risk undermining incentives for IP creators, who are entitled to an appropriate reward for developing break-through technologies.”
  • Striking the right balance through market forces (as opposed to government-issued best practices): “The dueling interests of innovators and implementers always are in tension, and the tension is resolved through the free market, typically in the form of freely negotiated licensing agreements for royalties or reciprocal licenses.”
  • Holdup as theoretical concern with no evidence that it’s a systemic or widespread problem: He praises Professor Carl Shapiro for his theoretical model of holdup, but stresses that “many of the proposed [antitrust] ‘solutions’ to the hold-up problem are often anathema to the policies underlying the intellectual property system envisioned by our forefathers.”
  • Rejects prior position that antitrust is only concerned with the patent-holder side of the holdup equation, stating that he’s more concerned with holdout given the nature of investments: “Too often lost in the debate over the hold-up problem is recognition of a more serious risk:  the hold-out problem. . . . I view the collective hold-out problem as a more serious impediment to innovation.  Here is why: most importantly, the hold-up and hold-out problems are not symmetric.  What do I mean by that?  It is important to recognize that innovators make an investment before they know whether that investment will ever pay off.  If the implementers hold out, the innovator has no recourse, even if the innovation is successful.  In contrast, the implementer has some buffer against the risk of hold-up because at least some of its investments occur after royalty rates for new technology could have been determined.  Because this asymmetry exists, under-investment by the innovator should be of greater concern than under-investment by the implementer.”
  • What’s at stake: “Every incremental shift in bargaining leverage toward implementers of new technologies acting in concert can undermine incentives to innovate.  I therefore view policy proposals with a one-sided focus on the hold-up issue with great skepticism because they can pose a serious threat to the innovative process.”
  • Breach of FRAND as primarily a contract or fraud, not antitrust issue: “There is a growing trend supporting what I would view as a misuse of antitrust or competition law, purportedly motivated by the fear of so-called patent hold-up, to police private commitments that IP holders make in order to be considered for inclusion in a standard.  This trend is troublesome.  If a patent holder violates its commitments to an SSO, the first and best line of defense, I submit, is the SSO itself and its participants. . . . If a patent holder is alleged to have violated a commitment to a standard setting organization, that action may have some impact on competition.  But, I respectfully submit, that does not mean the heavy hand of antitrust necessarily is the appropriate remedy for the would-be licensee—or the enforcement agency.  There are perfectly adequate and more appropriate common law and statutory remedies available to the SSO or its members.”
  • Recommends that unilateral refusals to license should be per se lawful: “The enforcement of valid patent rights should not be a violation of antitrust law.  A patent holder cannot violate the antitrust laws by properly exercising the rights patents confer, such as seeking an injunction or refusing to license such a patent.  Set aside whether taking these actions might violate the common law.  Under the antitrust laws, I humbly submit that a unilateral refusal to license a valid patent should be per se legal.  Indeed, just this Monday, Chief Judge Diane Wood, a former Deputy Assistant Attorney General at the Antitrust Division, stated that “[e]ven monopolists are almost never required to assist their competitors.”
  • Intent to investigate buyers’ cartel behavior in SSOs: “The prospect of hold-out offers implementers a crucial bargaining chip.  Unlike the unilateral hold-up problem, implementers can impose this leverage before they make significant investments in new technology.  . . . The Antitrust Division will carefully scrutinize what appears to be cartel-like anticompetitive behavior among SSO participants, either on the innovator or implementer side.  The old notion that ‘openness’ alone is sufficient to guard against cartel-like behavior in SSOs may be outdated, given the evolution of SSOs beyond strictly objective technical endeavors. . . . I likewise urge SSOs to be proactive in evaluating their own rules, both at the inception of the organization, and routinely thereafter.  In fact, SSOs would be well advised to implement and maintain internal antitrust compliance programs and regularly assess whether their rules, or the application of those rules, are or may become anticompetitive.”
  • Basing royalties on the “smallest salable component” as a requirement by a concerted agreement of implementers is a possible antitrust violation: “If an SSO pegs its definition of “reasonable” royalties to a single Georgia-Pacific factor that heavily favors either implementers or innovators, then the process that led to such a rule deserves close antitrust scrutiny.  While the so-called ‘smallest salable component’ rule may be a useful tool among many in determining patent infringement damages for multi-component products, its use as a requirement by a concerted agreement of implementers as the exclusive determinant of patent royalties may very well warrant antitrust scrutiny.”
  • Right to Injunctive Relief and holdout incentives: “Patents are a form of property, and the right to exclude is one of the most fundamental bargaining rights a property owner possesses.  Rules that deprive a patent holder from exercising this right—whether imposed by an SSO or by a court—undermine the incentive to innovate and worsen the problem of hold-out.  After all, without the threat of an injunction, the implementer can proceed to infringe without a license, knowing that it is on the hook only for reasonable royalties.”
  • Seeking or Enforcing Injunctive Relief Generally a Contract Not Antitrust Issue: “It is just as important to recognize that a violation by a patent holder of an SSO rule that restricts a patent-holder’s right to seek injunctive relief should be appropriately the subject of a contract or fraud action, and rarely if ever should be an antitrust violation.”
  • FRAND is Not a Compulsory Licensing Scheme: “We should not transform commitments to license on FRAND terms into a compulsory licensing scheme.  Indeed, we have had strong policies against compulsory licensing, which effectively devalues intellectual property rights, including in most of our trade agreements, such as the TRIPS agreement of the WTO.  If an SSO requires innovators to submit to such a scheme as a condition for inclusion in a standard, we should view the SSO’s rule and the process leading to it with suspicion, and certainly not condemn the use of such injunctive relief as an antitrust violation where a contract remedy is perfectly adequate.”

Yesterday Learfield and IMG College inked their recently announced merger. Since the negotiations were made public several weeks ago, the deal has garnered some wild speculation and potentially negative attention. Now that the merger has been announced, it’s bound to attract even more attention and conjecture.

On the field of competition, however, the market realities that support the merger’s approval are compelling. And, more importantly, the features of this merger provide critical lessons on market definition, barriers to entry, and other aspects of antitrust law related to two-sided and advertising markets that can be applied to numerous matters vexing competition commentators.

First, some background

Learfield and IMG specialize in managing multimedia rights (MMRs) for intercollegiate sports. They are, in effect, classic advertising intermediaries, facilitating the monetization by colleges of radio broadcast advertising and billboard, program, and scoreboard space during games (among other things), and the purchase by advertisers of access to these valuable outlets.

Although these transactions can certainly be (and very often are) entered into by colleges and advertisers directly, firms like Learfield and IMG allow colleges to outsource the process — as one firm’s tag line puts it, “We Work | You Play.” Most important, by bringing multiple schools’ MMRs under one roof, these firms can reduce the transaction costs borne by advertisers in accessing multiple outlets as part of a broad-based marketing plan.

Media rights and branding are a notable source of revenue for collegiate athletic departments: on average, they account for about 3% of these revenues. While they tend to pale in comparison to TV rights, ticket sales, and fundraising, for major programs, MMRs may be the next most important revenue source after these.

Many collegiate programs retain some or all of their multimedia rights and use in-house resources to market them. In some cases schools license MMRs through their athletic conference. In other cases, schools ink deals to outsource their MMRs to third parties, such as Learfield, IMG, JMI Sports, Outfront Media, and Fox Sports, among several others. A few schools even use professional sports teams to manage their MMRs (the owner of the Red Sox manages Boston College’s MMRs, for example).

Schools switch among MMR managers with some regularity, and, in most cases apparently, not among the merging parties. Michigan State, for example, was well known for handling its MMRs in-house. But in 2016 the school entered into a 15-year deal with Fox Sports, estimated at minimum guaranteed $150 million. In 2014 Arizona State terminated its MMR deal with IMG and took it MMRs in-house. Then, in 2016, the Sun Devils entered into a first-of-its-kind arrangement with the Pac 12 in which the school manages and sells its own marketing and media rights while the conference handles core business functions for the sales and marketing team (like payroll, accounting, human resources, and employee benefits). The most successful new entrant on the block, JMI Sports, won Kentucky, Clemson, and the University of Pennsylvania from Learfield or IMG. Outfront Media was spun off from CBS in 2014 and has become one of the strongest MMR intermediary competitors, handling some of the biggest names in college sports, including LSU, Maryland, and Virginia. All told, eight recent national Division I champions are served by MMR managers other than IMG and Learfield.

The supposed problem

As noted above, the most obvious pro-competitive benefit of the merger is in the reduction in transaction costs for firms looking to advertise in multiple markets. But, in order to confer that benefit (which, of course, also benefits the schools, whose marketing properties become easier to access), that also means a dreaded increase in size, measured by number of schools’ MMRs managed. So is this cause for concern?

Jason Belzer, a professor at Rutgers University and founder of sports consulting firm, GAME, Inc., has said that the merger will create a juggernaut — yes, “a massive inexorable force… that crushes whatever is in its path” — that is likely to invite antitrust scrutiny. The New York Times opines that the deal will allow Learfield to “tighten its grip — for nearly total control — on this niche but robust market,” “surely” attracting antitrust scrutiny. But these assessments seem dramatically overblown, and insufficiently grounded in the dynamics of the market.

Belzer’s concerns seem to be merely the size of the merging parties — again, measured by the number of schools’ rights they manage — and speculation that the merger would bring to an end “any” opportunity for entry by a “major” competitor. These are misguided concerns.

To begin, the focus on the potential entry of a “major” competitor is an odd standard that ignores the actual and potential entry of many smaller competitors that are able to win some of the most prestigious and biggest schools. In fact, many in the industry argue — rightly — that there are few economies of scale for colleges. Most of these firms’ employees are dedicated to a particular school and those costs must be incurred for each school, no matter the number, and borne by new entrants and incumbents alike. That means a small firm can profitably compete in the same market as larger firms — even “juggernauts.” Indeed, every college that brings MMR management in-house is, in fact, an entrant — and there are some big schools in big conferences that manage their MMRs in-house.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to in-house MMR management indicate that no competitor has any measurable market power that can disadvantage schools or advertisers.

Indeed, from the perspective of the school, the true relevant market is no broader than each school’s own rights. Even after the merger there will be at least five significant firms competing for those rights, not to mention each school’s conference, new entrants, and the school itself.

The two-sided market that isn’t really two-sided

Standard antitrust analysis, of course, focuses on consumer benefits: Will the merger make consumers better off (or no worse off)? But too often casual antitrust analysis of two-sided markets trips up on identifying just who the consumer is — and what the relevant market is. For a shopping mall, is the consumer the retailer or the shopper? For newspapers and search engines, is the customer the advertiser or the reader? For intercollegiate sports multimedia rights licensing, is the consumer the college or the advertiser?

Media coverage of the anticipated IMG/Learfield merger largely ignores advertisers as consumers and focuses almost exclusively on the the schools’ relationship with intermediaries — as purchasers of marketing services, rather than sellers of advertising space.

Although it’s difficult to identify the source of this odd bias, it seems to be based on the notion that, while corporations like Coca-Cola and General Motors have some sort of countervailing market power against marketing intermediaries, universities don’t. With advertisers out of the picture, media coverage suggests that, somehow, schools may be worse off if the merger were to proceed. But missing from this assessment are two crucial facts that undermine the story: First, schools actually have enormous market power; and, second, schools compete in the business of MMR management.

This second factor suggests, in fact, that sometimes there may be nothing special about two-sided markets sufficient to give rise to a unique style of antitrust analysis.

Much of the antitrust confusion seems to be based on confusion over the behavior of two-sided markets. A two-sided market is one in which two sets of actors interact through an intermediary or platform, which, in turn, facilitates the transactions, often enabling transactions to take place that otherwise would be too expensive absent the platform. A shopping mall is a two-sided market where shoppers can find their preferred stores. Stores would operate without the platform, but perhaps not as many, and not as efficiently. Newspapers, search engines, and other online platforms are two-sided markets that bring together advertisers and eyeballs that might not otherwise find each other absent the platform. And a collegiate multimedia rights management firms is a two-sided market where colleges that want to sell advertising space get together with firms that want to advertise their goods and services.

Yet there is nothing particularly “transformative” about the outsourcing of MMR management. Credit cards, for example are qualitatively different than in-store credit operations. They are two-sided platforms that substitute for in-house operations — but they also create an entirely new product and product market. MMR marketing firms do lower some transaction costs and reduce risk for collegiate sports marketing, but the product is not substantially changed — in fact, schools must have the knowledge and personnel to assess and enter into the initial sale of MMRs to an intermediary and, because of ongoing revenue-sharing and coordination with the intermediary, must devote ongoing resources even after the initial sale.

But will a merged entity have “too much” power? Imagine if a single firm owned the MMRs for nearly all intercollegiate competitors. How would it be able to exercise its supposed market power? Because each deal is negotiated separately, and, other than some mundane, fixed back-office expenses, the costs of rights management must be incurred whether a firm negotiates one deal or 100, there are no substantial economies of scale in the purchasing of MMRs. As a result, the existence of deals with other schools won’t automatically translate into better deals with subsequent schools.

Now, imagine if one school retained its own MMRs, but decided it might want to license them to an intermediary. Does it face anticompetitive market conditions if there is only a single provider of such services? To begin with, there is never only a single provider, as each school can provide the services in-house. This is not even the traditional monopoly constraint of simply “not buying,” which makes up the textbook “deadweight loss” from monopoly: In this case “not buying” does not mean going without; it simply means providing for oneself.

More importantly, because the school has a monopoly on access to its own marketing rights (to say nothing of access to its own physical facilities) unless and until it licenses them, its own bargaining power is largely independent of an intermediary’s access to other schools’ rights. If it were otherwise, each school would face anticompetitive market conditions simply by virtue of other schools’ owning their own rights!

It is possible that a larger, older firm will have more expertise and will be better able to negotiate deals with other schools — i.e., it will reap the benefits of learning by doing. But the returns to learning by doing derive from the ability to offer higher-quality/lower-cost services over time — which are a source of economic benefit, not cost. At the same time, the bulk of the benefits of experience may be gained over time with even a single set of MMRs, given the ever-varying range of circumstances even a single school will create: There may be little additional benefit (and, to be sure, there is additional cost) from managing multiple schools’ MMRs. And whatever benefits specialized firms offer, they also come with agency costs, and an intermediary’s specialized knowledge about marketing MMRs may or may not outweigh a school’s own specialized knowledge about the nuances of its particular circumstances. Moreover, because of knowledge spillovers and employee turnover this marketing expertise is actually widely distributed; not surprisingly, JMI Sports’ MMR unit, one of the most recent and successful entrants into the business was started by a former employee of IMG. Several other firms started out the same way.

The right way to begin thinking about the issue is this: Imagine if MMR intermediaries didn’t exist — what would happen? In this case, the answer is readily apparent because, for a significant number of schools (about 37% of Division I schools, in fact) MMR licensing is handled in-house, without the use of intermediaries. These schools do, in fact, attract advertisers, and there is little indication that they earn less net profit for going it alone. Schools with larger audiences, better targeted to certain advertisers’ products, command higher prices. Each school enjoys an effective monopoly over advertising channels around its own games, and each has bargaining power derived from its particular attractiveness to particular advertisers.

In effect, each school faces a number of possible options for MMR monetization — most notably a) up-front contracting to an intermediary, which then absorbs the risk, expense, and possible up-side of ongoing licensing to advertisers, or b) direct, ongoing licensing to advertisers. The presence of the intermediary doesn’t appreciably change the market, nor the relative bargaining power of sellers (schools) and buyers (advertisers) of advertising space any more than the presence of temp firms transforms the fundamental relationship between employers and potential part-time employees.

In making their decisions, schools always have the option of taking their MMR management in-house. In facing competing bids from firms such as IMG or Learfield, from their own conferences, or from professional sports teams, the opening bid, in a sense, comes from the school itself. Even the biggest intermediary in the industry must offer the school a deal that is at least as good as managing the MMRs in-house.

The true relevant market: Advertising

According to economist Andy Schwarz, if the relevant market is “college-based marketing services to Power 5 schools, the antitrust authorities may have more concerns than if it’s marketing services in sports.” But this entirely misses the real market exchange here. Sure, marketing services are purchased by schools, but their value to the schools is independent of the number of other schools an intermediary also markets.

Advertisers always have the option of deploying their ad dollars elsewhere. If Coca-Cola wants to advertise on Auburn’s stadium video board, it’s because Auburn’s video board is a profitable outlet for advertising, not because the Auburn ads are bundled with advertising at dozens of other schools (although that bundling may reduce the total cost of advertising on Auburn’s scoreboard as well as other outlets). Similarly, Auburn is seeking the highest bidder for space on its video board. It does not matter to Auburn that the University of Georgia is using the same intermediary to sell ads on its stadium video board.

The willingness of purchasers — say, Coca-Cola or Toyota — to pay for collegiate multimedia advertising is a function of the school that licenses it (net transaction costs) — and MMR agents like IMG and Learfield commit substantial guaranteed sums and a share of any additional profits for the rights to sell that advertising: For example, IMG recently agreed to pay $150 million over 10 years to renew its MMR contract at UCLA. But this is the value of a particular, niche form of advertising, determined within the context of the broader advertising market. How much pricing power over scoreboard advertising does any university, or even any group of universities under the umbrella of an intermediary have, in a world in which Coke and Toyota can advertise virtually anywhere — including during commercial breaks in televised intercollegiate games, which are licensed separately from the MMRs licensed by companies like IMG and Learfield?

There is, in other words, a hard ceiling on what intermediaries can charge schools for MMR marketing services: The schools’ own cost of operating a comparable program in-house.

To be sure, for advertisers, large MMR marketing firms lower the transaction costs of buying advertising space across a range of schools, presumably increasing demand for intercollegiate sports advertising and sponsorship. But sponsors and advertisers have a wide range of options for spending their marketing dollars. Intercollegiate sports MMRs are a small slice of the sports advertising market, which, in turn, is a small slice of the total advertising market. Even if one were to incorrectly describe the combined entity as a “juggernaut” in intercollegiate sports, the MMR rights it sells would still be a flyspeck in the broader market of multimedia advertising.

According to one calculation (by MoffettNathanson), total ad spending in the U.S. was about $191 billion in 2016 (Pew Research Center estimates total ad revenue at $240 billion) and the global advertising market was estimated to be worth about $493 billion. The intercollegiate MMR segment represents a minuscule fraction of that. According to Jason Belzer, “[a]t the time of its sale to WME in 2013, IMG College’s yearly revenue was nearly $500 million….” Another source puts it at $375 million. Either way, it’s a fraction of one percent of the total market, and even combined with Learfield it will remain a minuscule fraction. Even if one were to define a far narrower sports sponsorship market, which a Price Waterhouse estimate puts at around $16 billion, the combined companies would still have a tiny market share.

As sellers of MMRs, colleges are competing with each other, professional sports such as the NFL and NBA, and with non-sports marketing opportunities. And it’s a huge and competitive market.

Barriers to entry

While capital requirements and the presence of long-term contracts may present challenges to potential entrants into the business of marketing MMRs, these potential entrants face virtually no barriers that are not, or have not been, faced by incumbent providers. In this context, one should keep in mind two factors. First, barriers to entry are properly defined as costs incurred by new entrants that are not incurred by incumbents (no matter what Joe Bain says; Stigler always wins this dispute…). Every firm must bear the cost of negotiating and managing each schools’ MMRs, and, as noted, these costs don’t vary significantly with the number of schools being managed. And every entrant needs approximately the same capital and human resources per similarly sized school as every incumbent. Thus, in this context, neither the need for capital nor dedicated employees is properly construed as a barrier to entry.

Second, as the DOJ and FTC acknowledge in the Horizontal Merger Guidelines, any merger can be lawful under the antitrust laws, no matter its market share, where there are no significant barriers to entry:

The prospect of entry into the relevant market will alleviate concerns about adverse competitive effects… if entry into the market is so easy that the merged firm and its remaining rivals in the market, either unilaterally or collectively, could not profitably raise price or otherwise reduce competition compared to the level that would prevail in the absence of the merger.

As noted, there are low economies of scale in the business, with most of the economies occurring in the relatively small “back office” work of payroll, accounting, human resources, and employee benefits. Since the 2000s, the entry of several significant competitors — many entering with only one or two schools or specializing in smaller or niche markets — strongly suggests that there are no economically important barriers to entry. And these firms have entered and succeeded with a wide range of business models and firm sizes:

  • JMI Sports — a “rising boutique firm” — hired Tom Stultz, the former senior vice president and managing director of IMG’s MMR business, in 2012. JMI won its first (and thus, at the time, only) MMR bid in 2014 at the University of Kentucky, besting IMG to win the deal.
  • Peak Sports MGMT, founded in 2012, is a small-scale MMR firm that focuses on lesser Division I and II schools in Texas and the Midwest. It manages just seven small properties, including Southland Conference schools like the University of Central Arkansas and Southeastern Louisiana University.
  • Fox Sports entered the business in 2008 with a deal with the University of Florida. It now handles MMRs for schools like Georgetown, Auburn, and Villanova. Fox’s entry suggests that other media companies — like ESPN — that may already own TV broadcast rights are also potential entrants.
  • In 2014 the sports advertising firm, Van Wagner, hired three former Nelligan employees to make a play for the college sports space. In 2015 the company won its first MMR bid at Florida International University, reportedly against seven other participants. It now handles more than a dozen schools including Georgia State (which it won from IMG), Loyola Marymount, Pepperdine, Stony Brook, and Santa Clara.
  • In 2001 Fenway Sports Group, parent company of the Boston Red Sox and Liverpool Football Club, entered into an MMR agreement with Boston College. And earlier this year the Tampa Bay Lightning hockey team began handling multimedia marketing for the University of South Florida.

Potential new entrants abound. Most obviously, sports networks like ESPN could readily follow Fox Sports’ lead and advertising firms could follow Van Wagner’s. These companies have existing relationships and expertise that position them for easy entry into the MMR business. Moreover, there are already several companies that handle the trademark licensing for schools, any of which could move into the MMR management business, as well; both IMG and Learfield already handle licensing for a number of schools. Most notably, Fermata Partners, founded in 2012 by former IMG employees and acquired in 2015 by CAA Sports (a division of Creative Artists Agency), has trademark licensing agreements with Georgia, Kentucky, Miami, Notre Dame, Oregon, Virginia, and Wisconsin. It could easily expand into selling MMR rights for these and other schools. Other licensing firms like Exemplar (which handles licensing at Columbia) and 289c (which handles licensing at Texas and Ohio State) could also easily expand into MMR.

Given the relatively trivial economies of scale, the minimum viable scale for a new entrant appears to be approximately one school — a size that each school’s in-house operations, of course, automatically meets. Moreover, the Peak Sports, Fenway, and Tampa Bay Lightning examples suggest that there may be particular benefits to local, regional, or category specialization, suggesting that innovative, new entry is not only possible, but even likely, as the business continues to evolve.

Conclusion

A merger between IMG and Learfield should not raise any antitrust issues. College sports is a small slice of the total advertising market. Even a so-called “juggernaut” in college sports multimedia rights is a small bit in the broader market of multimedia marketing.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to bringing MMR management in-house, indicates that no competitor has any measurable market power that can disadvantage schools or advertisers.

The term “juggernaut” entered the English language because of misinterpretation and exaggeration of actual events. Fears of the IMG/Learfield merger crushing competition is similarly based on a misinterpretation of two-sided markets and misunderstanding of the reality of the of the market for college multimedia rights management. Importantly, the case is also a cautionary tale for those who would identify narrow, contract-, channel-, or platform-specific relevant markets in circumstances where a range of intermediaries and direct relationships can compete to offer the same service as those being scrutinized. Antitrust advocates have a long and inglorious history of defining markets by channels of distribution or other convenient, yet often economically inappropriate, combinations of firms or products. Yet the presence of marketing or other intermediaries does not automatically transform a basic, commercial relationship into a novel, two-sided market necessitating narrow market definitions and creative economics.

In recent years, the European Union’s (EU) administrative body, the European Commission (EC), increasingly has applied European competition law in a manner that undermines free market dynamics.  In particular, its approach to “dominant” firm conduct disincentivizes highly successful companies from introducing product and service innovations that enhance consumer welfare and benefit the economy – merely because they threaten to harm less efficient competitors.

For example, the EC fined Microsoft 561 million euros in 2013 for its failure to adhere to an order that it offer a version of its Window software suite that did not include its popular Windows Media Player (WMP) – despite the lack of consumer demand for a “dumbed down” Windows without WMP.  This EC intrusion into software design has been described as a regulatory “quagmire.”

In June 2017 the EC fined Google 2.42 billion euros for allegedly favoring its own comparison shopping service over others favored in displaying Google search results – ignoring economic research that shows Google’s search policies benefit consumers.  Google also faces potentially higher EC antitrust fines due to alleged abuses involving android software (bundling of popular Google search and Chrome apps), a product that has helped spur dynamic smartphone innovations and foster new markets.

Furthermore, other highly innovative single firms, such as Apple and Amazon (favorable treatment deemed “state aids”), Qualcomm (alleged anticompetitive discounts), and Facebook (in connection with its WhatsApp acquisition), face substantial EC competition law penalties.

Underlying the EC’s current enforcement philosophy is an implicit presumption that innovations by dominant firms violate competition law if they in any way appear to disadvantage competitors.  That presumption forgoes considering the actual effects on the competitive process of dominant firm activities.  This is a recipe for reduced innovation, as successful firms “pull their competitive punches” to avoid onerous penalties.

The European Court of Justice (ECJ) implicitly recognized this problem in its September 6, 2017 decision setting aside the European General Court’s affirmance of the EC’s 2009 1.06 billion euro fine against Intel.  Intel involved allegedly anticompetitive “loyalty rebates” by Intel, which allowed buyers to achieve cost savings in Intel chip purchases.  In remanding the Intel case to the General Court for further legal and factual analysis, the ECJ’s opinion stressed that the EC needed to do more than find a dominant position and categorize the rebates in order to hold Intel liable.  The EC also needed to assess the “capacity of [Intel’s] . . . practice to foreclose competitors which are at least as efficient” and whether any exclusionary effect was outweighed by efficiencies that also benefit consumers.  In short, evidence-based antitrust analysis was required.  Mere reliance on presumptions was not enough.  Why?  Because competition on the merits is centered on the recognition that the departure of less efficient competitors is part and parcel of consumer welfare-based competition on the merits.  As the ECJ cogently put it:

[I]t must be borne in mind that it is in no way the purpose of Article 102 TFEU [which prohibits abuse of a dominant position] to prevent an undertaking from acquiring, on its own merits, the dominant position on a market.  Nor does that provision seek to ensure that competitors less efficient than the undertaking with the dominant position should remain on the market . . . .  [N]ot every exclusionary effect is necessarily detrimental to competition. Competition on the merits may, by definition, lead to the departure from the market or the marginalisation of competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation[.]

Although the ECJ’s recent decision is commendable, it does not negate the fact that Intel had to wait eight years to have its straightforward arguments receive attention – and the saga is far from over, since the General Court has to address this matter once again.  These sorts of long-term delays, during which firms face great uncertainty (and the threat of further EC investigations and fines), are antithetical to innovative activity by enterprises deemed dominant.  In short, unless and until the EC changes its competition policy perspective on dominant firm conduct (and there are no indications that such a change is imminent), innovation and economic dynamism will suffer.

Even if the EC dithers, the United Kingdom’s (UK) imminent withdrawal from the EU (Brexit) provides it with a unique opportunity to blaze a new competition policy trail – and perhaps in so doing influence other jurisdictions.

In particular, Brexit will enable the UK’s antitrust enforcer, the Competition and Markets Authority (CMA), to adopt an outlook on competition policy in general – and on single firm conduct in particular – that is more sensitive to innovation and economic dynamism.  What might such a CMA enforcement policy look like?  It should reject the EC’s current approach.  It should focus instead on the actual effects of competitive activity.  In particular, it should incorporate the insights of decision theory (see here, for example) and place great weight on efficiencies (see here, for example).

Let us hope that the CMA acts boldly – carpe diem.  Such action, combined with other regulatory reforms, could contribute substantially to the economic success of Brexit (see here).

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” That preference is also crucial to understanding the changes to Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather [than] individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is in the characterization that placement matters more than relevance in influencing user behavior, the evidence the Commission cites in support of that claim doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results” and glosses over the fact that the “prominent placement” of Google’s “results” is not only a difference in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different than the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich and attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.” Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers. In fact, fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but also don’t figure in the Commission’s analysis, apparently.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors, complaining even as the world evolves around them, don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission Decision in the Google Shopping case — one cannot assess this until we have the text of the decision — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a formidable statement. In 2016, another official EU service published stats that described Alphabet as increasing its R&D by 22% and ranked it as the world’s 4th top R&D investor. Sure it can always be better. And sure this does not excuse everything. But still. The press conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or as a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals, both against (i) Google in relation to other product lines and against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn the lesson of the Microsoft remedy quagmire. The Commission refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for this, and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers willing to acquire the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of that can be found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases, where pricing remedies are costly, impractical, and ultimately inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). Instead, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage that this topic has attracted in recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense) when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question extends well beyond the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements, so there is little competitive relationship between the two products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations were completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or legal theory of liability will ever be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case-law on unilateral conduct is to treat the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, the exclusion of any firm is a per se concern, regardless of evidence of efficiency, entry or rivalry.

In turn, I tend to think that Google has a stronger game from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less keen to debate the substance of the law in unilateral conduct cases. This case could thus be a test case for setting boundaries on how freely the Commission can U-turn a case (the Commissioner said she would “take the case forward in a different way”).

On Thursday, March 30, Friday March 31, and Monday April 3, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries — discussing three proposed agricultural/biotech industry mergers awaiting judgment by antitrust authorities around the globe. These proposed mergers — Bayer/Monsanto, Dow/DuPont and ChemChina/Syngenta — present a host of fascinating issues, many of which go to the core of merger enforcement in innovative industries — and antitrust law and economics more broadly.

The big issue for the symposium participants was innovation (as it was for the European Commission, which cleared the Dow/DuPont merger last week, subject to conditions, one of which related to the firms’ R&D activities).

Critics of the mergers, as currently proposed, asserted that the increased concentration arising from the “Big 6” Ag-biotech firms consolidating into the Big 4 could reduce innovation competition by (1) eliminating parallel paths of research and development (Moss); (2) creating highly integrated technology/traits/seeds/chemicals platforms that erect barriers to new entry platforms (Moss); (3) exploiting eventual network effects that may result from the shift towards data-driven agriculture to block new entry in input markets (Lianos); or (4) increasing incentives to refuse to license, impose discriminatory restrictions in technology licensing agreements, or tacitly “agree” not to compete (Moss).

Rather than fixating on horizontal market share, proponents of the mergers argued that innovative industries are often marked by disruptions and that investment in innovation is an important signal of competition (Manne). An evaluation of the overall level of innovation should include not only the additional economies of scale and scope of the merged firms, but also advancements made by more nimble, less risk-averse biotech companies and smaller firms, whose innovations the larger firms can incentivize through licensing or M&A (Shepherd). In fact, increased efficiency created by economies of scale and scope can make funds available to source innovation outside of the large firms (Shepherd).

In addition, innovation analysis must also account for the intricately interwoven nature of agricultural technology across seeds and traits, crop protection, and, now, digital farming (Sykuta). Combined product portfolios generate more data to analyze, resulting in increased data-driven value for farmers and more efficiently targeted R&D resources (Sykuta).

While critics voiced concerns over such platforms erecting barriers to entry, markets are contestable to the extent that incumbents are incentivized to compete (Russell). It is worth noting that certain industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (including automobiles, wireless service, and cable networks) have seen their prices decrease substantially relative to inflation over the last 20 years — even as concentration has increased (Russell). Not coincidentally, product innovation in these industries, as in ag-biotech, has been high.

Ultimately, assessing the likely effects of each merger using static measures of market structure is arguably unreliable or irrelevant in dynamic markets with high levels of innovation (Manne).

Regarding patents, critics were skeptical that combining the patent portfolios of the merging companies would offer benefits beyond those arising from cross-licensing, and argued that it could serve to raise rivals’ costs (Ghosh). While this may be true in some cases, IP rights are probabilistic, especially in dynamic markets, as Nicolas Petit noted:

There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will survive invalidity proceedings in court; (iii) little safety from competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change.

In spite of these uncertainties, deals such as the pending ag-biotech mergers provide managers the opportunity to evaluate and reorganize assets to maximize innovation and return on investment in such a way that would not be possible absent a merger (Sykuta). Neither party would fully place its IP and innovation pipeline on the table otherwise.

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!