Archives For corporate & securities law

In the wake of the launch of Facebook’s content oversight board, Republican Senator Josh Hawley and FCC Commissioner Brendan Carr, among others, have taken to Twitter to criticize the firm and, in the process, demonstrate just how far the Right has strayed from its first principles on free speech and private property. Commissioner Carr’s thread makes the case that the members of the board are highly partisan, mostly left-wing, and cannot be trusted with the responsibility of oversight. Senator Hawley, for his part, argued that the board’s very existence is just further evidence of the need to break Facebook up.

Both Hawley and Carr have been lauded in right-wing circles, but in reality their positions contradict conservative understandings of the free speech and private property protections secured by the First Amendment.

This blog post serves as a sequel to a post I wrote last year here at TOTM explaining how There’s nothing “conservative” about Trump’s views on free speech and the regulation of social media. As I wrote there:

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide over this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Commissioner Carr’s complaint and Senator Hawley’s antitrust approach of breaking up Facebook have much more in common with the views traditionally held by left-wing Democrats on the need for the government to regulate private actors in order to promote speech interests. Originalists and law & economics scholars, on the other hand, have consistently taken the opposite view: that the First Amendment protects against government infringement of speech interests, including the right to editorial discretion. While there is clearly a conflict of visions in First Amendment jurisprudence, the conservative (and, in my view, correct) point of view should not be jettisoned by Republicans to achieve short-term political gains.

The First Amendment restricts government action, not private action

The First Amendment, by its very text, applies only to government action: “Congress shall make no law . . . abridging the freedom of speech.” It applies to the “State[s]” through the Fourteenth Amendment. It is extremely difficult to find any textual hook suggesting the First Amendment protects against private action, like that of Facebook.

Originalists have consistently agreed. Most recently, in Manhattan Community Access Corp. v. Halleck, Justice Kavanaugh—on behalf of the conservative bloc and the Court—wrote:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

This was true at the adoption of the First Amendment and remains true today in a high-tech world. Federal district courts have consistently dismissed First Amendment lawsuits against Facebook on the grounds there is no state action. 

For instance, in Nyabwa v. Facebook, the plaintiff initiated a civil rights lawsuit against Facebook for restricting his use of the platform. The U.S. District Court for the Southern District of Texas dismissed the case, noting:

Because the First Amendment governs only governmental restrictions on speech, Nyabwa has not stated a cause of action against FaceBook… Like his free speech claims, Nyabwa’s claims for violation of his right of association and violation of his due process rights are claims that may be vindicated against governmental actors pursuant to § 1983, but not a private entity such as FaceBook.

Similarly, in Young v. Facebook, the U.S. District Court for the Northern District of California rejected a claim that Facebook violated the First Amendment by deactivating the plaintiff’s Facebook page. The court declined to subject Facebook to the First Amendment analysis, stating that “because Young has not alleged any action under color of state law, she fails to state a claim under § 1983.”

The First Amendment restricts antitrust actions against Facebook, not Facebook’s editorial discretion over its platform

Far from restricting Facebook, the First Amendment actually restricts government actions aimed at platforms like Facebook when they engage in editorial discretion by moderating content. If an antitrust plaintiff were to act on the impulse to “break up” Facebook because of alleged political bias in its editorial discretion, the lawsuit would run headlong into the First Amendment’s protections.

There is no basis for concluding that online platforms lack editorial discretion under the law. In fact, Facebook’s position here is very similar to that of the newspaper in Miami Herald Publishing Co. v. Tornillo, in which the Supreme Court considered a state law giving candidates for public office a right to reply in newspapers to editorials written about them. The Florida Supreme Court upheld the statute, finding it furthered the “broad societal interest in the free flow of information to the public.” The U.S. Supreme Court, despite noting the level of concentration in the newspaper industry, nonetheless reversed. The Court explicitly held that the newspaper had a First Amendment right to editorial discretion:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Online platforms have the same First Amendment protections for editorial discretion. For instance, in both Search King v. Google and Langdon v. Google, two different federal district courts ruled that Google’s search results are subject to First Amendment protections, both citing Tornillo.

In Zhang v. Baidu.com, another district court went so far as to grant a Chinese search engine the right to editorial discretion in limiting access to democracy movements in China. The court found that the search engine “inevitably make[s] editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information.” Much like the search engine in Zhang, Facebook is clearly making editorial judgments about what information shows up in its newsfeed and where to display it.

None of this changes because the generally applicable law is antitrust rather than some other form of regulation. For instance, in Tornillo, the Supreme Court took pains to distinguish the case from an earlier antitrust case against newspapers, Associated Press v. United States, which found that there was no broad exemption from antitrust under the First Amendment.

The Court foresaw the problems relating to government-enforced access as early as its decision in Associated Press v. United States, supra. There it carefully contrasted the private “compulsion to print” called for by the Association’s bylaws with the provisions of the District Court decree against appellants which “does not compel AP or its members to permit publication of anything which their `reason’ tells them should not be published.”

In other words, Tornillo and Associated Press establish that the government may not compel speech through regulation, including through an antitrust remedy.

Once it is conceded that there is a speech interest here, the government must justify the use of antitrust law to compel Facebook to display the speech of users in the newsfeeds of others under the strict scrutiny test of the First Amendment. In other words, the use of antitrust law must be narrowly tailored to a compelling government interest. Even taking for granted that there may be a compelling government interest in facilitating a free and open platform (which is by no means certain), it is clear that this would not be narrowly tailored action. 

First, “breaking up” Facebook is clearly overbroad relative to the goal of promoting free speech on the platform. There is no need to break it up just because it has an Oversight Board that exercises editorial responsibilities. There are many less restrictive means, including market competition, which has greatly expanded consumer choice for communications and connections. Second, antitrust does not really offer a remedy for the free speech issues complained of here, as it would require courts to undertake long-term oversight and to compel speech in a manner foreclosed by Associated Press.

Note that this makes good sense from a law & economics perspective. Platforms like Facebook should be free to regulate the speech on their platforms as they see fit, and consumers are free to decide which platforms they wish to use based upon that information. While there are certainly network effects in social media, the plethora of options currently available, with low switching costs, suggests that there is no basis for antitrust action against Facebook on the theory that consumers are otherwise unable to speak. In other words, the least restrictive means test of the First Amendment is best fulfilled by market competition in this case.

If there were a basis for antitrust intervention against Facebook, either through merger review or as a standalone monopoly claim, the underlying issue would be harm to competition. While this would have implications for speech concerns (which may be incorporated into an analysis through quality-adjusted price), it is difficult to conceive how an antitrust remedy could be fashioned around speech issues consistent with the First Amendment.

Conclusion

Despite now well-worn complaints by so-called conservatives in and out of the government about the baneful influence of Facebook and other Big Tech companies, the First Amendment forecloses government actions to violate the editorial discretion of these companies. Even if Commissioner Carr is right, this latest call for antitrust enforcement against Facebook by Senator Hawley should be rejected for principled conservative reasons.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Dirk Auer (Senior Fellow of Law & Economics, ICLE).]

Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).

Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.

The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:

And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.

That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.

* * *

Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.

The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient. 

Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies: 

Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.

Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here):

Unsurprisingly, politicians were also quick to jump on the bandwagon. David Cicilline, the powerful chairman of the House Antitrust Subcommittee, opined that:

And FTC Commissioner Rebecca Kelly Slaughter quickly called for a retrospective review of the deal:

The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.

These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?

Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.

What is a “killer acquisition”…?

Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.
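To make that incentive concrete, here is a minimal textbook sketch of our own (it is not drawn from the article or from any of the papers discussed below): with linear demand and zero costs, a monopolist’s profit exceeds the combined profit of two Cournot duopolists, so an incumbent would, in principle, pay more to preserve its monopoly than the entrant’s standalone business is worth.

$$
P = a - Q,\quad c = 0:\qquad
\pi_{M} = \frac{a^{2}}{4}
\;>\;
\pi_{1} + \pi_{2} = \frac{a^{2}}{9} + \frac{a^{2}}{9} = \frac{2a^{2}}{9}.
$$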

For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:

“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

Moreover, the authors add that:

Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur

Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:

If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.

…And what isn’t a killer acquisition?

What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because it has superior know-how or a superior governance structure that enables it to realize greater returns and productivity than its target could. In the case of a so-called killer acquisition, this means shutting down a negative-ROI project and redeploying resources to other projects or other uses — including those that may not have any direct relation to the discontinued project.

Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.  

In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.

As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.

The realities of the ventilator market and their implications for the “killer acquisition” story

1. The mechanical ventilator market is highly competitive

As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive. 

A number of reports conclude that there is significant competition in the industry. One source cites at least seven large producers. Another report cites eleven large players. And, in the words of another report:

Medical ventilators market competition is intense. 

The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position. 

This intense competition, along with the small market shares of the merging firms, likely explains why the FTC declined to open an in-depth investigation into Covidien’s acquisition of Newport.

Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, authorities (particularly the European Commission) could nevertheless have analyzed that market if Covidien’s presumptive market share was particularly high. The fact that they declined to do so tends to suggest that the ventilator market was relatively unconcentrated.

2. The value of the merger was too small

A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the acquisition’s value: $103 million.

Indeed, if it was clear that Newport was about to revolutionize the ventilator market, then Covidien would likely have been made to pay significantly more than $103 million to acquire it. 

As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.
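To see the asymmetry in symbols (our illustration, using generic notation rather than anything in the article): let $\pi_M$ denote the incumbent’s profit if the rival’s project disappears, and $\pi_I$ and $\pi_E$ the incumbent’s and entrant’s profits if the project succeeds. Because eliminating competition raises joint profits, the incumbent’s maximum willingness to pay exceeds the target’s standalone value:

$$
\pi_M > \pi_I + \pi_E
\quad\Longrightarrow\quad
\pi_M - \pi_I \;>\; \pi_E ,
$$

where $\pi_M - \pi_I$ is the most the incumbent would bid and $\pi_E$ is what the target is worth on its own. That wedge is what allows a “killer” acquirer to offer a substantial premium and still come out ahead.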

Indeed, as a recent article by Kevin Bryan and Erik Hovenkamp notes, an acquisition value out of line with current revenues may signal the significance of a pending acquisition in cases where enforcers may not actually know the value of the target’s underlying technology:

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.

The strategy only works, however, if the target firm’s shareholders agree that share value properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Relatively low acquisition prices relative to market size, therefore, tend to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovations occurring.

We can apply this reasoning to Covidien’s acquisition of Newport: 

  • Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
  • As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out). 
  • For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”

If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market).

The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien — as well as Newport’s shareholders — would have known that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price.

Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.

“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”

If this was achievable, Newport stood to earn a substantial share of the profits in a multi-billion-dollar industry.

Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, then Covidien would surely have had to offer significantly more than $103 million in order to induce Newport’s shareholders to part with their shares.

Given the low valuation, however, as well as the fact that Newport produced other ventilators — and continues to do so to this day — there is no escaping the conclusion that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success.

Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.

3. Lessons from Covidien’s ventilator product decisions  

The killer acquisition claims are further weakened by at least four other important pieces of information: 

  1. Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators.
  2. There was little overlap between Covidien’s and Newport’s ventilators — or, at the very least, they were highly differentiated.
  3. Covidien appears to have discontinued production of its own portable ventilator in 2014.
  4. The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.

Covidien continued to develop and sell Newport’s ventilators

For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and closed on May 1, 2012.

However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.

It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted). 

Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.

Covidien continued to develop and sell Newport’s other ventilators

Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced the Newport HT70 Plus, an improved version of the HT70, in 2013.

If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them? 

At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.

There was little overlap between Covidien’s and Newport’s ventilators

Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators. 

This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:

Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans to which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).

In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much-more-portable ventilators, suitable for home use (notably the Aura, HT50 and HT70 lines). 

Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:

[D]esigned to provide support to patients who do not require complex critical care ventilators.

A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.

This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.

The conclusion that Covidien’s and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:

This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.

And that:

Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.

In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.

Covidien appears to have discontinued production of its own portable ventilator in 2014

Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.

The product is listed in the company’s 2011, 2012 and 2013 annual reports:

Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….

(The PB540 was launched in 2009; the updated PB560 in 2010. The PB520 was the EU version of the device, launched in 2011).

But in 2014, the PB560 was no longer listed among the company’s ventilator products:  

Airway & Ventilation, which primarily includes sales of airway, ventilator and inhalation therapy products and breathing systems.

Key airway & ventilation products include: the Puritan Bennett™ 840 and 980 ventilators, the Newport™ e360 and HT70 ventilators….

Nor — despite its March 31 and April 1 “open sourcing” of the specifications and software necessary to enable others to produce the PB560 — does Medtronic appear to have restarted production, and the company did not mention the device in its March 18 press release announcing its own, stepped-up ventilator production plans.

Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.

(Admittedly we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research seems to indicate strongly that this is indeed the case).

Putting the Newport deal in context

Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices. 

That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, the acquisitions undertaken by Covidien seem to have been primarily targeted at operating room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one.

When Covidien was purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the companies’ products, with Covidien focusing predominantly on in-hospital “diagnostic, surgical, and critical care” devices and Medtronic on post-acute care.

Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces

So why was the Aura ventilator discontinued?

Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems. 

The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where

mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.

The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360 which could be used in pediatric care (for newborns smaller than 5kg) but was not intended for home care use (or the extreme scenarios envisioned by the US government); and the more portable HT70 which could be used in home care environments, but not for newborns. 

Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:

The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).

A press release issued by Medtronic confirms that

the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.

And the US Government RFP confirms that this was indeed an important requirement:

The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features: 

Flexibility to accommodate a wide patient population range from neonate to adult.

Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:

Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver, both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.

As Jason Crawford, an engineer and tech industry commentator, put it:

Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.

The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:

  • Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall that development of the ventilator was mostly complete when Covidien put a halt to the project).
  • Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
  • Covidien has repeatedly been forced to recall some of its other ventilators (here, here and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here).

Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly. 

In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition. 

Ending the Aura project might have been an efficient outcome

As suggested above, moreover, it is entirely possible that Covidien was better able to recognize the poor prospects of Newport’s Aura project, and better organized to make the requisite decision to abandon it.

A small company like Newport faces greater difficulties abandoning entrepreneurial projects because doing so can impair a privately held firm’s ability to raise funds for subsequent projects.

Moreover, the relatively large share of revenue and reputation that Newport — worth $103 million in 2012, versus Covidien’s $11.8 billion — would have realized from fulfilling a substantial US government project could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.

While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage the target more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965):

Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.

Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.

Indeed, as Florian Ederer himself noted with respect to the Covidien/Newport merger, 

“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.

Concluding remarks

In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.

Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry. 

And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.

The story also falls prey to what Ronald Coase called “blackboard economics”:

What is studied is a system which lives in the minds of economists but not on earth. 

Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before they voiced their recriminations. 

The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all contradicting evidence.

Finally, what the New York Times piece does offer is a chilling tale of government failure.

The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US. 

The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit. 

And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

The Wall Street Journal reports congressional leaders have agreed to impose limits on stock buybacks and dividend payments for companies receiving aid under the COVID-19 disaster relief package. 

Rather than a flat-out ban, the draft legislation forbids any company taking federal emergency loans or loan guarantees from repurchasing its own stock or paying shareholder dividends. The ban lasts for the term of the loans, plus one year after the aid has ended.

In theory, under a strict set of conditions, there is no difference between dividends and buybacks. Both approaches distribute cash from the corporation to shareholders. In practice, there are big differences between dividends and share repurchases.

  • Dividends are publicly visible actions and require authorization by the board of directors. Shareholders have expectations of regular, stable dividends. Buybacks generally lack such transparency. Firms have flexibility in choosing the timing and the amount of repurchases, subject to the details of their repurchase programs.
  • Cash dividends have no effect on the number of shares outstanding. Share repurchases, in contrast, reduce the share count and thereby increase earnings per share, all other things being equal (see the sketch below).
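Here is a back-of-the-envelope illustration of that second point, using hypothetical numbers purely for the arithmetic: a firm earning $E$ in net income with $n$ shares outstanding that retires $k$ shares raises its earnings per share mechanically, even though total earnings are unchanged.

$$
\text{EPS} = \frac{E}{n} \;\longrightarrow\; \frac{E}{n-k},
\qquad\text{e.g.}\quad
\frac{\$100\text{m}}{50\text{m shares}} = \$2.00
\;\longrightarrow\;
\frac{\$100\text{m}}{45\text{m shares}} \approx \$2.22 .
$$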

Over the past 15 years, buybacks have outpaced dividend payouts. The figure above, from Seeking Alpha, shows that while dividends have grown relatively smoothly over time, the aggregate value of buybacks is volatile and varies with the business cycle. In general, firms increase their repurchases relative to dividends when the economy booms and reduce them when the economy slows or shrinks.

This observation is consistent with a theory that buybacks are associated with periods of greater-than-expected financial performance. On the other hand, dividends are associated with expectations of long-term profitability. Dividends can decrease, but only when profits are expected to be “permanently” lower. 

As the figure above shows, during the Great Recession dividends declined by about 10%, while share repurchases plummeted by approximately 85%. The flexibility afforded by buybacks provided stability in dividends.

There is some logic to dividend and buyback limits imposed by the COVID-19 disaster relief package. If a firm has enough cash on hand to pay dividends or repurchase shares, then it doesn’t need cash assistance from the federal government. Similarly, if a firm is so desperate for cash that it needs a federal loan or loan guarantee, then it doesn’t have enough cash to provide a payout to shareholders. Surely managers understand this and sophisticated shareholders should too.

Because of this understanding, the dividend and buyback limits may be a non-binding constraint. It’s not a “good look” for a corporation to accept millions of dollars in federal aid, only to turn around and hand out those taxpayer dollars to the company’s shareholders. That’s a sure way to get an unflattering profile in the New York Times and an invitation to attend an uncomfortable hearing at the U.S. Capitol. Even if a distressed firm could repurchase its shares, it’s unlikely that it would.

The logic behind the plus-one-year ban on dividends and buybacks is less clear. The relief package is meant to get the U.S. economy back to normal as fast as possible. That means if a firm repays its financial assistance early, the company’s shareholders should be rewarded with a cash payout rather than waiting a year for some arbitrary clock to run out.

The ban on dividends and buybacks may lead to an unintended consequence of increased merger and acquisition activity. Vox reports that an email to Goldman Sachs’ investment banking division says the firm expects to see an increase in hostile takeovers and shareholder activism as the prices of public companies fall. Cash-rich firms that are subject to the ban and cannot get that cash to their existing shareholders may be especially attractive takeover targets.

Desperate times call for desperate measures, and these are desperate times. Buyback backlash has been brewing for some time, and the COVID-19 relief package presents a perfect opportunity to ban buybacks. With the pressures businesses are under right now, it’s unlikely there will be many buybacks over the next few months. The concern should be over the unintended consequences facing firms once the economy recovers.

Yesterday was President Trump’s big “Social Media Summit,” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide over this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly that:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court):

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, ever since the Reagan FCC abolished the fairness doctrine in 1987, have angled for its resurrection in some form, as well as for other policies inconsistent with the “free-speech-as-liberty” view. Thus, Democratic FCC Commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if it means compelling speech upon private actors. 

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of 75 scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230 — as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.

[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

Introduction

In a recent article, Joe Kattan and Tim Muris (K&M) criticize our article on the predictive power of bargaining models in antitrust, in which we used two recent applications to explore the implications for uses of bargaining models in courts and antitrust agencies moving forward.  Like other theoretical models used to predict competitive effects, complex bargaining models require courts and agencies to rigorously test their predictions against data from the real-world markets and institutions to which they are being applied.  Where the “real-world evidence,” as Judge Leon described such data in AT&T/Time Warner, is inconsistent with the predictions of a complex bargaining model, the tribunal should reject the model rather than reality.

K&M, who represent Intel Corporation in connection with the FTC v. Qualcomm case now pending in the Northern District of California, focus exclusively upon, and take particular issue with, one aspect of our prior article:  We argued that, as in AT&T/Time Warner, the market realities at issue in FTC v. Qualcomm are inconsistent with the use of Dr. Carl Shapiro’s bargaining model to predict competitive effects in the relevant market.  K&M—no doubt confident in their superior knowledge of the underlying facts due to their representation in the matter—criticize our analysis for our purported failure to get our hands sufficiently dirty with the facts.  They criticize our broader analysis of bargaining models and their application for our failure to discuss specific pieces of evidence presented at trial, and offer up several quotations from Qualcomm’s customers as support for Shapiro’s economic analysis.  K&M concede that, as we argue, the antitrust laws should not condemn a business practice in the absence of robust economic evidence of actual or likely harm to competition; yet, they do not see any conflict between that concession and their position that the FTC need not, through its expert, quantify the royalty surcharge imposed by Qualcomm because the “exact size of the overcharge was not relevant to the issue of Qualcomm’s liability.” [Kattan and Muris miss the point that, within the context of economic modeling, the failure to identify the magnitude of an effect with any certainty when data are available, including whether the effect is statistically different from zero, calls into question the model’s robustness more generally.]
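
To illustrate the point in that bracketed note, here is a minimal sketch of the kind of basic check a tribunal might expect before crediting a modeled effect. The numbers are entirely hypothetical and are not drawn from the case; the sketch simply asks whether a set of noisy per-agreement estimates of an alleged royalty surcharge is statistically distinguishable from zero.

```python
# A minimal sketch (hypothetical data, not the case record): given per-agreement
# estimates of an alleged "royalty surcharge," test whether the estimated effect
# is statistically distinguishable from zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-agreement surcharge estimates (percentage points), as a model
# fitted to noisy data might produce.
estimated_surcharge = rng.normal(loc=0.2, scale=1.5, size=30)

mean = estimated_surcharge.mean()
sem = stats.sem(estimated_surcharge)
t_stat, p_value = stats.ttest_1samp(estimated_surcharge, popmean=0.0)
ci_low, ci_high = stats.t.interval(0.95, len(estimated_surcharge) - 1,
                                   loc=mean, scale=sem)

print(f"mean estimated surcharge: {mean:.3f}")
print(f"95% confidence interval: ({ci_low:.3f}, {ci_high:.3f})")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If the interval comfortably includes zero, the model has not identified the
# magnitude of the effect with any certainty -- the robustness concern raised
# in the bracketed note above.
```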

Though our prior article was a broad one, not limited to FTC v. Qualcomm or intended to cover record evidence in detail, we welcome K&M’s critique and are happy to accept their invitation to engage further on the facts of that particular case.  We agree that accounting for market realities is very important when complex economic models are at play.  Unfortunately, K&M’s position that the evidence “supports Shapiro’s testimony overwhelmingly” ignores the sound empirical evidence employed by Dr. Aviv Nevo during trial and has not aged well in light of the internal Apple documents made public in Qualcomm’s Opening Statement following the companies’ decision to settle the case, which Apple had initiated in January 2017.

Qualcomm’s Opening Statement in the Apple litigation revealed a number of new facts that are problematic, to say the least, for K&M’s position, and even more troublesome for Shapiro’s model and the FTC’s case.  Of course, as counsel to an interested party in the FTC case, K&M may well have been aware of the internal Apple documents cited in Qualcomm’s Opening Statement (or similar documents) and simply disagree about their significance.  On the other hand, it is quite clear the Department of Justice Antitrust Division found them to be significantly damaging; it took the rare step of filing a Statement of Interest of the United States with the district court citing the documents and imploring the court to call for additional briefing and hold a hearing on issues related to a remedy in the event that it finds Qualcomm liable on any of the FTC’s claims. The internal Apple documents cited in Qualcomm’s Opening Statement leave no doubt as to several critical market realities that call into question the FTC’s theory of harm and Shapiro’s attempts to substantiate it.

(For more on the implications of these documents, see Geoffrey Manne’s post in this series, here).

First, the documents laying out Apple’s litigation strategy clearly establish that Apple has a high regard for Qualcomm’s technology and patent portfolio and that it strategized for several years about how to reduce its net royalties and to hurt Qualcomm financially.

Second, the documents undermine Apple’s public complaints about Qualcomm and call into question the validity of the underlying theory of harm in the FTC’s case.  In particular, the documents plainly debunk Apple’s claims that Qualcomm’s patents weakened over time as a result of a decline in the quality of the technology and that Qualcomm devised an anticompetitive strategy in order to extract value from a weakening portfolio.  The documents illustrate that, in fact, Apple adopted a deliberate strategy of trying to manipulate the value of Qualcomm’s portfolio.  The company planned to “creat[e] evidence” by leveraging its purchasing power to methodically license less expensive patents in the hope of making Qualcomm’s royalties appear artificially inflated. In other words, if Apple’s made-for-litigation position were correct, then it would be only because of Apple’s attempt to manipulate and devalue Qualcomm’s patent portfolio, not because there had been any real change in its value.

Third, the documents directly refute some of the arguments K&M put forth in their critique of our prior article, in which we invoked Dr. Nevo’s empirical analysis of royalty rates over time as important evidence of historical facts that contradict Dr. Shapiro’s model.  For example, K&M attempt to discredit Nevo’s analysis by claiming he did not control for changes in the strength of Qualcomm’s patent portfolio which, they claim, had weakened over time. According to internal Apple documents, however, “Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs” than do Huawei, Nokia, Ericsson, IDCC, and Apple. Another document states that “Qualcomm is widely considered the owner of the strongest patent portfolio for essential and relevant patents for wireless standards.” Indeed, Apple’s documents show that Apple sought artificially to “devalue SEPs” in the industry by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reshape what FRAND means. The ultimate goal of this pursuit was stated frankly by Apple: To “reduce Apple’s net royalty to Qualcomm” despite conceding that Qualcomm’s chips “engineering wise . . . have been the best.”

As new facts relevant to the FTC’s case and contrary to its theory of harm come to light, it is important to re-emphasize the fundamental point of our prior article: Model predictions that are inconsistent with actual market evidence should give fact finders serious pause before accepting the results as reliable.  This advice is particularly salient in a case like FTC v. Qualcomm, where intellectual property and innovation are critical components of the industry and its competitiveness, because condemning behavior that is not truly anticompetitive may have serious, unintended consequences. (See Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and the Limits of Antitrust Institutions, 78 Antitrust L.J. 1 (2012); Geoffrey A. Manne & Joshua D. Wright, Innovation and the Limits of Antitrust, 6 J. Competition L. & Econ. 153 (2010)).

The serious consequences of a false positive, that is, the erroneous condemnation of a procompetitive or competitively neutral business practice, are undoubtedly what caused the Antitrust Division to file its Statement of Interest in the FTC’s case against Qualcomm.  That Statement correctly highlights the Apple documents as support for the Government’s concern that “an overly broad remedy in this case could reduce competition and innovation in markets for 5G technology and downstream applications that rely on that technology.”

In this reply, we examine closely the market realities that conflict with, and hence undermine, both Dr. Shapiro’s bargaining model and the FTC’s theory of harm in its case against Qualcomm.  We believe the “large body of evidence” offered by K&M supporting Shapiro’s theoretical analysis is insufficient to sustain his conclusions under standard antitrust analysis, including the requirement that a plaintiff alleging monopolization or attempted monopolization provide evidence of actual or likely anticompetitive effects.  We will also discuss the implications of the newly public internal Apple documents for the FTC’s case, which remains pending at the time of this writing, and for future government investigations involving allegedly anticompetitive licensing of intellectual property.

I. Kattan and Muris Rely Upon Inconsequential Testimony and Mischaracterize Dr. Nevo’s Empirical Analysis

K&M march through a series of statements from Qualcomm’s customers asserting that the threat of Qualcomm discontinuing the supply of modem chips forced them to agree to unreasonable licensing demands.  This testimony, however, is reminiscent of Dr. Shapiro’s testimony in AT&T/Time Warner concerning the threat of a long-term blackout of CNN and other Turner channels, a threat never borne out by actual conduct: Qualcomm has never cut off any customer’s supply of chips.  The assertion that companies negotiating with Qualcomm either had to “agree to the license or basically go out of business” ignores the reality that even if Qualcomm discontinued supplying chips to a customer, the customer could obtain chips from one of four rival sources.  This was not a theoretical possibility.  Indeed, Apple has been sourcing chips from Intel since 2016 and made the decision to switch to Intel specifically in order, in its own words, to exert “commercial pressure against Qualcomm.”

Further, as Dr. Nevo pointed out at trial, SEP license agreements are typically long term (e.g., 10- or 15-year agreements) and are negotiated far less frequently than chip prices, which are typically negotiated annually.  In other words, Qualcomm’s royalty rate is set prior to, and independent of, chip sale negotiations.

K&M raise a number of theoretical objections to Nevo’s empirical analysis.  For example, K&M accuse Nevo of “cherry picking” the licenses he included in his empirical analysis to show that royalty rates remained constant over time, stating that he “excluded from consideration any license that had non-standard terms.” They mischaracterize Nevo’s testimony on this point.  Nevo excluded from his analysis agreements that, according to the FTC’s own theory of harm, would be unaffected (e.g., agreements that were signed subject to government supervision or agreements that have substantially different risk splitting provisions).  In any event, Nevo testified that modifying his analysis to account for Shapiro’s criticism regarding the excluded agreements would have no material effect on his conclusions.  To our knowledge, Nevo’s testimony is the only record evidence providing any empirical analysis of the effects of Qualcomm’s licensing agreements.

As previously mentioned, K&M also claim that Dr. Nevo’s analysis failed to account for the alleged weakening of Qualcomm’s patent portfolio over time.  Apple’s internal documents, however, are fatal to that claim.  K&M also pinpoint the failure to control for differences among customers and changes in the composition of handsets over time as critical errors in Nevo’s analysis.  Their assertion that Nevo should have controlled for differences among customers is puzzling.  They do not elaborate upon that criticism, but they seem to believe different customers are entitled to different FRAND rates for the same license.  But Qualcomm’s standard practice—due to the enormous size of its patent portfolio—is and has always been to charge all licensees the same rate for the entire portfolio.

As to changes in the composition of handsets over time, no doubt a smartphone today has many more features than a first-generation handset that only made and received calls; those new features, however, would be meaningless without Qualcomm’s SEPs, which are implemented by mobile chips that enable cellular communication.  One must wonder why Qualcomm should have reduced the royalty rate on licenses for patents that are just as fundamental to the functioning of mobile phones today as they were to the functioning of a first-generation handset.  K&M ignore the fundamental importance of Qualcomm’s SEPs in claiming that royalty rates should have declined along with the declining quality-adjusted prices of mobile phones.  They also, conveniently, ignore the evidence that the industry has been characterized by increasing output and quality—increases which can certainly be attributed at least in part to Qualcomm’s chips being “engineering wise . . . the best.”

II. Apple’s Internal Documents Eviscerate the FTC’s Theory of Harm

The FTC’s theory of harm is premised upon Qualcomm’s allegedly charging a supra-FRAND rate for its SEPs (the “royalty surcharge”), which squeezes the margins of OEMs and consequently prevents rival chipset suppliers from obtaining a sufficient return when negotiating with those OEMs. (See Luke Froeb et al.’s criticism of the FTC’s theory of harm on these and related grounds, here). To predict the effects of Qualcomm’s allegedly anticompetitive conduct, Dr. Shapiro compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs.  Shapiro testified that he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences” for competition and for consumers, though his bargaining model did not quantify the effects of Qualcomm’s practice.

The premise of the FTC’s theory is that FRAND operates as a meaningful, objective competitive benchmark that Qualcomm was able to evade as a result of its market power in chipsets.  But Apple manipulated negotiations as a tactic to reshape FRAND itself.  The closer look at the facts invited by K&M does nothing to improve one’s view of the FTC’s claims.  The Apple documents exposed at trial make it clear that Apple deliberately manipulated negotiations with other suppliers in order to make it appear to courts and antitrust agencies that something other than the quality of Qualcomm’s technology was driving royalty rates.  For example, Apple’s own documents show it sought artificially to “devalue SEPs” by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reshape what FRAND means in this industry. Simply put, Apple’s strategy was to negotiate cheap supposedly “comparable” licenses with other SEP licensors as part of a plan to reduce its net royalties to Qualcomm.

As part of the same strategy, Apple spent years arguing to regulators and courts that Qualcomm’s patents were no better than those of its competitors.  But Apple’s own internal documents tell a very different story:

  • “Nokia’s patent portfolio is significantly weaker than Qualcomm’s.”
  • “[InterDigital] makes minimal contributions to [the 4G/LTE] standard”
  • “Compared to [Huawei, Nokia, Ericsson, IDCC, and Apple], Qualcomm holds a stronger position in , and particularly with respect to cellular and Wi-Fi SEPs.”
  • “Compared to other licensors, Qualcomm has more significant holdings in key areas such as media processing, non-cellular communications and hardware.  Likewise, using patent citation analysis as a measure of thorough prosecution within the US PTO, Qualcomm patents (SEPs and non-SEPs both) on average score higher compared to the other, largely non-US based licensors.”

One internal document that is particularly troubling states that Apple’s plan was to “create leverage by building pressure” in order to (i) hurt Qualcomm financially and (ii) put Qualcomm’s licensing model at risk. What better way to harm Qualcomm financially and put its licensing model at risk than to complain to regulators that the business model is anticompetitive and tie the company up in multiple costly litigations?  That businesses make strategic plans to harm one another is no surprise.  But it underscores the importance of antitrust institutions – with their procedural and evidentiary requirements – in separating meritorious claims from fabricated ones. They failed to do so here.

III. Lessons Learned

So what should we make of evidence suggesting one of the FTC’s key informants during its investigation of Qualcomm didn’t believe the arguments it was selling?  The exposure of Apple’s internal documents is a sobering reminder that the FTC is not immune from the risk of being hoodwinked by rent-seeking antitrust plaintiffs.  That a firm might try to persuade antitrust agencies to investigate and sue its rivals is nothing new (see, e.g., William J. Baumol & Janusz A. Ordover, Use of Antitrust to Subvert Competition, 28 J.L. & Econ. 247 (1985)), but it is a particularly high-stakes game in modern technology markets. 

Lesson number one: Requiring proof of actual anticompetitive effects rather than relying upon a model that is not robust to market realities is an important safeguard to ensure that Section 2 protects competition and not merely an individual competitor.  Yet the agencies staked their cases in AT&T/Time Warner and FTC v. Qualcomm on bargaining models that fell short of proving anticompetitive effects.  An agency convinced by one firm or firms to pursue an action against a rival for conduct that does not actually harm competition could have a significant and lasting anticompetitive effect on the market.  Modern antitrust analysis requires plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.  That safeguard is particularly important when an agency is pursuing an enforcement action against a company in a market where the risks of regulatory capture and false positives are high.  With calls to move away from the consumer welfare standard—which would exacerbate both the risks and consequences of false positives—it is imperative to embrace rather than reject the requirement of proof in monopolization cases. (See Elyse Dorsey, Jan Rybnicek & Joshua D. Wright, Hipster Antitrust Meets Public Choice Economics: The Consumer Welfare Standard, Rule of Law, and Rent-Seeking, CPI Antitrust Chron. (Apr. 2018); see also Joshua D. Wright et al., Requiem For a Paradox: The Dubious Rise and Inevitable Fall of Hipster Antitrust, 51 Ariz. St. L.J. 293 (2019).) The DOJ’s Statement of Interest is a reminder of this basic tenet.

Lesson number two: Antitrust should have a limited role in adjudicating disputes arising between sophisticated parties in bilateral negotiations of patent licenses.  Overzealous claims of harm from patent holdup and anticompetitive licensing can deter the lawful exercise of patent rights and good faith modifications of existing contracts, and, more generally, can interfere with the outcome of arm’s-length negotiations (See Bruce H. Kobayashi & Joshua D. Wright, The Limits of Antitrust and Patent Holdup: A Reply To Cary et al., 78 Antitrust L.J. 701 (2012)). It is also a difficult task for an antitrust regulator or court to identify and distinguish anticompetitive patent licenses from neutral or welfare-increasing behavior.  An antitrust agency’s willingness to cast the shadow of antitrust remedies over one side of the bargaining table inevitably places the agency in the position of encouraging further rent-seeking by licensees seeking similar intervention on their behalf.

Finally, when antitrust agencies intervene in patent holdup and licensing disputes on behalf of one party to a patent licensing agreement, they risk transforming themselves into price regulators.  Apple’s fundamental complaint in its own litigation, and the core of the similar FTC allegation against Qualcomm, is that royalty rates are too high.  The risks to competition and consumers of antitrust courts and agencies playing the role of central planner for the innovation economy are well known, and are at their peak when the antitrust enterprise is used to set prices, mandate a particular organizational structure for the firm, or intervene in garden-variety contract and patent disputes in high-tech markets.

The current Commission did not vote out the Complaint now being litigated in the Northern District of California.  That case was initiated by an entirely different set of Commissioners.  It is difficult to imagine the new Commissioners having no reaction to the Apple documents, and in particular to the perception they create that Apple was successful in manipulating the agency in its strategy to bolster its negotiating position against Qualcomm.  A thorough reevaluation of the evidence here might well lead the current Commission to reconsider the merits of the agency’s position in the litigation and whether continuing is in the public interest.  The Apple documents, should they enter the record, may affect significantly the Ninth Circuit’s or Supreme Court’s understanding of the FTC’s theory of harm.

[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.

Apple v. Qualcomm settles — and the DOJ takes notice

The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.

That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting Judge Koh to use caution in fashioning a remedy in the case should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).

But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).

Footnote 6 of the DOJ’s Statement reads:

Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal[] that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).

Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.

The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple

Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:

Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).

The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm’s.

The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:

Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).

That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.

Even more troubling is what it means for the strength of the FTC’s case

But the evidence offered in Qualcomm’s opening argument points to another, more troubling implication, as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC’s decision to bring an action in the first place. It seems reasonable to assume that Apple used these “manipulated” agreements to help make its case.

But what is most troubling is the extent to which it appears to have worked.

The FTC’s action against Qualcomm rested in substantial part on arguments that Qualcomm’s rates were too high (even though the FTC constructed its case without coming right out and saying this, at least until trial). In its opening statement the FTC said:

Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.

* * *

Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).

The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?

Because they were discussed under seal, we don’t know the precise agreements that the FTC’s expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either InterDigital, Nokia, or Ericsson. We also know that Mr. Lasinski’s valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court’s decision in Microsoft v. Motorola in 2013.

A curiously small number of agreements

Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licenses taken by only two companies: Apple and Samsung.

Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Apple and Samsung are not the only companies that make CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis?

At the same time, while InterDigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?

All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.

Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
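
A purely illustrative sketch of why this matters follows. Every figure below is invented and bears no relation to the sealed agreements; the point is only that a benchmark computed from a handful of agreements drawn disproportionately from the most aggressive, lowest-paying licensees will understate the benchmark implied by the broader population of licenses.

```python
# Hypothetical illustration of the sampling concern: a "comparables" benchmark
# built from a small, skewed subset of license agreements versus the benchmark
# implied by the full (invented) population of licenses.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of several hundred SEP license royalty rates (%).
population_rates = rng.normal(loc=3.0, scale=0.8, size=400).clip(min=0.5)

# A small set of eight "comparable" agreements drawn from the cheapest quartile
# of that population -- standing in for licenses negotiated by the most
# aggressive, litigation-prone licensees.
cheapest_quartile = np.sort(population_rates)[: len(population_rates) // 4]
comparables = rng.choice(cheapest_quartile, size=8, replace=False)

print(f"population mean rate:    {population_rates.mean():.2f}%")
print(f"8-agreement 'benchmark': {comparables.mean():.2f}%")
# Any royalty judged against the second number will look inflated even if it is
# unremarkable relative to the industry as a whole.
```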

A curiously crabbed selection of licensors

And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.

One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. In its internal documents, Apple dismissed the strength of both Nokia’s and InterDigital’s portfolios, while its view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’.

The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.

And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).

A curiously circumscribed timeframe

That the FTC’s expert used a 2013 cut-off date is also questionable. According to Lasinski, he chose to use agreements after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its “intrinsic” patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.

According to the FTC’s expert,

prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….

Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.

The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. To be sure, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.

But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining  power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013. 

At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated

Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ Chief Economist Aviv Nevo, looked at whether the FTC’s theory of anticompetitive harm was borne out by the data by examining Qualcomm’s royalty rates across time periods and standards, using a much larger set of agreements. Although his remit was different from Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:

[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….

[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.

So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.

Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).
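
As a rough illustration of the kind of comparison Dr. Nevo describes, the sketch below tests whether mean royalty rates differ between the alleged market-power period and other periods. The royalty data are entirely hypothetical, since the actual agreements are confidential; this is not his analysis, only the shape of it.

```python
# Hypothetical difference-in-means check of the theory's prediction that rates
# should be higher during the alleged market-power period.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical royalty rates (%) on agreements signed during the alleged
# market-power period versus all other periods.
rates_power_period = rng.normal(loc=3.2, scale=0.5, size=60)
rates_other_periods = rng.normal(loc=3.2, scale=0.5, size=60)

diff = rates_power_period.mean() - rates_other_periods.mean()
t_stat, p_value = stats.ttest_ind(rates_power_period, rates_other_periods,
                                  equal_var=False)

print(f"difference in mean rates: {diff:.3f} percentage points")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
# The theory predicts systematically higher rates during the alleged
# market-power period; a difference indistinguishable from zero cuts against
# that prediction, which is what Dr. Nevo reported finding.
```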

The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Dr. Nevo’s analysis offers some reason to think that it is not.

All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski’s results, then, would imply that Qualcomm’s royalties were “too high” not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski’s analysis may have been flawed, and that it systematically under-valued Qualcomm’s patents.

Connecting the dots and calling into question the strength of the FTC’s case

In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:

Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.

* * *

Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.

Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.

* * *

The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.

It is possible, of course, that Lasinski’s methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data were flawed.

It is impossible from the publicly available evidence to definitively draw this conclusion, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski’s data certainly increases the plausibility of this conclusion: We now know, following Qualcomm’s opening statement in Apple v. Qualcomm, that the stilted set of comparable agreements studied by the FTC’s expert also happens to be tailor-made to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.

What is most concerning is that the FTC may have built up its case on such questionable evidence, either by intentionally cherry picking the evidence upon which it relied, or inadvertently because it rested on such a needlessly limited range of data, some of which may have been tainted.

Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.

I posted this originally on my own blog, but decided to cross-post here since Thom and I have been blogging on this topic.

“The U.S. stock market is having another solid year. You wouldn’t know it by looking at the shares of companies that manage money.”

That’s the lead from Charles Stein on Bloomberg’s Markets’ page today. Stein goes on to offer three possible explanations: 1) a weary bull market, 2) a move toward more active stock-picking by individual investors, and 3) increasing pressure on fees.

So what has any of that to do with the common ownership issue? A few things.

First, it shows that large institutional investors must not be very good at harvesting the benefits of the non-competitive behavior they encourage among the firms they invest in–if you believe they actually do that in the first place. In other words, if you believe common ownership is a problem because CEOs are enriching institutional investors by softening competition, you must admit they’re doing a pretty lousy job of capturing that value.

Second, and more importantly–as well as more relevant–the pressure on fees has led money managers to emphasize low-cost passive index funds. Indeed, among the firms doing well, according to the article, is BlackRock, whose index-tracking iShares exchange-traded fund business “won $20 billion.” In an aggressive move, Fidelity has introduced a total of four zero-fee index funds as a way to draw fee-conscious investors. These index-tracking funds are exactly the type of inter-industry diversified funds that negate any incentive for competition softening in any one industry.

Finally, this also illustrates the cost to the investing public of the limits on common ownership proposed by the likes of Einer Elhauge, Eric Posner, and Glen Weyl. Were these types of proposals in place, investment managers could not offer diversified index funds that include more than one firm’s stock from any industry with even a moderate level of market concentration. Given that competitive forces are pushing investment companies to increase their offerings of such low-cost index funds, any regulatory proposal that precludes those possibilities is sure to harm the investing public.
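
To make the mechanics of that constraint concrete, here is a toy sketch. The tickers, industries, and concentration figures are hypothetical, and the cutoff is an illustrative stand-in for whatever threshold such a proposal might use; the point is simply that a fund screened under this kind of rule must drop every additional firm from each concentrated industry, which is incompatible with broad index tracking.

```python
# Toy illustration of a one-firm-per-concentrated-industry screen.
CONCENTRATION_THRESHOLD_HHI = 1500  # illustrative cutoff, not from any actual proposal

industry_hhi = {"airlines": 3100, "banks": 1800, "retail": 900}  # hypothetical

index_holdings = [
    ("AIR-A", "airlines"), ("AIR-B", "airlines"),
    ("BNK-A", "banks"), ("BNK-B", "banks"), ("BNK-C", "banks"),
    ("RET-A", "retail"), ("RET-B", "retail"),
]

allowed, seen_concentrated = [], set()
for ticker, industry in index_holdings:
    if industry_hhi[industry] >= CONCENTRATION_THRESHOLD_HHI:
        if industry in seen_concentrated:
            continue  # a second firm from a concentrated industry must be dropped
        seen_concentrated.add(industry)
    allowed.append(ticker)

print(allowed)  # ['AIR-A', 'BNK-A', 'RET-A', 'RET-B'] -- no longer a broad index
```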

Just one more piece of real evidence that common ownership is not only not a problem, but that the proposed “fixes” are.

As has been rumored in the press for a few weeks, today Comcast announced it is considering making a renewed bid for a large chunk of Twenty-First Century Fox’s (Fox) assets. Fox is in the process of a significant reorganization, entailing primarily the sale of its international and non-television assets. Fox itself will continue, but with a focus on its US television business.

In December of last year, Fox agreed to sell these assets to Disney, in the process rejecting a bid from Comcast. Comcast’s initial bid was some 16% higher than Disney’s, although there were other differences in the proposed deals, as well.

In April of this year, Disney and Fox filed a proxy statement with the SEC explaining the basis for the board’s decision, including predominantly the assertion that the Comcast bid (NB: Comcast is identified as “Party B” in that document) presented greater regulatory (antitrust) risk.

As noted, today Comcast announced it is in “advanced stages” of preparing another unsolicited bid. This time,

Any offer for Fox would be all-cash and at a premium to the value of the current all-share offer from Disney. The structure and terms of any offer by Comcast, including with respect to both the spin-off of “New Fox” and the regulatory risk provisions and the related termination fee, would be at least as favorable to Fox shareholders as the Disney offer.

Because, as we now know (since the April proxy filing), Fox’s board rejected Comcast’s earlier offer largely on the basis of the board’s assessment of the antitrust risk it presented, and because that risk assessment (and the difference between an all-cash and all-share offer) would now be the primary distinguishing feature between Comcast’s and Disney’s bids, it is worth evaluating that conclusion as Fox and its shareholders consider Comcast’s new bid.

In short: There is no basis for ascribing a greater antitrust risk to Comcast’s purchase of Fox’s assets than to Disney’s.

Summary of the Proposed Deal

Post-merger, Fox will continue to own Fox News Channel, Fox Business Network, Fox Broadcasting Company, Fox Sports, Fox Television Stations Group, and sports cable networks FS1, FS2, Fox Deportes, and Big Ten Network.

The deal would transfer to Comcast (or Disney) the following:

  • Primarily, international assets, including Fox International (cable channels in Latin America, the EU, and Asia), Star India (the largest cable and broadcast network in India), and Fox’s 39% interest in Sky (Europe’s largest pay TV service).
  • Fox’s film properties, including 20th Century Fox, Fox Searchlight, and Fox Animation. These would bring along with them studios in Sydney and Los Angeles, but would not include the Fox Los Angeles backlot. Like the rest of the US film industry, the majority of Fox’s film revenue is earned overseas.
  • FX cable channels, National Geographic cable channels (of which Fox currently owns 75%), and twenty-two regional sports networks (RSNs). In terms of relative demand for the two cable networks, FX is a popular basic cable channel, but fairly far down the list of most-watched channels, while National Geographic doesn’t even crack the top 50. Among the RSNs, only one geographic overlap exists with Comcast’s current RSNs, and most of the Fox RSNs (at least 14 of the 22) are not in areas where Comcast has a substantial service presence.
  • The deal would also entail a shift in the companies’ ownership interests in Hulu. Hulu is currently owned in equal 30% shares by Disney, Comcast, and Fox, with the remaining, non-voting 10% owned by Time Warner. Either Comcast or Disney would hold a controlling 60% share of Hulu following the deal with Fox.

Analysis of the Antitrust Risk of a Comcast/Fox Merger

According to the joint proxy statement, Fox’s board discounted Comcast’s original $34.36/share offer — but not the $28.00/share offer from Disney — because of “the level of regulatory issues posed and the proposed risk allocation arrangements.” In significant part on this basis, the Fox board determined Disney’s offer to be superior.

The claim that a merger with Comcast poses sufficiently greater antitrust risk than a purchase by Disney to warrant its rejection out of hand is unsupportable, however. From an antitrust perspective, it is even plausible that a Comcast acquisition of the Fox assets would be on more solid ground than would be a Disney acquisition.

Vertical Mergers Generally Present Less Antitrust Risk

A merger between Comcast and Fox would be predominantly vertical, while a merger between Disney and Fox, in contrast, would be primarily horizontal. Generally speaking, it is easier to get antitrust approval for vertical mergers than it is for horizontal mergers. As Bruce Hoffman, Director of the FTC’s Bureau of Competition, noted earlier this year:

[V]ertical merger enforcement is still a small part of our merger workload….

There is a strong theoretical basis for horizontal enforcement because economic models predict at least nominal potential for anticompetitive effects due to elimination of horizontal competition between substitutes.

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

On its face, and consistent with the last quarter century of merger enforcement by the DOJ and FTC, the Comcast acquisition would be less likely to trigger antitrust scrutiny, and the Disney acquisition raises more straightforward antitrust issues.

This is true even in light of the fact that the DOJ decided to challenge the AT&T-Time Warner (AT&T/TWX) merger.

The AT&T/TWX merger is a single data point in a long history of successful vertical mergers that attracted little scrutiny, and no litigation, by antitrust enforcers (although several have been approved subject to consent orders).

Just because the DOJ challenged that one merger does not mean that antitrust enforcers generally, nor even the DOJ in particular, have suddenly become more hostile to vertical mergers.

Of particular importance to the conclusion that the AT&T/TWX merger challenge is of minimal relevance to predicting the DOJ’s reception in this case, the theory of harm argued by the DOJ in that case is far from well-accepted, while the potential theory that could underpin a challenge to a Disney/Fox merger is. As Bruce Hoffman further remarks:

I am skeptical of arguments that vertical mergers cause harm due to an increased bargaining skill; this is likely not an anticompetitive effect because it does not flow from a reduction in competition. I would contrast that to the elimination of competition in a horizontal merger that leads to an increase in bargaining leverage that could raise price or reduce output.

The Relatively Lower Risk of a Vertical Merger Challenge Hasn’t Changed Following the DOJ’s AT&T/Time Warner Challenge

Judge Leon is expected to rule on the AT&T/TWX merger in a matter of weeks. The theory underpinning the DOJ’s challenge is problematic (to say the least), and the case it presented was decidedly weak. But no litigated legal outcome is ever certain, and the court could, of course, rule against the merger nevertheless.

Yet even if the court does rule against the AT&T/TWX merger, this hardly suggests that a Comcast/Fox deal would create a greater antitrust risk than would a Disney/Fox merger.

A single successful challenge to a vertical merger — what would be, in fact, the first successful vertical merger challenge in four decades — doesn’t mean that the courts are becoming hostile to vertical mergers any more than the DOJ’s challenge means that vertical mergers suddenly entail heightened enforcement risk. Rather, it would simply mean that, given the specific facts of the case, the DOJ was able to make out its prima facie case, and that the defendants were unable to rebut it.

A ruling for the DOJ in the AT&T/TWX merger challenge would be rooted in a highly fact-specific analysis that could have no direct bearing on future cases.

In the AT&T/TWX case, the court’s decision will turn on its assessment of the DOJ’s argument that the merged firm could raise subscriber prices by a few pennies per subscriber. But as AT&T’s attorney aptly pointed out at trial (echoing the testimony of AT&T’s economist, Dennis Carlton):

The government’s modeled price increase is so negligible that, given the inherent uncertainty in that predictive exercise, it is not meaningfully distinguishable from zero.

Even minor deviations from the facts or the assumptions used in the AT&T/TWX case could completely upend the analysis — and there are important differences between the AT&T/TWX merger and a Comcast/Fox merger. True, both would be largely vertical mergers that would bring together programming and distribution assets in the home video market. But the foreclosure effects touted by the DOJ in the AT&T/TWX merger are seemingly either substantially smaller or entirely non-existent in the proposed Comcast/Fox merger.

Most importantly, the content at issue in AT&T/TWX is at least arguably (and, in fact, argued by the DOJ) “must have” programming — Time Warner’s premium HBO channels and its CNN news programming, in particular, were central to the DOJ’s foreclosure argument. By contrast, the programming that Comcast would pick up as a result of the proposed merger with Fox — FX (a popular, but non-essential, basic cable channel) and National Geographic channels (which attract a tiny fraction of cable viewing) — would be extremely unlikely to merit that designation.

Moreover, the DOJ made much of the fact that AT&T, through DirecTV, has a national distribution footprint. As a result, its analysis depended on the company’s potential ability to attract new subscribers decamping from competing providers from whom it withholds access to Time Warner content in every market in the country. Comcast, on the other hand, provides cable service in only about 35% of the country. This significantly limits its ability to credibly threaten competitors because its ability to recoup lost licensing fees by picking up new subscribers is so much more limited.

And while some RSNs offer highly prized live sports programming, the mismatch between Comcast’s footprint and the Fox RSNs (only about 8 of the 22 Fox RSNs are in Comcast service areas) severely limits any ability or incentive the company would have to leverage that content for higher fees. Again, to the extent that RSN programming is not “must-have,” and to the extent there is not overlap between the RSN’s geographic area and Comcast’s service area, the situation is manifestly not the same as the one at issue in the AT&T/TWX merger.

In sum, a ruling in favor of the DOJ in the AT&T/TWX case would be far from decisive in predicting how the agency and the courts would assess any potential concerns arising from Comcast’s ownership of Fox’s assets.

A Comcast/Fox Deal May Entail Lower Antitrust Risk than a Disney/Fox Merger

As discussed below, concerns about antitrust enforcement risk from a Comcast/Fox merger are likely overstated. Perhaps more importantly, however, to the extent these concerns are legitimate, they apply at least as much to a Disney/Fox merger. There is, at minimum, no basis for assuming a Comcast deal would present any greater regulatory risk.

The Antitrust Risk of a Comcast/Fox Merger Is Likely Overstated

The primary theory upon which antitrust enforcers could conceivably base a Comcast/Fox merger challenge would be a vertical foreclosure theory. Importantly, such a challenge would have to be based on the incremental effect of adding the Fox assets to Comcast, and not on the basis of its existing assets. Thus, for example, antitrust enforcers would not be able to base a merger challenge on the possibility that Comcast could leverage NBC content it currently owns to extract higher fees from competitors. Rather, only if the combination of NBC programming with additional content from Fox could create a new antitrust risk would a case be tenable.

Enforcers would be unlikely to view the addition of FX and National Geographic to the portfolio of programming content Comcast currently owns as sufficient to raise concerns that the merger would give Comcast anticompetitive bargaining power or the ability to foreclose access to its content.

Although it is even less likely, enforcers could be concerned with the (horizontal) addition of 20th Century Fox filmed entertainment to Universal’s existing film production and distribution. But the theatrical film market is undeniably competitive, with the largest studio by revenue (Disney) holding only 22% of the market last year. The combination of 20th Century Fox with Universal would still result in a market share of only around 25% based on 2017 revenues (and, depending on the year, would not even hold the industry’s largest share).

There is also little reason to think that a Comcast controlling interest in Hulu would attract problematic antitrust attention. Comcast has already demonstrated an interest in diversifying its revenue across cable subscriptions and licensing, broadband subscriptions, and licensing to OVDs, as evidenced by its recent deal to offer Netflix as part of its Xfinity packages. Hulu likely presents just one more avenue for pursuing this same diversification strategy. And Universal has a history (see, e.g., this, this, and this) of very broad licensing across cable providers, cable networks, OVDs, and the like.

In the case of Hulu, moreover, the fact that Comcast is vertically integrated in broadband as well as cable service likely reduces the anticompetitive risk because more-attractive OVD content has the potential to increase demand for Comcast’s broadband service. Broadband offers larger margins (and is growing more rapidly) than cable, and it’s quite possible that any loss in Comcast’s cable subscriber revenue from Hulu’s success would be more than offset by gains in its content licensing and broadband subscription revenue. The same, of course, goes for Comcast’s incentives to license content to OVD competitors like Netflix: Comcast plausibly gains broadband subscription revenue from heightened consumer demand for Netflix, and this at least partially offsets any possible harm to Hulu from Netflix’s success.

At the same time, especially relative to Netflix’s vast library of original programming (an expected $8 billion worth in 2018 alone) and content licensed from other sources, the additional content Comcast would gain from a merger with Fox is not likely to appreciably increase its bargaining leverage or its ability to foreclose Netflix’s access to its content.     

Finally, Comcast’s ownership of Fox’s RSNs could, as noted, raise antitrust enforcers’ eyebrows. Enforcers could be concerned that Comcast would condition competitors’ access to RSN programming on higher licensing fees or prioritization of its NBC Sports channels.

While this is indeed a potential risk, it is hardly a foregone conclusion that it would draw an enforcement action. Among other things, NBC is far from the market leader, and improving its competitive position relative to ESPN could be viewed as a benefit of the deal. In any case, potential problems arising from ownership of the RSNs could easily be dealt with through divestiture or behavioral conditions; they are extremely unlikely to lead to an outright merger challenge.

The Antitrust Risk of a Disney Deal May Be Greater than Expected

While a Comcast/Fox deal is not free of antitrust enforcement risk, it certainly doesn’t entail sufficient risk to deem the deal dead on arrival. Moreover, it may entail less antitrust enforcement risk than would a Disney/Fox tie-up.

Yet, curiously, the joint proxy statement doesn’t mention any antitrust risk from the Disney deal at all and seems to suggest that the Fox board applied no risk discount in evaluating Disney’s bid.

Disney — already the market leader in the filmed entertainment industry — would acquire an even larger share of box office proceeds (and associated licensing revenues) through acquisition of Fox’s film properties. Perhaps even more important, the deal would bring the movie rights to almost all of the Marvel Universe within Disney’s ambit.

While, as suggested above, even that combination probably wouldn’t trigger any sort of market power presumption, it would certainly create an entity with a larger share of the market and stronger control of the industry’s most valuable franchises than would a Comcast/Fox deal.

Another, relatively larger complication for a Disney/Fox merger arises from the prospect of combining Fox’s RSNs with ESPN. Whatever ability or incentive either company would have to engage in anticompetitive conduct surrounding sports programming, that risk would seem to be more significant for Disney, the undisputed market leader. At the same time, although ESPN remains powerful, demand for it on cable has been flagging. Disney could well see the ability to bundle ESPN with regional sports content as a way to prop up subscription revenues for ESPN — a practice, in fact, that it has employed successfully in the past.

Finally, it must be noted that licensing of consumer products is an even bigger driver of revenue from filmed entertainment than is theatrical release. No other company comes close to Disney in this space.

Disney is the world’s largest licensor, earning almost $57 billion in 2016 from licensing properties like Star Wars and Marvel Comics. Universal is in a distant 7th place, with 2016 licensing revenue of about $6 billion. Adding Fox’s (admittedly relatively small) licensing business would enhance Disney’s substantial lead (even the number two global licensor, Meredith, earned less than half of Disney’s licensing revenue in 2016). Again, this is unlikely to be a significant concern for antitrust enforcers, but it is notable that, to the extent it might be an issue, it is one that applies to Disney and not Comcast.

Conclusion

Although I hope to address these issues in greater detail in the future, for now the preliminary assessment is clear: There is no legitimate basis for ascribing a greater antitrust risk to a Comcast/Fox deal than to a Disney/Fox deal.

As Thom previously posted, he and I have a new paper explaining The Case for Doing Nothing About Common Ownership of Small Stakes in Competing Firms. Our paper is a response to cries from the likes of Einer Elhauge and of Eric Posner, Fiona Scott Morton, and Glen Weyl, who have called for various types of antitrust action to rein in what they claim is an “economic blockbuster” and “the major new antitrust challenge of our time,” respectively. This is the first in a series of posts that will unpack some of the issues and arguments we raise in our paper.

At issue is the growth in the incidence of common ownership across firms within various industries. In particular, institutional investors with broad portfolios frequently report owning small stakes in a number of firms within a given industry. Although small, these stakes may still represent large block holdings relative to other investors. This intra-industry diversification, critics claim, changes the managerial objectives of corporate executives from aggressively competing to increase their own firm’s profits to tacitly colluding to increase industry-level profits instead. The reason for this change is that competition by one firm comes at the cost of profits at other firms in the industry. If investors own shares across firms, then any competitive gains in one firm’s stock are offset by competitive losses in the stocks of other firms in the investor’s portfolio. If one assumes corporate executives aim to maximize total value for their largest shareholders, then managers would have an incentive to soften competition against firms with which they share common ownership. Or so the story goes (more on that in a later post).

Elhauge and Posner, et al., draw their motivation for new antitrust offenses from a handful of papers that purport to establish an empirical link between the degree of common ownership among competing firms and various measures of softened competitive behavior, including airline prices, banking fees, executive compensation, and even corporate disclosure patterns. The paper of most note, by José Azar, Martin Schmalz, and Isabel Tecu and forthcoming in the Journal of Finance, claims to identify a causal link between the degree of common ownership among airlines competing on a given route and the fares charged for flights on that route.

Measuring common ownership with MHHI

Azar, et al.’s airline paper uses a metric of industry concentration called a Modified Herfindahl–Hirschman Index, or MHHI, to measure the degree of industry concentration taking into account the cross-ownership of investors’ stakes in competing firms. The original Herfindahl–Hirschman Index (HHI) has long been used as a measure of industry concentration, debuting in the Department of Justice’s Horizontal Merger Guidelines in 1982. The HHI is calculated by squaring the market share of each firm in the industry and summing the resulting numbers.
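
To make the arithmetic concrete, here is a minimal sketch in Python (purely illustrative, with made-up market shares) of the HHI calculation just described:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percentage points)."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical example: four firms with 40%, 30%, 20%, and 10% of the market.
print(hhi([40, 30, 20, 10]))  # 1600 + 900 + 400 + 100 = 3000
```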

The MHHI is rather more complicated. MHHI is composed of two parts: the HHI measuring product market concentration and the MHHI_Delta measuring the additional concentration due to common ownership. We offer a step-by-step description of the calculations and their economic rationale in an appendix to our paper. For this post, I’ll try to distill that down. The MHHI_Delta essentially has three components, each of which is measured relative to every possible competitive pairing in the market as follows:

  1. A measure of the degree of common ownership between Company A and Company -A (Not A). This is calculated by multiplying the percentage of Company A shares owned by each Investor I with the percentage of shares Investor I owns in Company -A, then summing those values across all investors in Company A. As this value increases, MHHI_Delta goes up.
  2. A measure of the degree of ownership concentration in Company A, calculated by squaring the percentage of shares owned by each Investor I and summing those numbers across investors. As this value increases, MHHI_Delta goes down.
  3. A measure of the degree of product market power exerted by Company A and Company -A, calculated by multiplying the market shares of the two firms. As this value increases, MHHI_Delta goes up.

This process is repeated and aggregated first for every pairing of Company A and each competing Company -A, then repeated again for every other company in the market relative to its competitors (e.g., Companies B and -B, Companies C and -C, etc.). Mathematically, MHHI_Delta takes the form:

$$\text{MHHI}\Delta \;=\; \sum_{A}\,\sum_{-A \neq A} s_{A}\, s_{-A}\, \frac{\sum_{I}\beta_{I,A}\,\beta_{I,-A}}{\sum_{I}\beta_{I,A}^{2}}$$

where the s terms represent the firm market shares of, and the β terms represent the ownership shares of Investor I in, the respective companies A and -A.

As the relative concentration of cross-owning investors to all investors in Company A increases (i.e., the ratio on the right increases), managers are assumed to be more likely to soften competition with that competitor. As those two firms control more of the market, managers’ ability to tacitly collude and increase joint profits is assumed to be higher. Consequently, the empirical research assumes that as MHHI_Delta increases, we should observe less competitive behavior.
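
For readers who prefer code to notation, the rough Python sketch below implements the pairwise calculation described above, using made-up ownership data. It follows the simplified description in this post (treating all stakes as simple ownership shares) and is not the authors’ actual implementation.

```python
import numpy as np

def mhhi_delta(shares, beta):
    """
    shares: length-N sequence of firm market shares (in percentage points).
    beta:   I x N array; beta[i, j] is investor i's ownership share of firm j.
    For each ordered pair of distinct firms (A, -A), weight the product of their
    market shares by common ownership (sum of beta_iA * beta_i,-A) divided by
    ownership concentration in A (sum of beta_iA squared), then sum the terms.
    """
    shares = np.asarray(shares, dtype=float)
    beta = np.asarray(beta, dtype=float)
    total = 0.0
    for a in range(len(shares)):
        concentration_a = np.sum(beta[:, a] ** 2)               # component 2
        for b in range(len(shares)):
            if b == a:
                continue
            common = np.sum(beta[:, a] * beta[:, b])             # component 1
            total += shares[a] * shares[b] * common / concentration_a  # weighted by component 3
    return total

# Hypothetical example: two firms splitting the market 50/50. Two diversified
# investors each own 10% of both firms; the remaining 80% of each firm is held
# by a single undiversified blockholder.
shares = [50, 50]
beta = [
    [0.10, 0.10],
    [0.10, 0.10],
    [0.80, 0.00],
    [0.00, 0.80],
]
print(round(mhhi_delta(shares, beta), 1))  # roughly 151.5, added on top of an HHI of 5000
```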

And indeed that is the “blockbuster” evidence giving rise to Elhauge’s and Posner, et al.’s arguments. For example, Azar, et al., calculate HHI and MHHI_Delta for every US airline market (defined either as city pairs or departure-destination pairs) for each quarter of the 14-year time period in their study. They then regress ticket prices for each route against the HHI and the MHHI_Delta for that route, controlling for a number of other potential factors. They find that airfare prices are 3% to 7% higher due to common ownership. Other papers using the same or similar measures of common ownership concentration have likewise identified positive correlations between MHHI_Delta and their respective measures of anti-competitive behavior.
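
A stripped-down sketch of that kind of regression is below; the column names, the data file, and the use of only route and quarter fixed effects (standing in for the paper’s much richer set of controls) are all hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical route-quarter panel with columns: log_fare, hhi, mhhi_delta,
# route_id, quarter. A sketch only; not the authors' actual specification.
df = pd.read_csv("airline_routes.csv")

model = smf.ols(
    "log_fare ~ hhi + mhhi_delta + C(route_id) + C(quarter)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["route_id"]})

# The coefficient on mhhi_delta is the quantity this literature interprets as
# the effect of common ownership on fares.
print(model.params[["hhi", "mhhi_delta"]])
```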

Problems with the problem and with the measure

We argue that both the theoretical argument underlying the empirical research and the empirical research itself suffer from some serious flaws. On the theoretical side, we have two concerns. First, we argue that there is a tremendous leap of faith (if not logic) in the idea that corporate executives would forgo their own self-interest and the interests of the vast majority of shareholders and soften competition simply because a small number of small stakeholders are intra-industry diversified. Second, we argue that even if managers were so inclined, it clearly is not the case that softening competition would necessarily be desirable for institutional investors that are both intra- and inter-industry diversified, since supra-competitive pricing to increase profits in one industry would decrease profits in related industries that may also be in the investors’ portfolios.

On the empirical side, we have concerns both with the data used to calculate the MHHI_Deltas and with the nature of the MHHI_Delta itself. First, the data on institutional investors’ holdings are taken from Schedule 13 filings, which report aggregate holdings across all the institutional investor’s funds. Using these data masks the actual incentives of the institutional investors with respect to investments in any individual company or industry. Second, the construction of the MHHI_Delta suffers from serious endogeneity concerns, both in investors’ shareholdings and in market shares. Finally, the MHHI_Delta, while seemingly intuitive, is an empirical unknown. While HHI is theoretically bounded in a way that lends to interpretation of its calculated value, the same is not true for MHHI_Delta. This makes any inference or policy based on nominal values of MHHI_Delta completely arbitrary at best.

We’ll expand on each of these concerns in upcoming posts. We will then take on the problems with the policy proposals being offered in response to the common ownership ‘problem.’

One of the hottest antitrust topics of late has been institutional investors’ “common ownership” of minority stakes in competing firms.  Writing in the Harvard Law Review, Einer Elhauge proclaimed that “[a]n economic blockbuster has recently been exposed”—namely, “[a] small group of institutions has acquired large shareholdings in horizontal competitors throughout our economy, causing them to compete less vigorously with each other.”  In the Antitrust Law Journal, Eric Posner, Fiona Scott Morton, and Glen Weyl contended that “the concentration of markets through large institutional investors is the major new antitrust challenge of our time.”  Those same authors took to the pages of the New York Times to argue that “[t]he great, but mostly unknown, antitrust story of our time is the astonishing rise of the institutional investor … and the challenge that it poses to market competition.”

Not surprisingly, these scholars have gone beyond just identifying a potential problem; they have also advocated policy solutions.  Elhauge has called for allowing government enforcers and private parties to use Section 7 of the Clayton Act, the provision primarily used to prevent anticompetitive mergers, to police institutional investors’ ownership of minority positions in competing firms.  Posner et al., concerned “that private litigation or unguided public litigation could cause problems because of the interactive nature of institutional holdings on competition,” have proposed that federal antitrust enforcers adopt an enforcement policy that would encourage institutional investors either to avoid common ownership of firms in concentrated industries or to limit their influence over such firms by refraining from voting their shares.

The position of these scholars is thus (1) that common ownership by institutional investors significantly diminishes competition in concentrated industries, and (2) that additional antitrust intervention—beyond generally applicable rules on, say, hub-and-spoke conspiracies and anticompetitive information exchanges—is appropriate to prevent competitive harm.

Mike Sykuta and I have recently posted a paper taking issue with this two-pronged view.  With respect to the first prong, we contend that there are serious problems with both the theory of competitive harm stemming from institutional investors’ common ownership and the empirical evidence that has been marshalled in support of that theory.  With respect to the second, we argue that even if competition were softened by institutional investors’ common ownership of small minority interests in competing firms, the unintended negative consequences of an antitrust fix would outweigh any benefits from such intervention.

Over the next few days, we plan to unpack some of the key arguments in our paper, The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.  In the meantime, we encourage readers to download the paper and send us any comments.

The paper’s abstract is below the fold.

In a recent long-form article in the New York Times, reporter Noam Scheiber set out to detail some of the ways Uber (and similar companies, but mainly Uber) are engaged in “an extraordinary experiment in behavioral science to subtly entice an independent work force to maximize its growth.”

That characterization seems innocuous enough, but it is apparent early on that Scheiber’s aim is not only to inform but also, if not primarily, to deride these efforts. The title of the piece, in fact, sets the tone:

How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons

Uber and its relationship with its drivers are variously described by Scheiber in the piece as secretive, coercive, manipulative, dominating, and exploitative, among other things. As Scheiber describes his article, it sets out to reveal how

even as Uber talks up its determination to treat drivers more humanely, it is engaged in an extraordinary behind-the-scenes experiment in behavioral science to manipulate them in the service of its corporate growth — an effort whose dimensions became evident in interviews with several dozen current and former Uber officials, drivers and social scientists, as well as a review of behavioral research.

What’s so galling about the piece is that, if you strip away the biased and frequently misguided framing, it presents a truly engaging picture of some of the ways that Uber sets about solving a massively complex optimization problem, abetted by significant agency costs.

So I did. Strip away the detritus, add essential (but omitted) context, and edit the article to fix the anti-Uber bias, the one-sided presentation, the mischaracterizations, and the fundamentally non-economic presentation of what is, at its core, a fascinating illustration of some basic problems (and solutions) from industrial organization economics. (For what it’s worth, Scheiber should know better. After all, “He holds a master’s degree in economics from the University of Oxford, where he was a Rhodes Scholar, and undergraduate degrees in math and economics from Tulane University.”)

In my retelling, the title becomes:

How Uber Uses Innovative Management Tactics to Incentivize Its Drivers

My transformed version of the piece, with critical commentary in the form of tracked changes to the original, is here (pdf).

It’s a long (and, as I said, fundamentally interesting) piece, with cool interactive graphics, well worth the read (well, at least in my retelling, IMHO). Below is just a taste of the edits and commentary I added.

For example, where Scheiber writes:

Uber exists in a kind of legal and ethical purgatory, however. Because its drivers are independent contractors, they lack most of the protections associated with employment. By mastering their workers’ mental circuitry, Uber and the like may be taking the economy back toward a pre-New Deal era when businesses had enormous power over workers and few checks on their ability to exploit it.

With my commentary (here integrated into final form rather than tracked), that paragraph becomes:

Uber operates under a different set of legal constraints, however, also duly enacted and under which millions of workers have profitably worked for decades. Because its drivers are independent contractors, they receive their compensation largely in dollars rather than government-mandated “benefits” that remove some of the voluntariness from employer/worker relationships. And under rules mandating overtime pay, for example, the Uber business model, which is built in part on offering flexible incentives to match supply and demand using prices and compensation, would be next to impossible. It is precisely through appealing to drivers’ self-interest that Uber and the like may be moving the economy forward to a new era when businesses and workers have more flexibility, much to the benefit of all.

Elsewhere, Scheiber’s bias is a bit more subtle, but no less real. Thus, he writes:

As he tried to log off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted.

With my edits and commentary, that paragraph becomes:

As he started the process of logging off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted, but the former was listed first. It’s anyone’s guess whether either characteristic — placement or coloring — had any effect on drivers’ likelihood of clicking one button or the other.

And one last example. Scheiber writes:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, there is another way to think of the logic of forward dispatch: It overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

This pre-emptive hard-wiring can have a huge influence on behavior, said David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably, as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be.

Here’s how I would recast that, and add some much-needed economics:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies — by giving them more income-earning opportunities.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, and seems like another win-win, some critics have tried to paint even this means of satisfying both driver and consumer preferences in a negative light by claiming that the forward dispatch algorithm overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

Tweaks like these put paid to the arguments that Uber is simply trying to abuse its drivers. And yet, critics continue to make such claims:

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

It’s difficult to take seriously claims that Uber “abuses” drivers by setting a default that drivers almost certainly prefer; surely drivers seek out another fare following the last fare more often than they seek out another bathroom break. In any case, the difference between one default and the other is a small change in the number of times drivers might have to push a single button; hardly a huge impediment.

But such claims persist, nevertheless. Setting a trivially different default can have a huge influence on behavior, claims David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably — and to change the subject — as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be. But there are any number of defenses of this practice, from both a driver- and consumer-welfare standpoint. Not least, such disclosure could well create isolated scarcity for a huge range of individual ride requests (as opposed to the general scarcity during a “surge”), leading to longer wait times, the need to adjust prices for consumers on the basis of individual rides, and more intense competition among drivers for the most profitable rides. Given these and other explanations, it is extremely unlikely that the practice is actually aimed at “abusing” drivers.

As they say, read the whole thing!

The antitrust industry never sleeps – it is always hard at work seeking new business practices to scrutinize, eagerly latching on to any novel theory of anticompetitive harm that holds out the prospect of future investigations.  In so doing, antitrust entrepreneurs choose, of course, to ignore Nobel Laureate Ronald Coase’s warning that “[i]f an economist finds something . . . that he does not understand, he looks for a monopoly explanation.  And as in this field we are rather ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on monopoly explanations frequent.”  Ambitious antitrusters also generally appear oblivious to the fact that since antitrust is an administrative system subject to substantial error and transaction costs in application (see here), decision theory counsels that enforcers should proceed with great caution before adopting novel untested theories of competitive harm.

The latest example of this regrettable phenomenon is the popular new theory that institutional investors’ common ownership of minority shares in competing firms may pose serious threats to vigorous market competition (see here, for example).  If such investors’ shareholdings are insufficient to control or substantially influence the strategies employed by the competing firms, what is the precise mechanism by which this occurs?  At the very least, this question should give enforcers pause (and cause them to carefully examine both the theoretical and empirical underpinnings of the common ownership story) before they charge ahead as knights errant seeking to vanquish new financial foes.  Yet it appears that at least some antitrust enforcers have been wasting no time in seeking to factor common ownership concerns into their modes of analysis.  (For example, the European Commission in at least one case presented a modified Herfindahl-Hirschman Index (MHHI) analysis to account for the effects of common shareholding by institutional investors, as part of a statement of objections to a proposed merger, see here.)

A recent draft paper by Bates White economists Daniel P. O’Brien and Keith Waehrer raises major questions about recent much heralded research (reported in three studies dealing with executive compensation, airlines, and banking) that has been cited to raise concerns about common minority shareholdings’ effects on competition.  The draft paper’s abstract argues that the theory underlying these concerns is insufficiently developed, and that there are serious statistical flaws in the empirical work that purports to show a relationship between price and common ownership:

“Recent empirical research purports to show that common ownership by institutional investors harms competition even when all financial holdings are minority interests. This research has received a great deal of attention, leading to both calls for and actual changes in antitrust policy. This paper examines the research on this subject to date and finds that its conclusions regarding the effects of minority shareholdings on competition are not well established. Without prejudging what more rigorous empirical work might show, we conclude that researchers and policy authorities are getting well ahead of themselves in drawing policy conclusions from the research to date. The theory of partial ownership does not yield a specific relationship between price and the MHHI. In addition, the key explanatory variable in the emerging research – the MHHI – is an endogenous measure of concentration that depends on both common ownership and market shares. Factors other than common ownership affect both price and the MHHI, so the relationship between price and the MHHI need not reflect the relationship between price and common ownership. Thus, regressions of price on the MHHI are likely to show a relationship even if common ownership has no actual causal effect on price. The instrumental variable approaches employed in this literature are not sufficient to remedy this issue. We explain these points with reference to the economic theory of partial ownership and suggest avenues for further research.”

In addition to pinpointing deficiencies in existing research, O’Brien and Waehrer also summarize serious negative implications for the financial sector that could stem from an aggressive antitrust pursuit of partial ownership – a new approach that would be at odds with longstanding antitrust practice (footnote citations deleted):

“While it is widely accepted that common ownership can have anticompetitive effects when the owners have control over at least one of the firms they own (a complete merger is a special case), antitrust authorities historically have taken limited interest in common ownership by minority shareholders whose control seems to be limited to voting rights. Thus, if the empirical findings and conclusions in the emerging research are correct and robust, they could have dramatic implications for the antitrust analysis of mergers and acquisitions. The findings could be interpreted to suggest that antitrust authorities should scrutinize not only situations in which a common owner of competing firms control at least one of the entities it owns, but also situations in which all of the common owner’s shareholdings are small minority positions. As [previously] noted, . . . such a policy shift is already occurring.

Institutional investors (e.g., mutual funds) frequently take positions in multiple firms in an industry in order to offer diversified portfolios to retail investors at low transaction costs. A change in antitrust or regulatory policy toward these investments could have significant negative implications for the types of investments currently available to retail investors. In particular, a recent proposal to step up antitrust enforcement in this area would seem to require significant changes to the size or composition of many investment funds that are currently offered.

Given the potential policy implications of this research and the less than obvious connections between small minority ownership interests and anticompetitive price effects, it is important to be particularly confident in the analysis and empirical findings before drawing strong policy conclusions. In our view, this requires a valid empirical test that permits causal inferences about the effects of common ownership on price. In addition, the empirical findings and their interpretation should be consistent with the observed behavior of firms and investors in the economic and legal environments in which they operate.

We find that the airline, banking, and compensation papers [that deal with minority shareholding] fall short of these criteria.”

In sum, at the very least, a substantial amount of further work is called for before significant enforcement resources are directed to common minority shareholder investigations, lest competitively non-problematic investment holdings be chilled.  More generally, the trendy antitrust pursuit of common minority shareholdings threatens to interfere inappropriately in investment decisions of institutional investors and thereby undermine efficiency.  Given the great significance of institutional investment for vibrant capital markets and a growing, dynamic economy, the negative economic welfare consequences of such unwarranted meddling would likely swamp any benefits that might accrue from an occasional meritorious prosecution.  One may hope that the Trump Administration will seriously weigh those potential consequences as it examines the minority shareholding issue, in deciding upon its antitrust policy priorities.