Archives For 5G

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Randy May is president of the Free State Foundation.]

I am pleased to participate in this retrospective symposium regarding Ajit Pai’s tenure as Federal Communications Commission chairman. I have been closely involved in communications law and policy for nearly 45 years, and, as I’ve said several times since Chairman Pai announced his departure, he will leave as one of the most consequential leaders in the agency’s history. And, I hasten to add, consequential in a positive way, because it’s possible to be consequential in a not-so-positive way.

Chairman Pai’s leadership has been impactful in many different areas—for example, spectrum availability, media deregulation, and institutional reform, to name three—but in this tribute I will focus on his efforts regarding “net neutrality.” I use the quotation marks because the term has been used to mean many different things in many different contexts.

Within a year of becoming chairman, and with the support of fellow Republican commissioners Michael O’Rielly and Brendan Carr, Ajit Pai led the agency in reversing the public utility-like “net neutrality” regulation that had been imposed by the Obama FCC in February 2015 in what became known as the Title II Order. The Title II Order had classified internet service providers (ISPs) as “telecommunications carriers” subject to the same common-carrier regulatory regime imposed on monopolistic Ma Bell during most of the 20th century. While “forbearing” from imposing the full array of traditional common-carrier regulatory mandates, the Title II Order also subjected ISPs to sanctions if they violated an amorphous “general conduct standard,” which provided that ISPs could not “unreasonably” interfere with or disadvantage end users or edge providers such as Google and Facebook.

The Restoring Internet Freedom Order (RIF Order), adopted in December 2017, reversed nearly all of the Title II Order’s heavy-handed regulation of ISPs in favor of a light-touch regulatory regime. It was aptly named, because the RIF Order “restored” to internet access regulation the market “freedom” that had mostly prevailed since the turn of the 21st century. It’s worth remembering that, in 1999, in opting not to require that newly emerging cable broadband providers be subjected to a public utility-style regime, Clinton-appointee FCC Chairman William Kennard declared: “[T]he alternative is to go to the telephone world…and just pick up this whole morass of regulation and dump it wholesale on the cable pipe. That is not good for America.” And worth recalling, too, that in 2002, the commission, under the leadership of Chairman Michael Powell, determined that “broadband services should exist in a minimal regulatory environment that promotes investment and innovation in a competitive market.”

It was this reliance on market freedom that was “restored” under Ajit Pai’s leadership. In an appearance at a Free State Foundation event in December 2016, barely a month before becoming chairman, then-Commissioner Pai declared: “It is time to fire up the weed whacker and remove those rules that are holding back investment, innovation, and job creation.” And he added: “Proof of market failure should guide the next commission’s consideration of new regulations.” True to his word, the weed whacker was used to cut down the public utility regime imposed on ISPs by his predecessor. And the lack of proof of any demonstrable market failure was at the core of the RIF Order’s reasoning.

It is true that, as a matter of law, the D.C. Circuit’s affirmance of the Restoring Internet Freedom Order in Mozilla v. FCC rested heavily on the court’s application of Chevron deference, just as it is true that Chevron deference played a central role in the affirmance of the Title II Order and the Brand X decision before that. And it would be disingenuous to suggest that, if a newly reconstituted Biden FCC reinstitutes a public utility-like regulatory regime for ISPs, Chevron deference won’t once again play a central role in the appeal.

But optimist that I am, and focusing not on what possibly may be done as a matter of law, but on what ought to be done as a matter of policy, the “new” FCC should leave in place the RIF Order’s light-touch regulatory regime. In affirming most of the RIF Order in Mozilla, the D.C. Circuit agreed there was substantial evidence supporting the commission’s predictive judgment that reclassification of ISPs “away from public-utility style regulation” was “likely to increase ISP investment and output.” And the court agreed there was substantial evidence to support the commission’s position that such regulation is especially inapt for “a dynamic industry built on technological development and disruption.”

Indeed, the evidence has only become more substantial since the RIF Order’s adoption. Here are only a few factual snippets: According to CTIA, wireless-industry investment for 2019 grew to $29.1 billion, up from $27.4 billion in 2018 and $25.6 billion in 2017. USTelecom estimates that wireline broadband ISPs invested approximately $80 billion in network infrastructure in 2018, up more than $3.1 billion from $76.9 billion in 2017. And total investment most likely increased in 2019 for wireline ISPs, as it did for wireless ISPs. Figures cited in the FCC’s 2020 Broadband Deployment Report indicate that fiber broadband networks reached an additional 6.5 million homes in 2019, a 16% increase over the prior year and the largest single-year increase ever.

Additionally, more Americans have access to broadband internet access services, and at ever higher speeds. According to an April 2020 report by USTelecom, for example, gigabit internet service is available to at least 85% of U.S. homes, compared to only 6% of U.S. homes three-and-a-half years ago. In an October 2020 blog post, Chairman Pai observed that “average download speeds for fixed broadband in the United States have doubled, increasing by over 99%” since the RIF Order was adopted. Ookla Speedtests similarly show significant gains in mobile wireless speeds, climbing to 47/10 Mbps in September 2020 compared to 27/8 Mbps in the first half of 2018.

More evidentiary support could be offered regarding the positive results that followed adoption of the RIF Order, and I assume in the coming year it will be. But the import of abandoning public utility-like regulation of ISPs should be clear.

There is certainly much that Ajit Pai, the first-generation son of immigrants who came to America seeking opportunity in the freedom it offered, accomplished during his tenure. To my way of thinking, “Restoring Internet Freedom” ranks at—or at least near—the top of the list.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, will enable smoother operation of the liability regimes that currently apply to online intermediaries. 

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this Spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase their cost of doing business, leading to smaller networks and higher barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, such as by quickly de-platforming and refusing access to anyone who plausibly engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as that which recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, from other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.

Rolled by Rewheel, Redux

Eric Fruits —  15 December 2020

The Finnish consultancy Rewheel periodically issues reports using mobile wireless pricing information to make claims about which countries’ markets are competitive and which are not. For example, Rewheel claims Canada and Greece have the “least competitive monthly prices” while the United Kingdom and Finland have the most competitive.

Rewheel often claims that the number of carriers operating in a country is the key determinant of wireless pricing. 

Their pricing studies attract a great deal of attention. For example, in February 2019 testimony before the U.S. House Energy and Commerce Committee, Phillip Berenbroick of Public Knowledge asserted: “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.” So, what’s wrong with Rewheel? An earlier post highlights some of the flaws in Rewheel’s methodology. But there’s more.

Rewheel creates fictional market baskets of mobile plans for each provider in a country. Country-by-country comparisons are made by evaluating the lowest-priced basket for each country and the basket with the median price.

Rewheel’s market baskets are hypothetical packages that say nothing about which plans are actually chosen by consumers or what the actual prices paid by those consumers were. This is not a new criticism. In 2014, Pauline Affeldt and Rainer Nitsche called these measures “meaningless”:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr … Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

For example, reporting that the average price of a certain T-Mobile USA smartphone, tablet and home Internet plan is $125 is about as useless as knowing that the average price of a Kroger shopping cart containing a six-pack of Budweiser, a dozen eggs, and a pound of oranges is $10. Is Safeway less “competitive” if the price of the same cart of goods is $12? What could you say about pricing at a store that doesn’t sell Budweiser (e.g., Trader Joe’s)?

Rewheel solves that last problem by doing something bonkers. If a carrier doesn’t offer a plan in one of Rewheel’s baskets, they “assign” the HIGHEST monthly price in the world. 

For example, Rewheel notes that Vodafone India does not offer a fixed wireless broadband plan with at least 1,000GB of data and download speeds of 100 Mbps or faster. So, Rewheel “assigns” Vodafone India the highest price in its dataset. That price belongs to a plan that’s sold in the United Kingdom. It simply makes no sense. 

To return to the supermarket analogy, it would be akin to saying that, if a Trader Joe’s in the United States doesn’t sell six-packs of Budweiser, we should assume the price of Budweiser at Trader Joe’s is equal to the world’s most expensive six-pack of the beer. In reality, Trader Joe’s is known for having relatively low prices. But using the Rewheel approach, the store would be assessed to have some of the highest prices.

Because of Rewheel’s “assignment” of highest monthly prices to many plans, it’s irrelevant whether their analysis is based on a country’s median price or lowest price. The median is skewed, and the lowest actual price may be missing from the dataset.
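
To see how this imputation distorts the headline numbers, here is a minimal sketch using made-up prices (none of these figures come from Rewheel’s dataset): replacing a plan a carrier doesn’t offer with the world’s highest price leaves the country’s cheapest plan untouched but pulls its median upward.

```python
# Illustrative sketch with hypothetical prices -- not Rewheel's actual data.
from statistics import median

# Monthly prices (EUR) for a hypothetical four-carrier market.
# None = the carrier does not sell a plan matching the basket definition.
observed = [20, 25, 30, None]

WORLD_MAX = 150  # highest price for this basket anywhere in the dataset

actual = [p for p in observed if p is not None]
imputed = [p if p is not None else WORLD_MAX for p in observed]

print(min(actual), median(actual))    # 20, 25   -- plans that actually exist
print(min(imputed), median(imputed))  # 20, 27.5 -- after "assigning" the world max
```

The same mechanics drive the Trader Joe’s example above: the store’s actual prices are low, but the imputed basket makes it look expensive.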

Rewheel publishes these reports to support its argument that mobile prices are lower in markets with four carriers than in those with three carriers. But even if we accept Rewheel’s price data as reliable, which it isn’t, their own data show no relationship between the number of carriers and average price.

Notice the huge overlap of observations among markets with three and four carriers. 

Rewheel’s latest report provides a redacted dataset, reporting only data usage and weighted average price for each provider. So, we have to work with what we have. 

A simple regression analysis shows there is no statistically significant difference in the intercept or the slopes for markets with three, four, or five carriers (three carriers is the baseline category in the regression). Based on the data Rewheel provides to the public, the number of carriers in a country has no relationship to wireless prices.
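
For readers who want to see what such a check looks like, here is a minimal sketch using fabricated data in the shape of Rewheel’s redacted release (per-provider data usage and weighted average price, plus the number of carriers in each market); the column names and figures are placeholders, not Rewheel’s.

```python
# Sketch of a regression with carrier-count dummies; the data below are random
# placeholders, generated only to show the structure of the test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "avg_price": rng.uniform(10, 60, n),   # weighted average price (EUR/month)
    "data_usage": rng.uniform(1, 30, n),   # average data usage (GB/month)
    "carriers": rng.choice([3, 4, 5], n),  # number of carriers in the market
})

# Price as a function of usage, with separate intercepts and slopes by carrier
# count; C(carriers) treats the count as categorical, with three carriers as
# the baseline category.
model = smf.ols("avg_price ~ data_usage * C(carriers)", data=df).fit()
print(model.summary())
```

If the coefficients on the carrier-count dummies and their interactions with usage are statistically indistinguishable from zero, the data offer no support for the claim that a fourth or fifth carrier lowers prices.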

Rewheel seems to have a rich dataset of pricing information that could be useful to inform policy. It’s a shame that their topline summaries seem designed to support a predetermined conclusion.

Apple’s legal team will be relieved that “you reap what you sow” is just a proverb. After a long-running antitrust battle against Qualcomm unsurprisingly ended in failure, Apple now faces antitrust accusations of its own (most notably from Epic Games). Somewhat paradoxically, this turn of events might cause Apple to see its previous defeat in a new light. Indeed, the well-established antitrust principles that scuppered Apple’s challenge against Qualcomm will now be the rock upon which it builds its legal defense.

But while Apple’s reversal of fortunes might seem anecdotal, it neatly illustrates a fundamental – and often overlooked – principle of antitrust policy: Antitrust law is about maximizing consumer welfare. Accordingly, the allocation of surplus between two companies is only incidentally relevant to antitrust proceedings, and it certainly is not a goal in and of itself. In other words, antitrust law is not about protecting David from Goliath.

Jockeying over the distribution of surplus

Or at least that is the theory. In practice, however, most antitrust cases are but small parts of much wider battles in which corporations use courts and regulators to jockey for market position and/or tilt the distribution of surplus in their favor. The Microsoft competition suits brought by the DOJ and the European Commission partly originated from complaints, and lobbying, by Sun Microsystems, Novell, and Netscape. Likewise, the European Commission’s case against Google was prompted by accusations from Microsoft and Oracle, among others. The European Intel case was initiated following a complaint by AMD. The list goes on.

The last couple of years have witnessed a proliferation of antitrust suits that are emblematic of this type of power tussle. For instance, Apple has been notoriously industrious in using the court system to lower the royalties that it pays to Qualcomm for LTE chips. One of the focal points of Apple’s discontent was Qualcomm’s policy of basing royalties on the end-price of devices (Qualcomm charged iPhone manufacturers a 5% royalty rate on their handset sales – and Apple received further rebates):

“The whole idea of a percentage of the cost of the phone didn’t make sense to us,” [Apple COO Jeff Williams] said. “It struck at our very core of fairness. At the time we were making something really really different.”

This pricing dispute not only gave rise to high-profile court cases, it also led Apple to lobby Standard Developing Organizations (“SDOs”) in a partly successful attempt to make them amend their patent policies, so as to prevent this type of pricing. 

However, in a highly ironic turn of events, Apple now finds itself on the receiving end of strikingly similar allegations. At issue is the 30% commission that Apple charges for in-app purchases on the iPhone and iPad. These “high” commissions led several companies to lodge complaints with competition authorities (Spotify and Facebook, in the EU) and to file antitrust suits against Apple (Epic Games, in the US).

Of course, these complaints are couched in more sophisticated, and antitrust-relevant, reasoning. But that doesn’t alter the fact that these disputes are ultimately driven by firms trying to tilt the allocation of surplus in their favor (for a more detailed explanation, see Apple and Qualcomm).

Pushback from courts: The Qualcomm case

Against this backdrop, a string of recent cases sends a clear message to would-be plaintiffs: antitrust courts will not be drawn into rent allocation disputes that have no bearing on consumer welfare. 

The best example of this judicial trend is Qualcomm’s victory before the U.S. Court of Appeals for the 9th Circuit. The case centered on the royalties that Qualcomm charged to OEMs for its standard-essential patents (SEPs). The FTC alleged, and the district court found, that Qualcomm had deployed a series of tactics (rebates, refusals to deal, etc.) that enabled it to circumvent its FRAND pledges.

However, the court of appeals was not convinced. It found no consumer harm, nor any recognizable antitrust infringement. Instead, it held that the dispute at hand was essentially a matter of contract law:

To the extent Qualcomm has breached any of its FRAND commitments, a conclusion we need not and do not reach, the remedy for such a breach lies in contract and patent law. 

This is not surprising. From the outset, numerous critics pointed out that the case lay well beyond the narrow confines of antitrust law. The scathing dissenting statement written by Commissioner Maureen Ohlhausen is revealing:

[I]n the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections. 

In reaching its conclusion, the Court notably rejected the notion that SEP royalties should be systematically based upon the “Smallest Saleable Patent Practicing Unit” (or SSPPU):

Even if we accept that the modem chip in a cellphone is the cellphone’s SSPPU, the district court’s analysis is still fundamentally flawed. No court has held that the SSPPU concept is a per se rule for “reasonable royalty” calculations; instead, the concept is used as a tool in jury cases to minimize potential jury confusion when the jury is weighing complex expert testimony about patent damages.

Similarly, it saw no objection to Qualcomm licensing its technology at the OEM level (rather than the component level):

Qualcomm’s rationale for “switching” to OEM-level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,” the second element of the Aspen Skiing exception. Aerotec Int’l, 836 F.3d at 1184 (internal quotation marks and citation omitted). Instead, Qualcomm responded to the change in patent-exhaustion law by choosing the path that was “far more lucrative,” both in the short term and the long term, regardless of any impacts on competition. 

Finally, the Court concluded that a firm breaching its FRAND pledges did not automatically amount to anticompetitive conduct: 

We decline to adopt a theory of antitrust liability that would presume anticompetitive conduct any time a company could not prove that the “fair value” of its SEP portfolios corresponds to the prices the market appears willing to pay for those SEPs in the form of licensing royalty rates.

Taken together, these findings paint a very clear picture. The Qualcomm Court repeatedly rejected the radical idea that US antitrust law should concern itself with the prices charged by monopolists — as opposed to practices that allow firms to illegally acquire or maintain a monopoly position. The words of Learned Hand and those of Antonin Scalia (respectively, below) loom large:

The successful competitor, having been urged to compete, must not be turned upon when he wins. 

And,

To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Other courts (both in the US and abroad) have reached similar conclusions

For instance, a district court in Texas dismissed a suit brought by Continental Automotive Systems (which supplies electronic systems to the automotive industry) against a group of SEP holders. 

Continental challenged the patent holders’ decision to license their technology at the vehicle rather than component level (the allegation is very similar to the FTC’s complaint that Qualcomm licensed its SEPs at the OEM, rather than chipset level). However, following a forceful intervention by the DOJ, the Court ultimately held that the facts alleged by Continental were not indicative of antitrust injury. It thus dismissed the case.

Likewise, within weeks of the Qualcomm and Continental decisions, the UK Supreme Court also ruled in favor of SEP holders. In its Unwired Planet ruling, the court concluded that discriminatory licenses did not automatically infringe competition law (even though they might breach a firm’s contractual obligations):

[I]t cannot be said that there is any general presumption that differential pricing for licensees is problematic in terms of the public or private interests at stake.

In reaching this conclusion, the UK Supreme Court emphasized that the determination of whether licenses were FRAND, or not, was first and foremost a matter of contract law. In the case at hand, the most important guide to making this determination were the internal rules of the relevant SDO (as opposed to competition case law):

Since price discrimination is the norm as a matter of licensing practice and may promote objectives which the ETSI regime is intended to promote (such as innovation and consumer welfare), it would have required far clearer language in the ETSI FRAND undertaking to indicate an intention to impose the more strict, “hard-edged” non-discrimination obligation for which Huawei contends. Further, in view of the prevalence of competition laws in the major economies around the world, it is to be expected that any anti-competitive effects from differential pricing would be most appropriately addressed by those laws

All of this ultimately led the Court to rule in favor of Unwired Planet, thus dismissing Huawei’s claims that it had infringed competition law by breaching its FRAND pledges. 

In short, courts and antitrust authorities on both sides of the Atlantic have repeatedly, and unambiguously, concluded that pricing disputes (albeit in the specific context of technological standards) are generally a matter of contract law. Antitrust/competition law intercedes only when unfair/excessive/discriminatory prices are both caused by anticompetitive behavior and result in anticompetitive injury.

Apple’s loss is… Apple’s gain

Readers might wonder how the above cases relate to Apple’s App Store. But, on closer inspection, the parallels are numerous. As explained above, courts have repeatedly stressed that antitrust enforcement should not concern itself with the allocation of surplus between commercial partners. Yet that is precisely what Epic Games’ suit against Apple is all about.

Indeed, Epic’s central claim is not that it is somehow foreclosed from Apple’s App Store (for example, because Apple might have agreed to exclusively distribute the games of one of Epic’s rivals). Instead, all of its objections are down to the fact that it would like to access Apple’s store under more favorable terms:

Apple’s conduct denies developers the choice of how best to distribute their apps. Developers are barred from reaching over one billion iOS users unless they go through Apple’s App Store, and on Apple’s terms. […]

Thus, developers are dependent on Apple’s noblesse oblige, as Apple may deny access to the App Store, change the terms of access, or alter the tax it imposes on developers, all in its sole discretion and on the commercially devastating threat of the developer losing access to the entire iOS userbase. […]

By imposing its 30% tax, Apple necessarily forces developers to suffer lower profits, reduce the quantity or quality of their apps, raise prices to consumers, or some combination of the three.

And the parallels with the Qualcomm litigation do not stop there. Epic is effectively asking courts to make Apple monetize its platform at a different level than the one that it chose to maximize its profits (no more monetization at the app store level). Similarly, Epic Games omits any suggestion of profit sacrifice on the part of Apple — even though it is a critical element of most unilateral conduct theories of harm. Finally, Epic is challenging conduct that is both the industry norm and emerged in a highly competitive setting.

In short, all of Epic’s allegations are about monopoly prices, not monopoly maintenance or monopolization. Accordingly, just as the SEP cases discussed above were plainly beyond the outer bounds of antitrust enforcement (something that the DOJ repeatedly stressed with regard to the Qualcomm case), so too is the current wave of antitrust litigation against Apple. When all is said and done, Apple might thus be relieved that Qualcomm was victorious in their antitrust confrontation. Indeed, the legal principles that caused its demise against Qualcomm are precisely the ones that will, likely, enable it to prevail against Epic Games.

This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.

Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs). 

However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections offered to inventors by patent law.

These critics argue that excessive patent protection is holding back western economies. For instance, they posit that the owners of standard-essential patents (“SEPs”) are charging their commercial partners too much for the rights to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities” or “PAEs”) are deterring innovation by small startups by employing “extortionate” litigation tactics.

Unfortunately, this movement has led to a deterioration of appropriate remedies in patent disputes.

The many benefits of patent protection

While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.

By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.

This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.

The shift away from injunctive relief

Of the many recent reforms to patent law, the most significant has arguably been the limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of so-called standard-essential patents (SEPs).

However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.

The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.

Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent litigation suits. 

But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far. 

Other reforms

And injunctions are not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much the same could be said about damages awards, revised fee-shifting standards, and the introduction of Inter Partes Review.

Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably Standard Developing Organizations (SDOs) – to adopt stances that have advanced the interests of technology implementers at the expense of inventors.

For instance, one of the most noteworthy has been the IEEE’s sweeping 2015 revision of its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees.” They also mandated that royalties pertaining to SEPs should be based upon the value of the smallest saleable component that practices the patented technology. Both of these measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.

Concluding remarks

The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.

While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.

This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).

Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here), as well as reaffirming its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e. arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to actually understand why.

To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur. This made its nectar incredibly hard to reach for insects. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid’s nectar. Decades after his death, the discovery of the Xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that, by applying such a classification, we would obtain a graph that places the major platforms roughly as follows:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms control who is allowed on their platforms and how those parties can interact with users. Apple vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).

In the top-right quadrant, the business models of Amazon and Qualcomm are much more “open,” yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc.

Finally, Google Search and Android sit in the bottom-left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open-source license, and Google’s apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open.” While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc.). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement.
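
Purely for illustration, here is a minimal sketch of how such a graph might be drawn; the coordinates are my own rough reading of the descriptions above, not measured data.

```python
# Hypothetical coordinates reflecting the quadrant descriptions above.
import matplotlib.pyplot as plt

platforms = {
    "Apple (iOS/App Store)": (0.15, 0.90),  # closed, highly propertized
    "Microsoft (Windows)":   (0.25, 0.85),
    "Qualcomm (SEPs)":       (0.80, 0.90),  # open to licensees, highly propertized
    "Amazon (marketplace)":  (0.75, 0.80),
    "Google Search":         (0.35, 0.25),  # partially open, weakly propertized
    "Android":               (0.40, 0.20),
}

fig, ax = plt.subplots(figsize=(6, 6))
for name, (x, y) in platforms.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))

ax.axhline(0.5, linestyle="--")  # shared vs. propertized
ax.axvline(0.5, linestyle="--")  # closed vs. open
ax.set_xlabel("Closed  →  Open")
ax.set_ylabel("Shared  →  Propertized")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.show()
```

Note that the bottom-right quadrant (open and shared) is left empty, which is precisely the point taken up below.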

Enforcement

Readers might ask: what is the point of this classification? The answer is that, in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and to open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the Google cases, in the EU, sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions and investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are being made to share them (or, at the very least, to monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems – both in the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. This theme is repeated in the standardization space. There have been innumerable attempts to impose open royalty-free standards. At least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). That pattern is repeated in other highly-standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms, so far, failed to achieve truly meaningful success at consumers’ end of the market?

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically -and perhaps anticompetitively- thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into details over the merits of the first conjecture. Current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked – especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, but this tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms’ ability to propertize their assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open-up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept on using the very business model that the Commission reprimanded. Apple tied the Safari browser to its iPhones, Google went to some length to ensure that Chrome was preloaded on devices, Samsung phones come with Samsung Internet as default. But this has not deterred consumers. A sizable share of them notably opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s MacOS). 

Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the internet browser ballot box imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision. 

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire. 

To conclude, consumers and firms appear to gravitate towards both closed and highly propertized platforms, the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still misunderstood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said this best when he quipped that economists always find a monopoly explanation for things that they fail to understand. The digital economy might just be the latest in this unfortunate trend.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Justin “Gus” Hurwitz, (Associate Professor of Law & Co-director, Space, Cyber, and Telecom Law Program, University of Nebraska; Director of Law & Economics Programs, ICLE).]

I’m a big fan of APM Marketplace, including Molly Wood’s tech coverage. But they tend to slip into advocacy mode—I think without realizing it—when it comes to telecom issues. This was on full display earlier this week in a story on widespread decisions by ISPs to lift data caps during the ongoing COVID-19 crisis (available here, the segment runs from 4:30-7:30). 

As background, all major ISPs have lifted data caps on their Internet service offerings. This is in recognition of the fact that most Americans are spending more time at home right now. During this time, many of us are teleworking and thus making more intensive use of our Internet connections during the day; many have children at home who are using the Internet for both education and entertainment; and we are going out less in the evening, so we are making more use of services like streaming video for evening entertainment. All of these activities require bandwidth—and, like many businesses around the country, ISPs are taking steps (such as eliminating data caps) that will prevent undue consumer harm as we work to cope with COVID-19.

The Marketplace take on data caps

After introducing the segment, Wood and Marketplace host Kai Ryssdal turn to a misinformation and insinuation-laden discussion of telecommunications policy. Wood asserts that one of the ISPs’ “big arguments against net neutrality regulation” was that they “need [data] caps to prevent congestion on networks.” Ryssdal responds by asking, coyly, “so were they just fibbing? I mean … ya know …”

Wood responds that “there have been times when these arguments were very legitimate,” citing the early days of 4G networks. She then asserts that the United States has “some of the most expensive Internet speeds in the developed world” before jumping to the assertion that advocates will now have the “data to say that [data] caps are unnecessary.” She then goes on to argue—and here she loses any pretense of reporter neutrality—that “we are seeing that the Internet really is a utility” and that “frankly, there’s no, uhm, ongoing economic argument for [data caps].” She even notes that we can “hear [her] trying to be professional” in the discussion.

Unpacking that mess

It’s hard to know where to start with the Wood and Ryssdal discussion, such a muddled mess it is. Needless to say, it is unfortunate to see tech reporters doing what tech reporters seem to do best: confusing poor and thinly veiled policy arguments for news.

Let’s start with Wood’s first claim, that ISPs (and, for that matter, others) have long argued that data caps are required to manage congestion and that this has been one of their chief arguments against net neutrality regulations. This is simply not true. 

Consider the 2015 Open Internet Order (OIO)—the net neutrality regulations adopted by the FCC under President Obama. The OIO discusses data caps (“usage allowances”) in paragraphs 151-153. It explains:

The record also reflects differing views over some broadband providers’ practices with respect to usage allowances (also called “data caps”). … Usage allowances may benefit consumers by offering them more choices over a greater range of service options, and, for mobile broadband networks, such plans are the industry norm today, in part reflecting the different capacity issues on mobile networks. Conversely, some commenters have expressed concern that such practices can potentially be used by broadband providers to disadvantage competing over-the-top providers. Given the unresolved debate concerning the benefits and drawbacks of data allowances and usage-based pricing plans,[FN373] we decline to make blanket findings about these practices and will address concerns under the no-unreasonable interference/disadvantage on a case-by-case basis. 

[FN373] Regarding usage-based pricing plans, there is similar disagreement over whether these practices are beneficial or harmful for promoting an open Internet. Compare Bright House Comments at 20 (“Variable pricing can serve as a useful technique for reducing prices for low usage (as Time Warner Cable has done) as well as for fairly apportioning greater costs to the highest users.”) with Public Knowledge Comments at 58 (“Pricing connectivity according to data consumption is like a return to the use of time. Once again, it requires consumers keep meticulous track of what they are doing online. With every new web page, new video, or new app a consumer must consider how close they are to their monthly cap. . . . Inevitably, this type of meter-watching freezes innovation.”), and ICLE & TechFreedom Policy Comments at 32 (“The fact of the matter is that, depending on background conditions, either usage-based pricing or flat-rate pricing could be discriminatory.”). 

The 2017 Restoring Internet Freedom Order (RIFO), which rescinded much of the OIO, offers little discussion of data caps—its approach follows that of the OIO, leaving ISPs free to adopt data cap policies so long as they disclose them. It does, however, note that small ISPs expressed concern, and provided evidence, that fear of lawsuits had forced them to abandon policies like data caps, “which would have benefited its customers by lowering its cost of Internet transport.” (See paragraphs 104 and 249.) The 2010 OIO makes no reference to data caps or usage allowances.

What does this tell us about Wood’s characterization of policy debates about data caps? The only discussion of congestion as a basis for data caps comes in the context of mobile networks. Wood gets this right: data caps have been, and continue to be, important for managing data use on mobile networks. But most people would be hard pressed to argue that these concerns are not still valid: the only people who have not experienced congestion on their mobile devices are those who do not use mobile networks.

But the discussion of data caps on broadband networks has nothing to do with congestion management. The argument against data caps is that they can be used anticompetitively. Cable companies, for instance, could use data caps to harm unaffiliated streaming video providers (that is, Netflix) in order to protect their own video services from competition; or they could exclude preferred services from data caps in order to protect them from competitors.

The argument for data caps, on the other hand, is about the cost of Internet service. Data caps are a way of offering lower-priced service to lower-need users. Or, conversely, they are a way of apportioning the cost of those networks in proportion to the intensity of a given user’s usage. Higher-intensity users are more likely to be Internet enthusiasts; lower-intensity users are more likely to use it for basic tasks, perhaps no more than e-mail or light web browsing. What’s more, if all users faced the same prices regardless of their usage, there would be no marginal cost to incremental usage: users (and content providers) would have no incentive not to use more bandwidth. This does not mean that users would face congestion without data caps—ISPs may, instead, be forced to invest in higher-capacity interconnection agreements. (Importantly, interconnection agreements are often priced in terms of aggregate data transferred, not the speeds of those data transfers—that is, they are written in terms of data caps!—so it is entirely possible that an ISP would need to pay for greater interconnection capacity despite not experiencing any congestion on its network!)

In other words, the economic argument for data caps, recognized by the FCC under both the Obama and Trump administrations, is that they allow more people to connect to the Internet by allowing a lower-priced access tier, and that they keep average prices lower by creating incentives not to consume bandwidth merely because you can. In more technical economic terms, they allow potentially beneficial price discrimination and eliminate a potential moral hazard. Contrary to Wood’s snarky, unprofessional response to Ryssdal’s question, there is emphatically not “no ongoing economic argument” for data caps.
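To put rough numbers on the price-discrimination point, here is a minimal sketch in Python. Every figure in it (the flat rate, the capped tier's base price, the cap, and the overage charge) is a hypothetical assumption chosen for illustration, not an actual ISP tariff.

```python
# Minimal sketch of the price-discrimination argument for data caps.
# All prices and usage figures are hypothetical and purely illustrative.

def flat_plan_price(usage_gb: float) -> float:
    """A single flat-rate tier: every subscriber pays the same regardless of usage."""
    return 70.0  # hypothetical $/month

def capped_plan_price(usage_gb: float, base: float = 50.0,
                      cap_gb: float = 300.0, overage_per_50gb: float = 10.0) -> float:
    """A cheaper capped tier: lower base price, plus charges for usage above the cap."""
    if usage_gb <= cap_gb:
        return base
    extra_blocks = -(-(usage_gb - cap_gb) // 50)  # ceiling division on 50 GB blocks
    return base + extra_blocks * overage_per_50gb

for label, usage in [("light user (email, browsing)", 100), ("heavy user (constant streaming)", 800)]:
    print(f"{label}: flat plan ${flat_plan_price(usage):.0f}, capped tier ${capped_plan_price(usage):.0f}")

# Under the flat plan both users pay $70. Under the capped tier the light user pays $50
# while the heavy user pays $150: usage-sensitive pricing shifts more of the network's
# cost onto the users whose usage drives that cost, and gives everyone a marginal
# incentive not to consume bandwidth merely because they can.
```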

Why lifting data caps during this crisis ain’t no thing

Even if the purpose of data caps were to manage congestion, Wood’s discussion again misses the mark. She argues that the ability to lift caps during the current crisis demonstrates that they are not needed during non-crisis periods. But the usage patterns that we are concerned about facilitating during this period are not normal, and cannot meaningfully be used to make policy decisions relevant to normal periods. 

The reason for this is captured in the below image from a recent Cloudflare discussion of how Internet usage patterns are changing during the crisis:

This image shows US Internet usage as measured by Cloudflare. The red line is the usage on March 13 (the peak is President Trump’s announcement of a state of emergency). The grey lines are the preceding several days of traffic. (The x-axis is UTC time; ET is UTC-4.) Although this image was designed to show the measurable spike in traffic corresponding to the President’s speech, it also shows typical weekday usage patterns. The large “hump” on the left side shows evening hours in the United States. The right side of the graph shows usage throughout the day. (This chart shows nationwide usage trends, which span multiple time zones. If it were to focus on a single time zone, there would be a clear dip between daytime “business” and evening “home” hours, as can be seen here.)

More important, what this chart demonstrates is that the “peak” in usage occurs in the evening, when everyone is at home watching their Netflix. It does not occur during the daytime hours—the hours during which telecommuters are likely to be video conferencing or VPN’ing in to their work networks, or during which students are likely to be doing homework or conferencing into their meetings. And, to the extent that there will be an increase in daytime usage, it will be somewhat offset by (likely significantly) decreased usage due to coming economic lethargy. (For Kai Ryssdal, lethargy is synonymous with recession; for Aaron Sorkin fans, it is synonymous with bagel). 

This illustrates one of the fundamental challenges with pricing access to networks. Networks are designed to carry their peak load. When they are operating below capacity, the marginal cost of additional usage is extremely low; once they exceed that capacity, the marginal cost of additional usage is extremely high. If you price network access based upon average usage, you are going to get excessive usage (congestion) during peak hours; if you price access based upon the peak-hour marginal cost, you are going to get significant deadweight loss (under-use) during non-peak hours.

Data caps are one way to deal with this issue. Since most users making the most intensive use of the network are all doing so at the same time (at peak hour), this incremental cost either discourages this use or provides the revenue necessary to expand capacity to accommodate their use. But data caps do not make sense during non-peak hours, when marginal cost is nearly zero. Indeed, imposing increased costs on users during non-peak hours is regressive. It creates deadweight losses during those hours (and, in principle, also during peak hours: ideally, we would price non-peak-hour usage less than peak-hour usage in order to “shave the peak” (a synonym, I kid you not, for “flatten the curve”)). 
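The peak-load point can be made concrete with a toy model; the capacity and cost figures below are assumptions for illustration only, since what matters is the shape of the cost curve rather than the particular numbers.

```python
# Toy illustration of peak-load pricing on a network.
# Capacity and cost numbers are hypothetical, chosen only to show the shape of the problem.

CAPACITY = 100.0     # units of traffic the network can carry at once
MC_BELOW = 0.01      # near-zero marginal cost per unit while below capacity
MC_AT_PEAK = 5.00    # per-unit cost of relieving congestion / adding capacity at the peak

def marginal_cost(load: float) -> float:
    """Marginal cost of one more unit of traffic at a given load."""
    return MC_BELOW if load < CAPACITY else MC_AT_PEAK

hours = {"3 AM (off-peak)": 20.0, "2 PM (daytime)": 60.0, "9 PM (peak)": 100.0}
for hour, load in hours.items():
    print(f"{hour}: load = {load:>5.0f}, marginal cost per unit = ${marginal_cost(load):.2f}")

# A single price pegged to the *average* marginal cost undercharges the 9 PM user
# (inviting congestion) and overcharges the 3 AM user (deadweight loss while spare
# capacity sits idle). Peak-sensitive pricing, or instruments like data caps that
# mostly bind on heavy peak-hour use, "shaves the peak" instead.
```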

What this all means

During the current crisis, we are seeing a significant increase in usage during non-peak hours. This imposes nearly zero incremental cost on ISPs. Indeed, it is arguably to their benefit to encourage use during this time, to “flatten the curve” of usage in the evening, when networks are, in fact, likely to experience congestion.

But there is a flipside, which we have seen develop over the past few days: how do we manage peak-hour traffic? On Thursday, the EU asked Netflix to reduce the quality of its streaming video in order to avoid congestion. Netflix is the single greatest driver of consumer-focused Internet traffic. And while being able to watch the Great British Bake Off in ultra-high definition 3D HDR 4K may be totally awesome, its value pales in comparison to keeping the American economy functioning.

Wood suggests that ISPs’ decision to lift data caps is of relevance to the network neutrality debate. It isn’t. But the impact of Netflix traffic on competing applications may be. The net neutrality debate created unmitigated hysteria about prioritizing traffic on the Internet. Many ISPs have said outright that they won’t even consider investing in prioritization technologies because of the uncertainty around the regulatory treatment of such technologies. But such technologies clearly have uses today. Video conferencing and Voice over IP protocols should be prioritized over streaming video. Packets to and from government, healthcare, university, and other educational institutions should be prioritized over Netflix traffic. It is hard to take seriously anyone who would disagree with this proposition. Yet the net neutrality debate almost entirely foreclosed development of these technologies. While they may exist, they are not in widespread deployment, and are not familiar to consumers or consumer-facing network engineers.

To the very limited extent that data caps are relevant to net neutrality policy, it is about ensuring that millions of people binge watching Bojack Horseman (seriously, don’t do it!) don’t interfere with children Skyping with their grandparents, a professor giving a lecture to her class, or a sales manager coordinating with his team to try to keep the supply chain moving.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer, (Senior Fellow of Law & Economics, International Center for Law & Economics).]

Republican Senator Josh Hawley infamously argued that Big Tech is overrated. In his words:

My biggest critique of big tech is: what big innovation have they really given us? What is it now that in the last 15, 20 years that people who say they are the brightest minds in the country have given this country? What are their great innovations?

To Senator Hawley these questions seemed rhetorical. Big Tech’s innovations were trivial gadgets: “autoplay” and “snap streaks”, to quote him once more.

But, as any Monty Python connoisseur will tell you, rhetorical questions have a way of being … not so rhetorical. In one of Python’s most famous jokes, members of the “People’s Front of Judea” ask “what have the Romans ever done for us”? To their own surprise, the answer turns out to be a great deal:

This post is the first in a series examining some of the many ways in which Big Tech is making Coronavirus-related lockdowns and social distancing more bearable, and how Big Tech is enabling our economies to continue functioning (albeit at a severely reduced pace) throughout the outbreak. 

Although Big Tech’s contributions are just a small part of a much wider battle, they suggest that the world is drastically better situated to deal with COVID-19 than it would have been twenty years ago – and this is in no small part thanks to Big Tech’s numerous innovations.

Of course, some will say that the world would be even better equipped to handle COVID-19 if Big Tech had only been subject to more (or less) regulation. Whether or not these critiques are correct, they are not the point of this post. For many, like Senator Hawley, it is apparently undeniable that tech does more harm than good. But, as this post suggests, that is surely not the case. And before we decide whether and how we want to regulate it in the future, we should be particularly mindful of which aspects of “Big Tech” seem especially suited to dealing with the current crisis, and ensure that we don’t adopt regulations that thoughtlessly undermine them.

1. Priceless information 

One of the most important ways in which Big Tech firms have supported international efforts to fight COVID-19 has been their role as information intermediaries.

As the title of a New York Times article put it:

When Facebook Is More Trustworthy Than the President: Social media companies are delivering reliable information in the coronavirus crisis. Why can’t they do that all the time?

The author is at least correct on the first part. Big Tech has become a cornucopia of reliable information about the virus:

  • Big Tech firms are partnering with the White House and other agencies to analyze massive COVID-19 datasets in order to help discover novel answers to questions about transmission, medical care, and other interventions. This partnership is possible thanks to the massive investments in AI infrastructure that the leading tech firms have made. 
  • Google Scholar has partnered with renowned medical journals (as well as public authorities) to guide citizens towards cutting-edge scholarship relating to COVID-19. This is a transformative resource in a world of lockdowns and overburdened healthcare providers.
  • Google has added a number of features to its main search engine – such as a “Coronavirus Knowledge Panel” and SOS alerts – in order to help users deal with the spread of the virus.
  • On Twitter, information and insights about COVID-19 compete in the market for ideas. Numerous news outlets have published lists of recommended people to follow (Fortune, Forbes). 

    Furthermore – to curb some of the unwanted effects of an unrestrained market for ideas – Twitter (and most other digital platforms) links to the websites of public authorities when users search for COVID-related hashtags.
  • This flow of information is a two-way street: Twitter, Facebook and Reddit, among others, enable citizens and experts to weigh in on the right policy approach to COVID-19. 

    Though the results are sometimes far from perfect, these exchanges may prove invaluable in critical times where usual methods of policy-making (such as hearings and conferences) are mostly off the table.
  • Perhaps most importantly, the Internet is a precious source of knowledge about how to deal with an emerging virus, as well as life under lockdown. We often take for granted how much our lives benefit from extreme specialization and the exchanges it depends on. Those exchanges are severely restricted under lockdown conditions. Luckily, with the internet and modern search engines (pioneered by Google), most of the world’s information is but a click away.

    For example, Facebook Groups have been employed by users of the social media platform in order to better coordinate necessary activity among community members — like giving blood — while still engaging in social distancing.

In short, search engines and social networks have been beacons of information regarding COVID-19. Their mostly bottom-up approach to knowledge generation (i.e. popular topics emerge organically) is essential in a world of extreme uncertainty. This has ultimately enabled these players to stay ahead of the curve in bringing valuable information to citizens around the world.

2. Social interactions

This is probably the most obvious way in which Big Tech is making life under lockdown more bearable for everyone. 

  • In Italy, WhatsApp messages and calls jumped by 20% following the outbreak of COVID-19. And Microsoft claims that the use of Skype jumped by 100%.
  • Younger users are turning to social networks, like TikTok, to deal with the harsh realities of the pandemic.
  • Strangers are using Facebook groups to support each other through difficult times.
  • And institutions, like the WHO, are piggybacking on this popularity to further raise awareness about COVID-19 via social media. 
  • In South Africa, health authorities even created a WhatsApp contact to answer users’ questions about the virus.
  • Most importantly, social media is a godsend for senior citizens and anyone else who may have to live in almost total isolation for the foreseeable future. For instance, nursing homes are putting communications apps, like Skype and WhatsApp, in the hands of their patients, to keep up their morale (here and here).

And with the economic effects of COVID-19 starting to gather speed, users will more than ever be grateful to receive these services free of charge. Sharing data – often very limited amounts – with a platform is an insignificant price to pay in times of economic hardship. 

3. Working & Learning

It will also be impossible to effectively fight COVID-19 if we cannot keep the economy afloat. Stock markets have already plunged by record amounts. Surely, these losses would be unfathomably worse if many of us were not lucky enough to be able to work from the safety of our own homes. And for those individuals who are unable to work from home, their exposure is dramatically reduced because a significant proportion of the population can stay out of public spaces.

Once again, we largely have Big Tech to thank for this. 

  • Downloads of Microsoft Teams and Zoom are surging on both Google and Apple’s app stores. This is hardly surprising. With much of the workforce staying at home, these video-conference applications have become essential. The increased load generated by people working online might even have caused Microsoft Teams to crash in Europe.
  • According to Microsoft, the number of Microsoft Teams meetings increased by 500 percent in China.
  • Sensing that the current crisis may last for a while, some firms have also started to conduct job interviews online; popular apps for doing so include Skype, Zoom and WhatsApp.
  • Slack has also seen a surge in usage, as firms set themselves up to work remotely. It has started offering free training, to help firms move online.
  • Along similar lines, Google recently announced that its G Suite of office applications – which enables users to share and work on documents online – had passed 2 billion users.
  • Some tech firms (including Google, Microsoft and Zoom) have gone a step further and started giving away some of their enterprise productivity software, in order to help businesses move their workflows online.

And Big Tech is also helping universities, schools and parents to continue providing coursework and lectures to their students/children.

  • Zoom and Microsoft Teams have been popular choices for online learning. To facilitate the transition to online learning, Zoom has notably lifted time limits relating to the free version of its app (for schools in the most affected areas).
  • Even in the US, where the virus outbreak is currently smaller than in Europe, thousands of students are already being taught online.
  • Much of the online learning being conducted for primary school children is being done with affordable Chromebooks. And some of these Chromebooks are distributed to underserved schools through grant programs administered by Google.
  • Moreover, at the time of writing, most of the best selling books on Amazon.com are pre-school learning books:

Finally, the advent of online storage services, such as Dropbox and Google Drive, has largely alleviated the need for physical copies of files. In turn, this enables employees to remotely access all the files they need to stay productive. While this may be convenient under normal circumstances, it becomes critical when retrieving a binder in the office is no longer an option.

4. So what has Big Tech ever done for us?

With millions of families around the world currently under forced lockdown, it is becoming increasingly evident that Big Tech’s innovations are anything but trivial. Innovations that seemed like convenient tools only a couple of days ago are now becoming essential parts of our daily lives (or, at least, we are finally realizing how powerful they truly are).

The fight against COVID-19 will be hard. We can at least be thankful that we have Big Tech by our side. Paraphrasing the Monty Python crew: 

Q: What has Big Tech ever done for us? 

A: Abundant, free, and easily accessible information. Precious social interactions. Online working and learning.

Q: But apart from information, social interactions, and online working (and learning); what has Big Tech ever done for us?

For the answer to this question, I invite you to stay tuned for the next post in this series.

On Monday evening, around 6:00 PM Eastern Standard Time, news leaked that the United States District Court for the Southern District of New York had decided to allow the T-Mobile/Sprint merger to go through, giving the companies a victory over a group of state attorneys general trying to block the deal.

Thomas Philippon, a professor of finance at NYU, used this opportunity to conduct a quick-and-dirty event study on Twitter:

Short thread on T-Mobile/Sprint merger. There were 2 theories:

(A) It’s a 4-to-3 merger that will lower competition and increase markups.

(B) The new merged entity will be able to take on the industry leaders AT&T and Verizon.

(A) and (B) make clear predictions. (A) predicts the merger is good news for AT&T and Verizon’s shareholders. (B) predicts the merger is bad news for AT&T and Verizon’s shareholders. The news leaked at 6pm that the judge would approve the merger. Sprint went up 60% as expected. Let’s test the theories. 

Here is Verizon’s after trading price: Up 2.5%.

Here is ATT after hours: Up 2%.

Conclusion 1: Theory B is bogus, and the merger is a transfer of at least 2%*$280B (AT&T) + 2.5%*$240B (Verizon) = $11.6 billion from the pockets of consumers to the pockets of shareholders. 

Conclusion 2: I and others have argued for a long time that theory B was bogus; this was anticipated. But lobbying is very effective indeed… 

Conclusion 3: US consumers already pay two or three times more than those of other rich countries for their cell phone plans. The gap will only increase.

And just a reminder: these firms invest 0% of the excess profits. 

Philippon published his thread about 40 minutes prior to markets opening for regular trading on Tuesday morning. The Court’s official decision was published shortly before markets opened as well. By the time regular trading began at 9:30 AM, Verizon had completely reversed its overnight increase and opened down from the previous day’s close. While AT&T opened up slightly, it too had given back most of its initial gains. By 11:00 AM, AT&T was also in the red. When markets closed at 4:00 PM on Tuesday, Verizon was down more than 2.5 percent and AT&T was down just under 0.5 percent.

Does this mean that, in fact, theory A is the “bogus” one? Was the T-Mobile/Sprint merger decision actually a transfer of “$7.4 billion from the pockets of shareholders to the pockets of consumers,” as I suggested in my own tongue-in-cheek thread later that day? In this post, I will look at the factors that go into conducting a proper event study.  
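Before turning to those factors, it is worth seeing where the dollar figures in both threads come from: each is simply an approximate market capitalization multiplied by an observed price change, summed across the two incumbents. The sketch below redoes that back-of-the-envelope arithmetic using the rough market caps cited in Philippon's thread; it is a sanity check on the arithmetic, not a formal event study.

```python
# Back-of-the-envelope "implied transfer" arithmetic behind both Twitter threads.
# Market caps are the approximate figures used in Philippon's thread, in billions of dollars.
market_cap = {"AT&T": 280.0, "Verizon": 240.0}

def implied_transfer(price_changes: dict) -> float:
    """Sum of market cap times observed price change across the incumbent carriers."""
    return sum(market_cap[firm] * change for firm, change in price_changes.items())

# After-hours moves cited by Philippon on Monday night.
monday_after_hours = {"AT&T": 0.02, "Verizon": 0.025}
# Approximate moves by Tuesday's close (AT&T down just under 0.5%, Verizon down ~2.5%).
tuesday_close = {"AT&T": -0.005, "Verizon": -0.025}

print(f"Implied transfer, Monday after hours: {implied_transfer(monday_after_hours):+.1f}B")  # +11.6B
print(f"Implied transfer, Tuesday close:      {implied_transfer(tuesday_close):+.1f}B")       # -7.4B

# The same formula, applied a few hours apart, flips sign -- which is why the choice of
# event window (and the liquidity of the market during that window) matters so much.
```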

What’s the appropriate window for a merger event study?

In a response to my thread, Philippon said, “I would argue that an event study is best done at the time of the event, not 16 hours after. Leak of merger approval 6 pm Monday. AT&T up 2 percent immediately. AT&T still up at open Tuesday. Then comes down at 10am.” I don’t disagree that “an event study is best done at the time of the event.” In this case, however, we need to consider two important details: When was the “event” exactly, and what were the conditions in the financial markets at that time?

This event did not begin and end with the leak on Monday night. The official announcement came Tuesday morning when the full text of the decision was published. This additional information answered a few questions for market participants: 

  • Were the initial news reports true?
  • Based on the text of the decision, what is the likelihood it gets reversed on appeal?
    • Wall Street: “Not all analysts are convinced this story is over just yet. In a note released immediately after the judge’s verdict, Nomura analyst Jeff Kvaal warned that ‘we expect the state AGs to appeal.’ RBC Capital analyst Jonathan Atkin noted that such an appeal, if filed, could delay closing of the merger by ‘an additional 4-5’ months — potentially delaying closure until September 2020.”
  • Did the Court impose any further remedies or conditions on the merger?

As stock traders digested all the information from the decision, Verizon and AT&T quickly went negative. There is much debate in the academic literature about the appropriate window for event studies on mergers. But the range in question is always one of days or weeks — not a couple of hours in the after hours market. A recent paper using the event study methodology analyzed roughly 5,000 mergers and found abnormal returns of about positive one percent for competitors in the relevant market following a merger announcement. Notably for our purposes, this small abnormal return builds in the first few days following a merger announcement and persists for up to 30 days, as shown in the chart below:

As with the other studies the paper cites in its literature review, this particular research design included a window of multiple weeks both before and after the event occurred. When analyzing the T-Mobile/Sprint merger decision, we should similarly expand the window beyond just a few hours of after hours trading.
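For readers unfamiliar with the methodology, the sketch below shows the basic mechanics of a market-model event study computed over a multi-day window. The alpha, beta, and daily returns are hypothetical placeholders, not estimates for AT&T or Verizon.

```python
# Minimal market-model event-study sketch. Each day's abnormal return is the stock's
# return minus what its estimated relationship with the market predicts; the cumulative
# abnormal return (CAR) sums those over the chosen event window.
# Alpha, beta, and the daily returns below are hypothetical placeholders.

alpha, beta = 0.0, 0.9  # market-model parameters, normally estimated over a pre-event period

stock_returns  = [0.020, -0.028, -0.004, 0.003, -0.001]   # day 0 .. day 4 around the event
market_returns = [0.004,  0.000,  0.002, 0.001,  0.003]   # broad-market returns, same days

abnormal = [r - (alpha + beta * rm) for r, rm in zip(stock_returns, market_returns)]
car = sum(abnormal)

for day, ar in enumerate(abnormal):
    print(f"day {day}: abnormal return = {ar:+.3%}")
print(f"CAR over the window = {car:+.3%}")

# A single overnight, after-hours move is one noisy observation; the multi-day (or
# multi-week) windows used in the literature exist precisely to let that noise wash out.
```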

How liquid is the after hours market?

More important than the length of the window, however, is the relative liquidity of the market during that time. The after hours market is much thinner than the regular hours market and may not reflect all available information. For some rough numbers, let’s look at data from NASDAQ. For the last five after hours trading sessions, total volume was between 80 and 100 million shares. Let’s call it 90 million on average. By contrast, the total volume for the last five regular trading hours sessions was between 2 and 2.5 billion shares. Let’s call it 2.25 billion on average. So, the regular trading hours have roughly 25 times as much liquidity as the after hours market.

We could also look at relative liquidity for a single company as opposed to the total market. On Wednesday during regular hours (data is only available for the most recent day), 22.49 million shares of Verizon stock were traded. In after hours trading that same day, fewer than a million shares changed hands. You could change some assumptions and account for other differences between the after hours market and the regular market when analyzing the data above. But the conclusion remains the same: the regular market is at least an order of magnitude more liquid than the after hours market. This is incredibly important to keep in mind as we compare the after hours price changes (as reported by Philippon) to the price changes during regular trading hours.

What are Wall Street analysts saying about the decision?

To understand the fundamentals behind these stock moves, it’s useful to see what Wall Street analysts are saying about the merger decision. Prior to the ruling, analysts were already worried about Verizon’s ability to compete with the combined T-Mobile/Sprint entity in the short- and medium-term:

Last week analysts at LightShed Partners wrote that if Verizon wins most of the first available tranche of C-band spectrum, it could deploy 60 MHz in 2022 and see capacity and speed benefits starting in 2023.

“With that timeline, C-Band still does not answer the questions of what spectrum Verizon will be using for the next three years,” wrote LightShed’s Walter Piecyk and Joe Galone at the time.

Following the news of the decision, analysts were clear in delivering their own verdict on how the decision would affect Verizon:

“Verizon looks to us to be a net loser here,” wrote the MoffettNathanson team led by Craig Moffett.

…  

“Approval of the T-Mobile/Sprint deal takes not just one but two spectrum options off the table,” wrote Moffett. “Sprint is now not a seller of 2.5 GHz spectrum, and Dish is not a seller of AWS-4. More than ever, Verizon must now bet on C-band.”

LightShed also pegged Tuesday’s merger ruling as a negative for Verizon.

“It’s not great news for Verizon, given that it removes Sprint and Dish’s spectrum as an alternative, created a new competitor in Dish, and has empowered T-Mobile with the tools to deliver a superior network experience to consumers,” wrote LightShed.

In a note following news reports that the court would side with T-Mobile and Sprint, New Street analyst Johnathan Chaplin wrote, “T-Mobile will be far more disruptive once they have access to Sprint’s spectrum than they have been until now.”

However, analysts were more sanguine about AT&T’s prospects:

AT&T, though, has been busy deploying additional spectrum, both as part of its FirstNet build and to support 5G rollouts. This has seen AT&T increase its amount of deployed spectrum by almost 60%, according to Moffett, which takes “some of the pressure off to respond to New T-Mobile.”

Still, while AT&T may be in a better position on the spectrum front compared to Verizon, it faces the “same competitive dynamics,” Moffett wrote. “For AT&T, the deal is probably a net neutral.”

The quantitative evidence from the stock market seems to agree with the qualitative analysis from the Wall Street research firms. Let’s look at the five-day window of trading from Monday morning to Friday (today). Unsurprisingly, Sprint, T-Mobile, and Dish have reacted very favorably to the news:

Consistent with the Wall Street analysis, Verizon stock remains down 2.5 percent over a five-day window while AT&T has been flat over the same period:

How do you separate beta from alpha in an event study?

Philippon argued that after market trading may be more efficient because it is dominated by hedge funds and includes less “noise trading.” In my opinion, the liquidity effect likely outweighs this factor. Also, it’s unclear why we should assume “smart money” is setting the price in the after hours market but not during regular trading when hedge funds are still active. Sophisticated professional traders often make easy profits by picking off panicked retail investors who only read the headlines. When you see a wild swing in the markets that moderates over time, the wild swing is probably the noise and the moderation is probably the signal.

And, as Karl Smith noted, since the aftermarket is thin, price moves in individual stocks might reflect changes in the broader stock market (“beta”) more than changes due to new company-specific information (“alpha”). Here are the last five days for e-mini S&P 500 futures, which track the broader market and are traded after hours:

The market trended up on Monday night and was flat on Tuesday. This slightly positive macro environment means we would need to adjust the returns for AT&T and Verizon downward. Of course, this is counter to Philippon’s conjecture that the merger decision would increase their stock prices. But to be clear, these changes are so minuscule in percentage terms that the adjustment wouldn’t make much of a difference in this case.
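As a rough illustration of that beta adjustment (the beta value and the overnight futures move here are hypothetical, used only to show the direction and rough size of the correction):

```python
# Hypothetical illustration of stripping the market ("beta") component out of an
# after-hours move to isolate the company-specific ("alpha") part.
raw_move    = 0.020   # e.g., a stock up 2% after hours
market_move = 0.003   # e.g., index futures up 0.3% over the same stretch (hypothetical)
beta        = 0.7     # hypothetical sensitivity of the stock to the broad market

alpha_component = raw_move - beta * market_move
print(f"Company-specific component ~ {alpha_component:.3%}")  # ~1.790%, just below the raw 2%
```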

Lastly, let’s see what we can learn from a similar historical episode in the stock market.

The parallel to the 2016 presidential election

The type of reversal we saw in AT&T and Verizon is not unprecedented. Some commenters said the pattern reminded them of the market reaction to Trump’s election in 2016:

Much like the T-Mobile/Sprint merger news, the “event” in 2016 was not a single moment in time. It began around 9 PM Tuesday night when Trump started to overperform in early state results. Over the course of the next three hours, S&P 500 futures contracts fell about 5 percent — an enormous drop in such a short period of time. If Philippon had tried to estimate the “Trump effect” in the same manner he did the T-Mobile/Sprint case, he would have concluded that a Trump presidency would reduce aggregate future profits by about 5 percent relative to a Clinton presidency.

But, as you can see in the chart above, if we widen the aperture of the event study to include the hours past midnight, the story flips. Markets started to bounce back even before Trump took the stage to make his victory speech. The themes of his speech were widely regarded as reassuring for markets, which further pared losses from earlier in the night. When regular trading hours resumed on Wednesday, the markets decided a Trump presidency would be very good for certain sectors of the economy, particularly finance, energy, biotech, and private prisons. By the end of the day, the stock market finished up about a percentage point from where it closed prior to the election — near all time highs.

Maybe this is more noise than signal?

As a few others pointed out, these relatively small moves in AT&T and Verizon (less than 3 percent in either direction) may just be noise. That’s certainly possible given the magnitude of the changes. Contra Philippon, I think the methodology in question is too weak to rule out the pro-competitive theory of the case, i.e., that the new merged entity would be a stronger competitor to take on industry leaders AT&T and Verizon. We need much more robust and varied evidence before we can call anything “bogus.” Of course, that means this event study is not sufficient to prove the pro-competitive theory of the case, either.

Olivier Blanchard, a former chief economist of the IMF, shared Philippon’s thread on Twitter and added this comment above: “The beauty of the argument. Simple hypothesis, simple test, clear conclusion.”

If only things were so simple.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics ); and Kristian Stout (Associate Director, ICLE).]

As many in the symposium have noted — and as was repeatedly noted during the FTC’s Hearings on Competition and Consumer Protection in the 21st Century — there is widespread dissatisfaction with the 1984 Non-Horizontal Merger Guidelines.

Although it is doubtless correct that the 1984 guidelines don’t reflect the latest economic knowledge, it is by no means clear that this has actually been a problem — or that a new set of guidelines wouldn’t create even greater problems. Indeed, as others have noted in this symposium, there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion or, more troublingly, to courts taking speculative theories of harm seriously.

We can do little better in expressing our reservations about whether new guidelines are needed than did the current Chairman of the FTC, Joe Simons, writing on this very blog in a symposium on what became the 2010 Horizontal Merger Guidelines. In a post entitled Revisions to the Merger Guidelines: Above All, Do No Harm, Simons writes:

My sense is that there is no need to revise the DOJ/FTC Horizontal Merger Guidelines, with one exception…. The current guidelines lay out the general framework quite well and any change in language relative to that framework are likely to create more confusion rather than less. Based on my own experience, the business community has had a good sense of how the agencies conduct merger analysis…. If, however, the current administration intends to materially change the way merger analysis is conducted at the agencies, then perhaps greater revision makes more sense. But even then, perhaps the best approach is to try out some of the contemplated changes (i.e. in actual investigations) and publicize them in speeches and the like before memorializing them in a document that is likely to have some substantial permanence to it.

Wise words. Unless, of course, “the current [FTC] intends to materially change the way [vertical] merger analysis is conducted.” But the draft guidelines don’t really appear to portend a substantial change, and in several ways they pretty accurately reflect agency practice.

What we want to draw attention to, however, is an implicit underpinning of the draft guidelines that we believe the agencies should clearly disavow (or at least explain more clearly the complexity surrounding it): the extent and implications of the presumed functional equivalence of vertical integration by contract and by merger — the contract/merger equivalency assumption.

Vertical mergers and their discontents

The contract/merger equivalency assumption has been gaining traction with antitrust scholars, but it is perhaps most clearly represented in some of Steve Salop’s work. Salop generally believes that vertical merger enforcement should be heightened. Among his criticisms of current enforcement is his contention that efficiencies that can be realized by merger can often also be achieved by contract. As he discussed during his keynote presentation at last year’s FTC hearing on vertical mergers:

And, finally, the key policy issue is the issue is not about whether or not there are efficiencies; the issue is whether the efficiencies are merger-specific. As I pointed out before, Coase stressed that you can get vertical integration by contract. Very often, you can achieve the vertical efficiencies if they occur, but with contracts rather than having to merge.

And later, in the discussion following his talk:

If there is vertical integration by contract… it meant you could get all the efficiencies from vertical integration with a contract. You did not actually need the vertical integration. 

Salop thus argues that because the existence of a “contract solution” to firm problems can often generate the same sorts of efficiencies as when firms opt to merge, enforcers and courts should generally adopt a presumption against vertical mergers relative to contracting:

Coase’s door swings both ways: Efficiencies often can be achieved by vertical contracts, without the potential anticompetitive harms from merger

In that vertical restraints are characterized as “just” vertical integration “by contract,” then claimed efficiencies in problematical mergers might be achieved with non-merger contracts that do not raise the same anticompetitive concerns. (emphasis in original)

(Salop isn’t alone in drawing such a conclusion, of course; Carl Shapiro, for example, has made a similar point (as have others)).

In our next post we explore the policy errors implicated by this contract/merger equivalency assumption. But here we want to consider whether it makes logical sense in the first place.

The logic of vertical integration is not commutative 

It is true that, where contracts are observed, they are likely as efficient as (or, indeed, more efficient than) merger. But, by the same token, it is also true that where mergers are observed, they are likely more efficient than contracts. Indeed, the entire reason for integration is efficiency relative to what could be done by contract — this is the essence of the so-called “make-or-buy” decision.

For example, a firm that decides to buy its own warehouse has determined that doing so is more efficient than renting warehouse space. Some of these efficiencies can be measured and quantified (e.g., carrying costs of ownership vs. the cost of rent), but many efficiencies cannot be easily measured or quantified (e.g., layout of the facility or site security). Under the contract/merger equivalency assumption, the benefits of owning a warehouse can be achieved “very often” by renting warehouse space. But the fact that many firms using warehouses own some space and rent some space indicates that the make-or-buy decision is often unique to each firm’s idiosyncratic situation. Moreover, the distinctions driving those differences will not always be readily apparent, and whether contracting or integrating is preferable in any given situation may not be inferred from the existence of one or the other elsewhere in the market — or even in the same firm!

There is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. The two are, quite simply, different bargaining environments, each with a different risk and cost allocation; accounting treatment; effect on employees, customers, and investors; tax consequence, etc. Even if the parties accomplished nominally “identical” outcomes, they would not, in fact, be identical.

Meanwhile, what if the reason for failure to contract, or the reason to prefer merger, has nothing to do with efficiency? What if there were no anticompetitive aim but there were a tax advantage? What if one of the parties just wanted a larger firm in order to satisfy the CEO’s ego? That these are not cognizable efficiencies under antitrust law is clear. But the adoption of a presumption of equivalence between contract and merger would — ironically — entail their incorporation into antitrust law just the same, by virtue of their effective prohibition under antitrust law.

In other words, if the assumption is that contract and merger are equally efficient unless proven otherwise, but the law adopts a suspicion (or, even worse, a presumption) that vertical mergers are anticompetitive which can be rebutted only with highly burdensome evidence of net efficiency gain, this effectively deputizes antitrust law to enforce a preconceived notion of “merger appropriateness” that does not necessarily turn on efficiencies. There may (or may not) be sensible policy reasons for adopting such a stance, but they aren’t antitrust reasons.

More fundamentally, however, while there are surely some situations in which contractual restraints might be able to achieve similar organizational and efficiency gains as a merger, the practical realities of achieving not just greater efficiency, but a whole host of non-efficiency-related, yet nonetheless valid, goals are rarely equivalent between the two.

It may be that the parties don’t know what they don’t know to such an extent that a contract would be too costly because it would be too incomplete, for example. But incomplete contracts and ambiguous control and ownership rights aren’t (as much of) an issue on an ongoing basis after a merger. 

As noted, there is no basis for assuming that the structure of a merger and a contract would be identical. In the same way, there is no basis for assuming that the knowledge transfer that would result from a merger would be the same as that which would result from a contract — and in ways that the parties could even specify or reliably calculate in advance. Knowing that the prospect for knowledge “synergies” would be higher with a merger than a contract might be sufficient to induce the merger outcome. But asked to provide evidence that the parties could not engage in the same conduct via contract, the parties would be unable to do so. The consequence, then, would be the loss of potential gains from closer integration.

At the same time, the cavalier assumption that parties would be able — legally — to enter into an analogous contract in lieu of a merger is problematic, given that it would likely be precisely the form of contract (foreclosing downstream or upstream access) that is alleged to create problems with the merger in the first place.

At the FTC hearings last year, Francine LaFontaine highlighted this exact concern:

I want to reemphasize that there are also rules against vertical restraints in antitrust laws, and so to say that the firms could achieve the mergers outcome by using vertical restraints is kind of putting them in a circular motion where we are telling them you cannot merge because you could do it by contract, and then we say, but these contract terms are not acceptable.

Indeed, legal risk is one of the reasons why a merger might be preferable to a contract, and because the relevant markets here are oligopoly markets, the possibility of impermissible vertical restraints between large firms with significant market share is quite real.

More important, the assumptions underlying the contention that contracts and mergers are functionally equivalent legal devices fail to appreciate the importance of varied institutional environments. Consider that one reason some takeovers are hostile is because incumbent managers don’t want to merge, and often believe that they are running a company as well as it can be run — that a change of corporate control would not improve efficiency. The same presumptions may also underlie refusals to contract and, even more likely, may explain why, to the other firm, a contract would be ineffective.

But, while there is no way to contract without bilateral agreement, there is a corporate control mechanism to force a takeover. In this institutional environment a merger may be easier to realize than a contract (and that applies even to a consensual merger, of course, given the hostile outside option). In this case, again, the assumption that contract should be the relevant baseline and the preferred mechanism for coordination is misplaced — even if other firms in the industry are successfully accomplishing the same thing via contract, and even if a contract would be more “efficient” in the abstract.

Conclusion

Properly understood, the choice of whether to contract or merge derives from a host of complicated factors, many of which are difficult to observe and/or quantify. The contract/merger equivalency assumption — and the species of “least-restrictive alternative” reasoning that would demand onerous efficiency arguments to permit a merger when a contract was notionally possible — too readily glosses over these complications and unjustifiably embraces a relative hostility to vertical mergers at odds with both theory and evidence.

Rather, as has long been broadly recognized, there can be no legally relevant presumption drawn against a company when it chooses one method of vertical integration over another in the general case. The agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

The case against AT&T began in 1974. The government alleged that AT&T had monopolized the market for local and long-distance telephone service as well as telephone equipment. In 1982, the company entered into a consent decree to be broken up into eight pieces (the “Baby Bells” plus the parent company), which was completed in 1984. As a remedy, the government required the company to divest its local operating companies and guarantee equal access to all long-distance and information service providers (ISPs).

Source: Mohanram & Nanda

As the chart above shows, the divestiture broke up AT&T’s national monopoly into seven regional monopolies. In general, modern antitrust analysis focuses on the local product market (because that’s the relevant level for consumer decisions). In hindsight, how did breaking up a national monopoly into seven regional monopolies increase consumer choice? It’s also important to note that, prior to its structural breakup, AT&T was a government-granted monopoly regulated by the FCC. Any antitrust remedy should be analyzed in light of the company’s unique relationship with regulators.

Breaking up one national monopoly into seven regional monopolies is not an effective way to boost innovation. And there are economies of scale and network effects to be gained by owning a national network to serve a national market. In the case of AT&T, those economic incentives are why the Baby Bells forged themselves back together in the decades following the breakup.

Source: WSJ

As Clifford Winston and Robert Crandall noted:

Appearing to put Ma Bell back together again may embarrass the trustbusters, but it should not concern American consumers who, in two decades since the breakup, are overwhelmed with competitive options to provide whatever communications services they desire.

Moreover, according to Crandall & Winston (2003), the lower prices following the breakup of AT&T weren’t due to the structural remedy at all (emphasis added):

But on closer examination, the rise in competition and lower long-distance prices are attributable to just one aspect of the 1982 decree; specifically, a requirement that the Bell companies modify their switching facilities to provide equal access to all long-distance carriers. The Federal Communications Commission (FCC) could have promulgated such a requirement without the intervention of the antitrust authorities. For example, the Canadian regulatory commission imposed equal access on its vertically integrated carriers, including Bell Canada, in 1993. As a result, long-distance competition developed much more rapidly in Canada than it had in the United States (Crandall and Hazlett, 2001). The FCC, however, was trying to block MCI from competing in ordinary long-distance services when the AT&T case was filed by the Department of Justice in 1974. In contrast to Canadian and more recent European experience, a lengthy antitrust battle and a disruptive vertical dissolution were required in the U.S. market to offset the FCC’s anti-competitive policies. Thus, antitrust policy did not triumph in this case over restrictive practices by a monopolist to block competition, but instead it overcame anticompetitive policies by a federal regulatory agency.

A quick look at the data on telephone service in the US, EU, and Canada shows that the latter two were able to achieve similar reductions in price without breaking up their national providers.

Source: Crandall & Jackson (2011)

The paradigm shift from wireline to wireless

The technological revolution spurred by the transition from wireline telephone service to wireless telephone service shook up the telecommunications industry in the 1990s. The rapid change caught even some of the smartest players by surprise. In 1980, the management consulting firm McKinsey and Co. produced a report for AT&T predicting how large the cellular market might become by the year 2000. Their forecast said that 900,000 cell phones would be in use. The actual number was more than 109 million.

Along with the rise of broadband, the transition to wireless technology led to an explosion in investment. In contrast, the breakup of AT&T in 1984 had no discernible effect on the trend in industry investment:

The lesson for antitrust enforcers is clear: breaking up national monopolies into regional monopolies is no remedy. In certain cases, mandating equal access to critical networks may be warranted. Most of all, technology shocks will upend industries in ways that regulators — and dominant incumbents — fail to predict.